Optimizing point cloud production from stereo photos by tuning the block matcher

In this last post in my series on using a homemade stereo camera to produce 3d point clouds, I’ll show you how to tune your setup to get optimal results. I’ll also show you where your passive stereo camera has the best chances of producing good point clouds. With these steps, you should be able to produce and interactively view colored 3d point clouds, and to generate them in your own Python programs so that you can manipulate them further.


If you haven’t done it yet, get ready to go. Here are the previous steps:

Once this is finished, you can start producing 3d point clouds immediately, but you might be unsatisfied with the results. You can get better results by tuning your block matching algorithm to produce better disparity maps, which are the prerequisite for the point cloud you want at the end.

Tuning the block matcher to optimize results

Passive stereo vision works by looking at two pictures of the same objects and searching for matching points that can be found in both images. I say passive rather than active stereo vision because active stereo vision projects a pattern onto the scene and then computes how that pattern is deformed by the real objects to create a 3d model. That method is a bit more robust, but it’s not what we’re working with, so we’ll ignore it for now.

Passive stereo vision works best if there are lots of features that you can easily tell apart. If the features in your images are too homogeneous, so that it’s hard to say where they match and where they don’t, the algorithm won’t work well. You can notice the same phenomenon with your own eyes: if you hold a blank sheet of paper in front of your face and try to judge its distance while moving it back and forth, it’s difficult unless you use the size of the page or the position of your body as a reference. Unless you teach your computer to do so, it won’t be able to estimate distance to objects by their size, because it doesn’t know how big they are in 3d space.

Features that are only visible to one camera also cause problems. You can see this if you try to judge the distance to something that’s only visible to one eye, or to a curved, reflective surface that reflects differently into each of your eyes.

Finally, although you want overlap between your pictures, you need disparity in order to detect 3d structure. Stereo vision loses depth perception with distance – you’ll notice with your own eyes that it becomes increasingly difficult to gauge the distance to objects accurately as they move farther away. Of course, your brain can use a whole lot of extra information to make its distance estimates more robust, but your computer can’t – at least not yet.

That means that good stereo image pairs have rich textures with lots of detectable features that are visible in both pictures, and that each physical feature looks roughly the same in both images. Imaged features should also not be too far away from the camera rig, so that they still have detectable disparity. This may sound a bit abstract, but I’ve got some examples coming up.
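The distance falloff can be made concrete. For an ideal rectified pair, depth is Z = f·B/d, where f is the focal length in pixels, B the baseline and d the disparity in pixels; because d shrinks as 1/Z, a fixed one-pixel matching error costs ever more depth accuracy. A quick sketch with illustrative numbers (not from any particular rig):

```python
# Depth from disparity for an ideal rectified stereo pair:
#   Z = f * B / d
# f: focal length in pixels, B: baseline in meters, d: disparity in pixels.
# The focal length and baseline below are illustrative, not measured values.

def depth_from_disparity(d_pixels, focal_px=700.0, baseline_m=0.06):
    """Return depth in meters for a given disparity in pixels."""
    return focal_px * baseline_m / d_pixels

# Depth resolution: the depth change caused by a one-pixel disparity step.
for d in (42, 21, 7):
    z_near = depth_from_disparity(d)
    z_far = depth_from_disparity(d - 1)
    print(f"d={d:2d}px  Z={z_near:5.2f}m  1px step -> {z_far - z_near:.3f}m")
```

At 1 m a one-pixel error moves the point by a couple of centimeters; at 6 m, by a full meter – which is why distant scenes produce noisy clouds.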

The other important factor in producing good point clouds from stereo images is making sure that the algorithm is well adapted to your camera setup. Block matching algorithms have a lot of different settings. We’ll be working with a semi-global block matching algorithm, which has many settings that affect its performance on different image pairs and camera setups. The goal is to find settings that work well for your stereo rig over a wide variety of images.

The BMTuner class

I’ve implemented a graphical way of tuning block matchers in the BMTuner class in StereoVision’s UI utilities. It takes an instance of a BlockMatcher, inspects which parameters can be set, and creates a trackbar for each one so that the user can adjust them. The callback functions for the trackbars are generated dynamically using decorators. The BMTuner also requires a rectified image pair, which is passed to the BlockMatcher to compute a new disparity map every time the user changes a parameter.

Of course, you can work with this in your own programs, but I’ve also written a program that lets you adjust the parameters for your block matchers using this class and save the last options to file so that you can reuse them for producing point clouds.
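The dynamic-callback idea can be sketched without a GUI. The class and attribute names below are illustrative stand-ins, not the actual StereoVision code: each tunable parameter gets a generated setter, just as BMTuner wires each trackbar to a generated callback:

```python
# A GUI-free sketch of the BMTuner idea: discover a matcher's tunable
# parameters and generate one setter callback per parameter, the way a
# trackbar callback would be wired up. All names here are illustrative.

class ToyBlockMatcher:
    # name -> maximum value, mimicking trackbar ranges
    parameter_maxima = {"block_size": 21, "num_disparities": 160}

    def __init__(self):
        self.block_size = 5
        self.num_disparities = 16

def make_callbacks(matcher):
    """Build one clamped setter per tunable parameter."""
    callbacks = {}
    for name, maximum in matcher.parameter_maxima.items():
        # Bind name/maximum via default arguments so each closure
        # keeps its own values.
        def setter(value, name=name, maximum=maximum):
            setattr(matcher, name, min(value, maximum))
        callbacks[name] = setter
    return callbacks

bm = ToyBlockMatcher()
callbacks = make_callbacks(bm)
callbacks["block_size"](11)        # like dragging a trackbar to 11
callbacks["num_disparities"](999)  # clamped to the declared maximum
print(bm.block_size, bm.num_disparities)  # 11 160
```

The real class passes each new parameter set to the BlockMatcher and redraws the disparity map; this sketch only shows the introspection-and-callback wiring.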

Tuning your algorithm

Tuning your block matcher is pretty easy:

me@localhost:~> tune_blockmatcher --help
usage: tune_blockmatcher [-h] [--use_stereobm] [--bm_settings BM_SETTINGS]
                         calibration_folder image_folder

Read images taken from a calibrated stereo pair, compute disparity maps from
them and show them interactively to the user, allowing the user to tune the
stereo block matcher settings in the GUI.

positional arguments:
  calibration_folder    Directory where calibration files for the stereo pair
                        are stored.
  image_folder          Directory where input images are stored.

optional arguments:
  -h, --help            show this help message and exit
  --use_stereobm        Use StereoBM rather than StereoSGBM block matcher.
  --bm_settings BM_SETTINGS
                        File to save last block matcher settings to.

Note that StereoSGBM is the default algorithm, as I have yet to be impressed with the results from OpenCV’s StereoBM. Also remember to use the --bm_settings flag if you want to save the last used settings to a file.

I use the program any time I’ve rebuilt my camera and thus have to recalibrate it. First, I take a few stereo image pairs of scenes that are fairly easy to reconstruct, so that I can judge the resulting disparity maps well. I adjust the block matcher settings in the GUI until I’m satisfied with the disparity map, then hold down a key with the focus on the disparity map window until the program jumps to the next picture. After all the images have been analyzed, the most common settings are reported. Then I run the program again, setting the BlockMatcher to the most common settings from the previous run, and save the configuration to disk so I can reuse it later.

Producing 3d point clouds

After you’ve tuned your block matcher, you can take the settings and use them to produce better point clouds. This is done with images_to_pointcloud. Run it like this:

me@localhost:~> images_to_pointcloud --help
usage: images_to_pointcloud [-h] [--use_stereobm] [--bm_settings BM_SETTINGS]
                            calibration left right output

Read images taken with stereo pair and use them to produce 3D point clouds
that can be viewed with MeshLab.

positional arguments:
  calibration           Path to calibration folder.
  left                  Path to left image
  right                 Path to right image
  output                Path to output file.

optional arguments:
  -h, --help            show this help message and exit
  --use_stereobm        Use StereoBM rather than StereoSGBM block matcher.
  --bm_settings BM_SETTINGS
                        Path to block matcher's settings.

The resulting point cloud can be viewed with MeshLab.
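MeshLab reads PLY files. To demystify the output format, here is a minimal hand-rolled ASCII PLY writer for a colored cloud – a sketch of the file format, not the StereoVision package’s own writer:

```python
import numpy as np

def write_ply(path, points, colors):
    """Write an ASCII PLY file of colored points that MeshLab can open.

    points: (N, 3) float array; colors: (N, 3) uint8 array.
    A hand-rolled sketch of the format, for illustration only.
    """
    header = "\n".join([
        "ply",
        "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x",
        "property float y",
        "property float z",
        "property uchar red",
        "property uchar green",
        "property uchar blue",
        "end_header",
    ])
    with open(path, "w") as f:
        f.write(header + "\n")
        for (x, y, z), (r, g, b) in zip(points, colors):
            f.write(f"{x} {y} {z} {r} {g} {b}\n")

# Three points along a diagonal, colored red/green/blue.
pts = np.array([[0.0, 0.0, 1.0], [0.1, 0.1, 1.1], [0.2, 0.2, 1.2]])
cols = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255]], dtype=np.uint8)
write_ply("cloud.ply", pts, cols)
```

Opening the resulting file in MeshLab shows three colored vertices; a real cloud just has one line per reprojected pixel.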

A discussion on results

I used this program to tune my block matcher on five test images and then used the resulting settings to produce point clouds for the training images and for a set of independent images. I found the results quite good. I’ve explained them in a video, since it’s easier to show the results visually:


My name’s Daniel Lee. I’m an enthusiast for open source and sharing. I grew up in the United States and did my doctorate in Germany. I’ve founded a company for planning solar power. I’ve worked on analog space suit interfaces, drones and a bunch of other things in my free time. I’m also involved in standards work for meteorological data. I worked for a while at the German Weather Service on improving forecasts for weather and renewable power production. I later led the data ingest team there before I started my current job, engineering software and data formats at EUMETSAT.

45 comments on “Optimizing point cloud production from stereo photos by tuning the block matcher”
  1. Guanglang Xu says:

    Do you think I can use this to retrieve the 3d shape of pretty tiny (µm-scale) particles imaged by microscopy?


    • erget says:

      In principle, you can use this for anything that you can see, provided you have a calibrated stereo rig. My only suggestion would be to try it out – if you have a stereo microscope set up, it should work. There’s nothing that fundamentally separates small-scale from large-scale objects unless you get down to particles smaller than the wavelength of the radiation you’re using to image them… So unless you’re talking about an electron microscope, I’d at least give it a try if setting up a stereo rig is plausible.


  2. Sabuj says:

    Hi Daniel,

    I am really amazed by your work, especially after seeing the video you posted on YouTube.

    I might need a favour from you. Will you please email me at aadi6600@gmail.com?

    I would really appreciate this.



  3. alpha says:


    I read your code in calibration.py and found that you wrote:

        # This is replaced because my results were always bad. Estimates are
        # taken from the OpenCV samples.
        width, height = self.image_size
        focal_length = 0.8 * width
        calib.disp_to_depth_mat = np.float32([[1, 0, 0, -0.5 * width],
                                              [0, -1, 0, 0.5 * height],
                                              [0, 0, 0, -focal_length],
                                              [0, 0, 1, 0]])

    My question: does cv2.stereoCalibrate in OpenCV not work well, or is there another reason?


    • erget says:

      Hi Alpha,
      That function returned unsatisfactory results for me, which is why I chose to use this idealized disparity-to-depth matrix rather than the one OpenCV gave me. With this matrix I was able to produce good results, and if you use my package it’s the one you’ll work with automatically, too.
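To see what that idealized matrix does, you can apply it by hand the way cv2.reprojectImageTo3D does: form [X Y Z W]ᵀ = Q·[x y d 1]ᵀ and divide by W. A pure-NumPy sketch (the image size is an assumption; the 0.8 focal factor follows the quoted code):

```python
import numpy as np

# The idealized disparity-to-depth matrix from calibration.py, applied by
# hand the way cv2.reprojectImageTo3D would: [X Y Z W]^T = Q [x y d 1]^T,
# then divide by W. The 640x480 image size is illustrative.
width, height = 640, 480
focal_length = 0.8 * width
Q = np.float32([[1, 0, 0, -0.5 * width],
                [0, -1, 0, 0.5 * height],
                [0, 0, 0, -focal_length],
                [0, 0, 1, 0]])

def reproject_pixel(x, y, d):
    """Map pixel (x, y) with disparity d to a 3d point."""
    X, Y, Z, W = Q @ np.float32([x, y, d, 1])
    return np.array([X, Y, Z]) / W

# A pixel at the image center with disparity 16:
print(reproject_pixel(320, 240, 16.0))  # [0, 0, -32]: depth = -f/d

# Halving the disparity doubles the (absolute) depth:
print(reproject_pixel(320, 240, 8.0))
```

Note how depth is simply -f/d here: the matrix encodes an ideal rig with unit baseline, which is why it behaves consistently even when a calibrated Q does not.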


  4. alpha says:

    I want to use stereo images to measure the distance to objects on the road. Any suggestions?


    • erget says:

      Try the software and see how far you get! However, for long distances stereo vision is not necessarily the best method: the accuracy decreases rapidly with the distance between the camera and the measured object.


  5. alpha says:


    As far as I understand, I can use stereoCalibrate and stereoRectify to get the matrix Q, then use reprojectImageTo3D to generate a point cloud file, and then carry out measurements on the point cloud. Is this program flow correct for taking measurements? If it is: you mentioned that for the matrix Q you use the “idealized disparity” – does that affect the accuracy of the measurements? Any suggestions?


    • erget says:

      Hi Alpha,
      You can of course do all that manually, but the reason I wrote the package I present in this post is to abstract those details away from the user. If you use the StereoVision package, all you have to do is take a bunch of pictures with a calibration board. Also, producing point clouds in which you can measure distances is a one-step process that you don’t have to implement on your own.

      Why don’t you read the post in its entirety? You’ll find it already answers your questions, especially if you read the other posts in the series (particularly https://erget.wordpress.com/2014/02/28/calibrating-a-stereo-pair-with-python/).

      As far as accuracy is concerned, everything affects accuracy – the distance, your cameras, the stereo rig, your calibration equipment, the calibration pictures, the lighting. There’s no way of quantifying a single factor in isolation. So just try it out 🙂 And read the posts, it helps to be informed.

      Best, Daniel


  6. alpha says:

    Daniel, actually I can produce the Q matrix from a sequence of stereo chessboard images and generate a point cloud file using the parameters from stereoRectify and the reprojectImageTo3D function in cv2. However, when I open it in MeshLab, the 3d point cloud display is bad and I can’t take measurements from it. I doubt whether it’s possible to create a point cloud file from OpenCV that’s good enough to take measurements in MeshLab. Please share a good point cloud file so I can see one.


    • erget says:

      I don’t currently have my stereo rig up and running, so I can’t produce any point clouds. If you look through my posts on this subject – they’re all linked together as a series – you’ll see pictures as well as a video of several point clouds, along with explanations of what makes them good or bad.
      Furthermore, if you’re using your own software rather than mine, I can’t really troubleshoot it. I’m not sure whether you have a bug somewhere, and although I certainly don’t have perfect software, I at least know mine well because I wrote it and have access to the sources.


  7. alpha says:

    Daniel, I used the command “pip install stereovision” to install. It showed success. But how do I start the program? Thanks


  8. alpha says:

    Daniel, more information: I installed it on Windows 7.


    • erget says:

      I do get back to people who write, and I’m happy to help you, but realize that I do this completely in my free time, so you can’t necessarily expect an answer immediately.
      If pip installed the package, the binaries should be callable from the command line. Have you tried that?
      If typing the commands or copying and pasting them from the blog doesn’t work – that is, if Windows says it doesn’t know the command – then pip installed them to a directory that isn’t in your search path.
      I am absolutely not familiar with Windows, and this is actually a matter of configuring your search path, which has nothing to do with my software. It’s a system thing. Not knowing where pip installs binaries on Windows, nor how to add directories to your search path there, I would try the following in your place:
      1. Find out where pip installed the files. This should be near the folder where your Python packages are installed.
      2. Add that folder to your search path.

      I’m not sure how to do either of these on Windows, but they’re probably pretty simple. If you Google for it you should find several answers very quickly. Also, once you’ve done it you’ll never have to do it again, and if you plan on writing Python you’ll definitely want to have this taken care of so you can install packages easily in the future.


  9. alpha says:

    Daniel, thanks – could you tell me the name of the executable program that starts your stereovision package, so that I can find it?


  10. alpha says:

    Daniel, also the command to start it


    • erget says:

      PLEASE read the blog before you write me any more questions. I understand that this is new for you, but that only makes it more important to read the instructions. That’s why I wrote them – it doesn’t make sense for me to repeat instructions in the comments that are already all over this blog.
      On the command line, the command to start a program is the name of the program. You want to calibrate a stereo rig. Look at the top of this post and you’ll see a link that says “Calibrate the cameras.” It contains instructions on doing just that. Read it and it will answer all of your questions – except for questions specific to things like working on the command line or installing things on Windows, which are outside the scope of this blog.
      I would suggest you read the entire series of posts, starting with how to build a stereo rig. They’re all linked to each other and it shouldn’t take you more than an hour. Doing that will answer your questions, and it will be faster than asking me about it. If you still have questions afterwards, I’m happy to answer them. So help yourself by reading the material – it’ll be faster than asking me questions, and I’ll be in a much better mood when I write back to you next time.


  11. alpha says:

    Daniel, as IT professionals, we usually have a user guide for the programs we develop, right? Do you have a user guide for your StereoVision program?


    • erget says:

      Hi alpha,
      Yes. The user guide is this blog. If you’re interested in program documentation for developers, see the link to the program documentation that you can reach via GitHub.


      • teja says:

        Your work is very very helpful !

        Can you please solve this: when I run “tune_blockmatcher.py” I get this error:

        Traceback (most recent call last):
          File "tune_blockmatcher", line 34, in <module>
            from stereovision.blockmatchers import StereoBM, StereoSGBM
          File "/usr/local/lib/python2.7/dist-packages/stereovision/blockmatchers.py", line 111, in <module>
            class StereoBM(BlockMatcher):
          File "/usr/local/lib/python2.7/dist-packages/stereovision/blockmatchers.py", line 117, in StereoBM
            "stereo_bm_preset": cv2.STEREO_BM_NARROW_PRESET}
        AttributeError: 'module' object has no attribute 'STEREO_BM_NARROW_PRESET'


  12. teja says:

    This error occurs most frequently for me:

    Traceback (most recent call last):
      File "try.py", line 7, in <module>
        stereo = cv2.StereoSGBM(cv2.STEREO_BM_BASIC_PRESET, numdisparities=1000, SADWindowSize=11)
    AttributeError: 'module' object has no attribute 'StereoSGBM'


    • erget says:

      What version of OpenCV are you using? I suspect that the OpenCV devs changed some of their public-facing code so that StereoSGBM is no longer available under that name. I would have to update and check myself, but since I’ve run up against this issue in the past, I’m guessing that’s the cause. I’ll send the version info of what I have installed on my machine when I get home and have access to it.


    • erget says:

      Hi there,

      I can’t say a whole lot about the error you’re showing – Python is saying it can’t find the object StereoSGBM in the module cv2.

      I definitely do have that function in my version of OpenCV:
      >>> import cv2
      >>> cv2.StereoSGBM
      <built-in function StereoSGBM>
      >>> cv2.__version__

      If you’re using another version, it might not be there.

      I think it’s great that you’re experimenting on your own, but you might have more luck using the code I provide, which takes care of all the details for you.



  13. Farrukh Khan says:

    Hi Daniel. I talked to you on YouTube and now I’m here. Sir, I just wanted your help – you did this great project in Python. However, I need help with this in C++. Can you help me?


    • erget says:

      My suggestion would be to read the documentation on OpenCV and design your code accordingly. Without a more specific question, there’s no way I can help you out.


  14. M. Fisher says:

    Have you ever published on this? (I.e., are there any academic journal publications that can be cited?)


  15. Elon Terrell says:

    Hi Daniel,

    I followed your workflow and am attempting to create point clouds of a stereo image of my books and computer. However, when I attempt to view my point clouds, all I get is this:

    Any thoughts on what I might be doing wrong?


    • erget says:

      Hi Elon, it looks like something went wrong with the block tuner. Try playing around with that until you get results that look like brightness maps of the depth from your camera.

      I’m surprised at your poor results. I have an evaluation that I’ll be releasing at some point in the near future that shows using the system for large-scale surfaces and it looks very promising. I would conjecture that your bad results are due to problems either in calibration or in the block matcher. Your point cloud looks like it was generated from images with a lot of pretty homogeneous surfaces, which is difficult. The walls behind your screens will be hard to match, as well as the keyboard due to low contrast. To do a sanity check, you should try a more structured surface, like a well-lit blanket or anything else with geometric patterns on it. There the algorithms tend to perform well and you should be able to narrow down where errors are coming from.


      • Elon Terrell says:

        Hi Daniel,

        Thank you much for your reply. I attempted to follow your advice, recalibrating the cameras and attempting to retune the block matcher using texture-rich images. I am still getting the same poor result as before.

        This is the left image:

        This is the right image:

        This is the best I was able to tune the block matcher:

        And this is the resulting point cloud:

        Any help or guidance that you can provide would be much appreciated.



      • erget says:

        Hi Elon, sorry, but your guess is as good as mine.


  16. Elon Terrell says:

    Understood. Would you be willing to share the chessboard calibration images and some of the tuning images that you utilized in your YouTube video? Having a working example would be extremely helpful for me to troubleshoot what I might be doing wrong.

    (I hope this is not too big of an ask.)


    • erget says:

      I would be, but unfortunately I don’t have those images any more. I did receive a number of images in an evaluation carried out by a third party that I’ve been given permission to publish, but as yet I just haven’t had the time to do so. I’m really swamped until June and so that would be the earliest date on which I could publish those, which do provide pretty good results.


  17. kiddiousrodi says:

    Your program and documentation are very helpful. Thank you for putting this together.

    The block tuner display window has a bug that makes the x-axis non-resizable. The window expands to the left and right edges of the screen, but the y-axis is resizable.

    Thanks again!


  18. Waqar Rashid says:

    I couldn’t make this library work with OpenCV 3, so I had to revert to OpenCV 2.4.9, and now it seems to be working. The only problem I face now is that while using tune_blockmatcher the window is very tall. The slidebars are in one column, and I can only see the slidebars and a small portion of the picture. I have never worked with GUIs, so I don’t know much about how to move things around to make space for the picture. I hope you can give me some directions. And thanks for your work – it made my task very easy. I am developing a collision avoidance system which will use stereo vision to measure the distance to close objects.


    • erget says:

      I’m glad it’s working well for you! I’d be very interested in hearing more about your project.
      I’ve never had the window for “tune_blockmatcher” be too large, but it’s instantiated through OpenCV, so maybe it’s interacting with your window manager in a way I haven’t seen before. Either that, or you’re using a very small screen.

      If your screen is very small, you might consider taking the calibration pictures from your stereo rig and tuning your block matcher using them on another machine. The settings are machine independent so if you find the optimal settings you can always deploy those settings on your target system.

      Otherwise you should be able to fiddle with the settings by implementing your own version of the “BMTuner” class, which is in the “ui_utils” module. The call that instantiates the window could be changed to allow the user to resize it by passing the “WINDOW_NORMAL” flag.

      If you’d be willing to test this on your machine and get back to me on the results, I can create a branch on GitHub that you can clone and play around with. Currently I don’t have a stereo rig so I don’t touch the code if I don’t have to, since I like to only push out tested code. But I could add the appropriate flag and if you do the testing and it works as expected I’d roll that into a new release and push that to PyPI.


  19. Waqar Rashid says:

    Hello erget,
    For a few days I have been facing another problem which seems very strange. The problem is:

    While tuning my cameras using the ‘tune_blockmatcher’ script in the bin directory, it shows the windows and disparity properly, but the map is updated only on the first change of each trackbar. For example, when I run the script for StereoBM and change the window size parameter, the disparity updates. When I change it again, nothing happens. After that, if I change Stereo_BM_preset to an applicable value, it updates, and then it won’t update further when changing that parameter. In the same way, I can only update it once using the search range.

    One more observation: it stops working only after an update has been done using that parameter. For example, if the value of a parameter is such that the algorithm doesn’t accept it, nothing happens, but it will still update once upon getting the first acceptable value.

    I am using Python 2.7.12 and OpenCV. This problem occurs on both Raspbian and Ubuntu 16.04. What do you think is the reason for this problem?


    • erget says:

      Not sure, and unfortunately I don’t have a live install on any of my machines at the moment. It sounds like a GUI problem in OpenCV, though. Perhaps test with another OpenCV GUI example, just to see whether the window updates properly there?


  20. aatmadeep says:

    Hoping that this thread is still active – thank you, Daniel, for your amazing work.
    Can you please help me with the following errors?

    - System: Ubuntu 17.10 (Artful) with OpenCV 3.3.1

    - Note: I’ve copied the bin files from the updated StereoVision (Douglas Gibbons) into the pip-installed stereovision folder, as I was getting the STEREO_BM_NARROW_PRESET error and tune_blockmatcher and images_to_pointcloud weren’t running.

    Q1. tune_blockmatcher does not show disparities, only trackbars.
    Doubt: it rectifies the pair of images I’ve given it, right? If not, how do I generate a rectified pair of images?

    Q2. tune_blockmatcher does not save settings (I don’t even know whether it’s generating them or not) when run with the --bm_settings argument. What could be the problem?

    Q3. I need to implement this stereo setup on a quadcopter for a small project demonstration. Can you suggest something at the hardware level for robustness? It needs to generate a 3d map of its surroundings.

    Once again, Thanks a lot, Daniel.
    Atmadeep Arya.


    • erget says:

      Hi Atmadeep, this sounds like a really interesting project! I have my doubts that stereo vision is the right way to go for SLAM on a quadcopter, because the hardware required for it is quite heavy, but I’d be interested in hearing what kind of results you get. First of all, try writing your own Python script which uses StereoVision to produce and save disparity pictures – this is something you’ll need to be able to do in any case, and it will tell you whether you’re having errors generating the images or whether the error is in the GUI. If you can share more details, I can say more. Good luck! -Daniel


  21. Roman Skripko says:

    Hello Daniel!
    Thank you for that work.
    My name is Roman Skripko. I’m a mechatronics student working on a project with a special objective – I need to get a point cloud with a Z-axis error of ~0.5 mm (and with a minimum percentage of noise – yeah, I’m a dreamer), but the scene is controlled by me: it’s a small box (0.5×0.5 m) which I can light up and paint as I want. The box contains the measurable objects, and the depth map is built from a top view. The distance from the cameras to the scene can be controlled too. So my questions are:
    1) Should I use this method with a stereo pair and OpenCV algorithms, or is it too hard or even impossible to get that accuracy?
    2) If it is, what should I look for? I know about a method with a thin laser line and one camera: the scene is moved by a motor, I take a hundred pictures while it moves through the laser, and then calculate the height of the points under the laser, so in the end I get my point cloud. What do you think about it?
    3) If you’re not sure, can you please advise me where I should go with my questions?
    Best regards,
    Skripko Roman.


    • erget says:

      Hi Roman, there’s no physical limit on the accuracy which can be achieved with stereo vision – it depends, though, on the distance of the viewed objects from the camera, the resolution of the cameras, and the accuracy of the rig calibration. If you are using high-quality cameras with good resolution, there’s no reason why this shouldn’t be possible. That being said, you do have a high accuracy requirement, so it might be expensive to achieve what you want.

      Using a laser line and one camera will definitely not give you the results you want unless you have 1) a camera whose horizontal pixel resolution is much smaller than your target accuracy (0.5 mm) and 2) a laser whose footprint on the object you are viewing is also much smaller than that. Achieving both of these is difficult. The technique you suggest involves having a laser at one point and moving the object through its projection field – the accuracy you need would also apply to whatever tools you’re using to move the object through the laser field, etc. – I think this will be very difficult to achieve.

      That being said, it’s still an ambitious accuracy goal with any cheap equipment, no matter what technique you’re using. Best of luck to you!
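As a back-of-envelope check on the 0.5 mm target: differentiating Z = f·B/d gives |δZ| ≈ Z²·δd/(f·B), so depth error grows quadratically with range. The numbers below are purely illustrative, not hardware recommendations:

```python
# Back-of-envelope depth accuracy for stereo: differentiating Z = f*B/d
# gives |dZ| ~= Z**2 * dd / (f * B). All numbers below are illustrative.

def depth_error(z_m, focal_px, baseline_m, disp_err_px=0.25):
    """Approximate depth error at range z_m for a given rig."""
    return z_m ** 2 * disp_err_px / (focal_px * baseline_m)

# A 0.5 m working distance, a 2000 px focal length, a 10 cm baseline, and
# quarter-pixel matching accuracy:
err = depth_error(0.5, 2000.0, 0.10)
print(f"{err * 1000:.3f} mm")  # ~0.3 mm: the 0.5 mm target is plausible
```

With a controlled, well-lit box and a short working distance, the formula suggests sub-millimeter accuracy is at least in reach with good cameras and careful calibration.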


  22. Praneeth Varma says:

    I have seen a couple of other comments here; I will try it with OpenCV 2.4.9 and share details.

