I was recently pleased and surprised to be contacted by the manufacturer of the StereoPi, a shop that I’ve had contact with a few times in the past and have always found competent and ambitious. They were interested in sending me an advance copy of their StereoPi Deluxe Kit – an open source, open hardware stereo kit with everything you need to get started on some awesome experiments, prototypes, and even real applications. The ask? Write a review, regardless of its content.
Why a stereo rig?
Short and sweet: If you like the real world, you probably like it in 3D, so if you want to image it to produce a model or drive a robot, you’ll end up needing to take sensory data and turn it into 3D images.
Stereo rigs are not the only way to do this, but stereo 3D reconstruction is computationally cheaper than most other methods. That’s why most animals have 2 eyes next to each other – so that they can perceive the world in 3 dimensions without having to spend a lot of time thinking about it.
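That cheapness comes down to similar triangles: a feature’s horizontal shift between the two views – its disparity – maps directly to depth via Z = f·B/d, where f is the focal length in pixels and B is the baseline between the cameras. A minimal sketch, with made-up focal length and baseline values:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point from its stereo disparity, via similar triangles:
    Z = f * B / d. Closer objects shift more between the two views."""
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 700 px focal length, 60 mm baseline.
print(depth_from_disparity(700, 0.06, 14))  # 3.0 metres away
print(depth_from_disparity(700, 0.06, 42))  # 1.0 metre away
```

Note the inverse relationship: depth resolution is best up close and degrades with distance, which is why the camera spacing matters for your intended range.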
Stereo 3D reconstruction is used for construction, security, wildlife monitoring, navigation, art, and much more.
I like this kit because it is:
- Easy to assemble, even for Neanderthals like me
- Based on Raspberry Pi, which is an awesome platform with which many people are familiar
- Fully open source – including the hardware!
- Very well documented, with lots of nice examples to get you off to a running start
What’s in the box?
I won’t restate what’s on the StereoPi website, but the highlights are all the hardware you need to get started – the Pi, the StereoPi board, wide and narrow angle cameras, mounts to fit everything together, and all the cables, screws, nuts, and spacers you need. The only thing I had to acquire in order to get their first example running was a WiFi dongle because I didn’t have an Ethernet cable to hand, but even that is only necessary if you want to try out their basic streaming example.
The different mount boards pictured above are nice because you can space your cameras as you like. You can see that I chose a narrow spacing with narrow-angle lenses. The reason for this is that I was doing my tests indoors. The wider mounts are better for projects focused on longer ranges, as the distance between your cameras determines the optimal depth-sensing range. Note also that you could use this same kit to build a 360° setup if that’s what you’re into. All told it took me just a few minutes to build the thing.
It’s a snap to put the thing together and the StereoPi wiki can tell you everything you need to know about setup and configuration. This is a quick summary.
1. Attach the StereoPi board to the Raspberry Pi.
2. Connect the cameras to the StereoPi.
3. Connect the power cable.
4. Pick the mount boards you want.
5. Screw the hardware onto the boards.
6. Attach the boards to each other with the spacers.
7. Download the Raspbian image and flash it onto the included SD card.
8. Mount the SD card and configure it for your WiFi network.
9. Fire it up.
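On stock Raspbian, the WiFi configuration can be done headlessly by dropping a `wpa_supplicant.conf` into the boot partition of the freshly flashed card. The StereoPi image may ship its own configuration mechanism, so treat this as a generic sketch with placeholder credentials:

```
# /boot/wpa_supplicant.conf -- placed on the SD card's boot partition
# before first boot (standard Raspbian headless setup; the StereoPi
# image's own docs take precedence if they differ).
country=GB
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="YourNetworkName"   # placeholder -- your WiFi SSID
    psk="YourPassphrase"     # placeholder -- your WiFi password
}
```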
Now, from a computer on the same WiFi network, you can access the StereoPi Livestream Playground by navigating to http://stereopi.local/ to verify that everything’s up to spec and streaming into your network!
Performance in a live demo
I also tried out a demo application provided by the StereoPi team that shows the system’s performance when computing disparity maps – the first step towards making a 3D model of a scene you image with your rig. With a factory-built, pre-calibrated 3D camera – often with an embedded processor that refines the scan as it comes in – this process is very smooth, whereas in my experience it can be a bit cumbersome with DIY kits. While the StereoPi did not perform as well straight out of the box as pre-calibrated models, it was the easiest and most accurate DIY kit I’ve dealt with thus far and would work fine for producing colour 3D maps of a scene, as I’ve demonstrated with my own DIY kit before. I think the StereoPi is easier than what I’ve built in the past and delivers higher quality.
After assembling your rig, the next step is to calibrate it. If you’re following the example I referenced above, this is done using a chessboard printout, handily provided in the repository. I just printed it out and affixed it to a cutting board for easy handling and to keep it from bending, which would kill the calibration.
The repository also contains handy scripts for calibration image capture, pre-processing, and the calibration itself. The easiest way to use them is to download the image linked from the repo and flash it onto your SD card.
Once you’re set up with your calibration board and your rig has the software ready, you can run the scripts in the order listed in the tutorial to capture calibration images…
This runs more or less automatically – after about 2.5 minutes of holding your calibration board in different positions you’ll have all the images recorded. Make sure that the chessboard is visible from both cameras for each picture, don’t have it in motion when the countdown for the next image completes, and consider using a lighter board than the one I did – olivewood is dense!
After that ordeal is done, you get to see the corner-point identification for each image, which is pretty cool. Image pairs that aren’t good enough are thankfully discarded automatically. Then you can tune the block matcher to get good depth-map results – for this, image a few scenes containing objects at varying distances from the rig, so that you’re tuning against a variety of conditions rather than one lucky shot.
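The tuning makes more sense once you see what a block matcher actually does: slide a small window from the left image along the same row of the right image, and keep the horizontal shift (disparity) with the lowest difference cost. The window size and disparity search range are exactly the knobs you’re turning. Here’s a toy sum-of-absolute-differences version in plain NumPy – the real scripts use OpenCV’s optimised matcher, this is just to illustrate what the parameters mean:

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, block=7):
    """Toy block matcher: for each left-image pixel, slide a block x block
    window along the same row of the right image and keep the shift
    (disparity) with the smallest sum of absolute differences."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1,
                         x - half:x + half + 1].astype(np.int32)
            costs = [
                np.abs(patch - right[y - half:y + half + 1,
                                     x - d - half:x - d + half + 1]
                       .astype(np.int32)).sum()
                for d in range(max_disp + 1)
            ]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic check: the right view is the left view shifted 5 px, so the
# matcher should report a disparity of about 5 everywhere.
rng = np.random.default_rng(0)
left = rng.integers(0, 256, size=(40, 60), dtype=np.uint8)
right = np.roll(left, -5, axis=1)
d = sad_disparity(left, right, max_disp=10, block=5)
```

A bigger block smooths out noise but blurs object boundaries; a bigger disparity range finds closer objects but costs more compute – that’s the trade-off you’re exploring when you tune.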
Having seen the results on the StereoPi website and knowing the folks involved, I knew good results were possible – and besides, I always think it’s more interesting to break things than to have them work properly. So I accepted the vanilla configuration provided by the software and tried it out on a few different scenes.
I’ve gotta say, I’m pretty impressed with the performance! Blue is close, red is far. I’m recognised very well compared to the walls behind me – and that’s a tough background, as normally featureless surfaces mean pretty poor performance for stereo rigs. You can see even more impressive performance on the image with the drone – my arm, the drone, and the chair are well-recognised, and the wall in the background can be seen receding into darker hues of blue – something that’s quite tricky for a stereo rig.
All in all this is a really nice rig. You can get up and running extremely quickly, and if you’re just starting to learn about hardware or 3D imaging I actually think it’s great that you have to assemble the kit yourself, so that you get a feel for what you’re working with. The documentation is really good, the results are high quality, and you can improve on those base results with minimal fiddling. The team behind StereoPi is also really responsive to questions.
For me, the biggest selling point is the fact that the whole thing is free and open source. There are lots of solutions out there, but most of them are proprietary. The fact that the hardware and all the software on this rig is open is a dream for educational projects, for cases where you need to truly understand what you’re doing, or when you’re making a prototype that’s really special and thus not well-matched to a one-size-fits-all industrial solution. Now that I’ve got this setup I know what kit I’ll be using for my next prototypes 😉