Disparity and Depth Image for Stereo Vision

Stereo processing using stereo_image_proc

Stereo processing: The stereo_image_proc package performs the duties of image_proc for a pair of cameras co-calibrated for stereo vision, and adds stereo processing to produce disparity images and point clouds.

A disparity image can be computed using different algorithms, such as the block-matching algorithm.
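To make the relationship between disparity and depth concrete, here is a minimal sketch in Python. It assumes a rectified stereo pair; the focal length, baseline, and disparity values are illustrative only, not taken from any particular camera.

  # Depth from disparity for a rectified stereo pair (illustrative values, assumed).
  focal_length_px = 700.0   # f: focal length in pixels
  baseline_m = 0.12         # B: distance between the two camera centres in metres
  disparity_px = 35.0       # d: x_left - x_right for a matched pixel

  # Z = f * B / d: larger disparity means the point is closer to the cameras.
  depth_m = focal_length_px * baseline_m / disparity_px
  print(f"Depth: {depth_m:.2f} m")  # prints 2.40 m for these illustrative numbers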

Step 1: Obtain left and right camera calibration YAML files

Do this step only if you do not already have YAML files from the camera calibration process; the commands below convert existing INI calibration files to YAML.

ros2 run camera_calibration_parsers convert left.ini left.yml

ros2 run camera_calibration_parsers convert right.ini right.yml
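What you do with these YAML files depends on your camera driver: most drivers built on camera_info_manager accept a camera_info_url parameter pointing at the calibration file. The launch snippet below is a minimal sketch assuming a usb_cam-style driver; the package, executable, parameter names, and file paths are assumptions to verify against your own driver's documentation.

  # Minimal sketch: point left/right camera drivers at the converted YAML files.
  # Package, executable, and parameter names are assumptions (usb_cam-style driver).
  from launch import LaunchDescription
  from launch_ros.actions import Node

  def generate_launch_description():
      return LaunchDescription([
          Node(package='usb_cam', executable='usb_cam_node_exe', namespace='left',
               parameters=[{'camera_info_url': 'file:///path/to/left.yml'}]),
          Node(package='usb_cam', executable='usb_cam_node_exe', namespace='right',
               parameters=[{'camera_info_url': 'file:///path/to/right.yml'}]),
      ])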

Step 2: Create launch file to run stereo_image_proc

Luckily, image_pipeline provides the stereo_image_proc package, which contains a launch file, “stereo_image_proc.launch.py”, that brings up the following:

  • debayer and rectify nodes that perform image_proc-style processing for the left and right cameras

  • a disparity node that performs block matching to create the disparity image (published on /disparity)

  • a point-cloud node that subscribes to /disparity and publishes a PointCloud2 message, which can be used to build 3D maps at a later stage

stereo_image_proc.launch.py: launches stereo processing nodes + disparity node + pointcloud node
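You can run this launch file directly with ros2 launch, or include it from your own launch file so the camera drivers and stereo processing start together. The sketch below shows the latter; the launch-argument names (left_namespace, right_namespace, approximate_sync) are assumptions based on common image_pipeline versions, so check the arguments your installed version declares (ros2 launch stereo_image_proc stereo_image_proc.launch.py --show-args).

  # Minimal sketch: include stereo_image_proc.launch.py from your own launch file.
  # The launch-argument names here are assumptions; verify them with --show-args.
  import os
  from ament_index_python.packages import get_package_share_directory
  from launch import LaunchDescription
  from launch.actions import IncludeLaunchDescription
  from launch.launch_description_sources import PythonLaunchDescriptionSource

  def generate_launch_description():
      stereo_launch = os.path.join(
          get_package_share_directory('stereo_image_proc'),
          'launch', 'stereo_image_proc.launch.py')
      return LaunchDescription([
          IncludeLaunchDescription(
              PythonLaunchDescriptionSource(stereo_launch),
              launch_arguments={
                  'left_namespace': 'left',    # expects /left/image_raw, /left/camera_info
                  'right_namespace': 'right',  # expects /right/image_raw, /right/camera_info
                  'approximate_sync': 'True',  # relax exact-timestamp matching between cameras
              }.items(),
          ),
      ])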

Step 3: Depth Processing using depth_image_proc

Depth processing: depth_image_proc provides composable nodes (nodelets in ROS 1) for processing depth images (as produced by the Kinect, time-of-flight cameras, etc.), such as producing point clouds.

depth_image_proc provides basic processing for depth images, much as image_proc does for traditional 2D images.

Some Examples:
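As one hedged example, the launch sketch below loads a single depth_image_proc component into a composable-node container to turn a rectified depth image into a PointCloud2. The plugin name (depth_image_proc::PointCloudXyzNode) and the topic remappings are assumptions based on the standard image_pipeline layout; adjust them to match your camera's topics.

  # Minimal sketch: depth image + camera info -> PointCloud2 via depth_image_proc.
  # Plugin and topic names are assumptions; adjust them to your setup.
  from launch import LaunchDescription
  from launch_ros.actions import ComposableNodeContainer
  from launch_ros.descriptions import ComposableNode

  def generate_launch_description():
      return LaunchDescription([
          ComposableNodeContainer(
              name='depth_proc_container',
              namespace='',
              package='rclcpp_components',
              executable='component_container',
              composable_node_descriptions=[
                  ComposableNode(
                      package='depth_image_proc',
                      plugin='depth_image_proc::PointCloudXyzNode',
                      name='point_cloud_xyz',
                      remappings=[
                          ('image_rect', '/camera/depth/image_rect_raw'),
                          ('camera_info', '/camera/depth/camera_info'),
                          ('points', '/camera/depth/points'),
                      ],
                  ),
              ],
          ),
      ])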
