- An on-line evaluation of current algorithms
- Many stereo datasets with ground-truth disparities
- Our stereo correspondence software
- An on-line submission script that allows you to evaluate your stereo algorithm in our framework
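Evaluations like the one offered here score an algorithm's disparity map against ground truth using a bad-pixel percentage, as described in the taxonomy paper cited below. The following is a minimal NumPy sketch of that metric, not the site's actual evaluation code; the function name, the default 1.0-pixel threshold, and the `valid_mask` parameter are illustrative assumptions:

```python
import numpy as np

def bad_pixel_rate(disp, gt, valid_mask=None, threshold=1.0):
    """Fraction of pixels whose disparity error exceeds `threshold`.

    Illustrative sketch of the "percentage of bad pixels" measure;
    `valid_mask` marks pixels where ground truth is known (e.g.
    excluding occluded or unlabeled regions).
    """
    disp = np.asarray(disp, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    if valid_mask is None:
        valid_mask = np.ones(gt.shape, dtype=bool)
    # Absolute disparity error, restricted to pixels with ground truth.
    err = np.abs(disp - gt)[valid_mask]
    return float(np.mean(err > threshold))
```

For example, comparing an estimated map against ground truth with the default 1.0-pixel threshold returns the fraction of evaluated pixels whose error exceeds one disparity level.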
We grant permission to use and publish all images and numerical results on this website. If you report performance results, we request that you cite our paper. Instructions on how to cite our datasets are listed on the datasets page. If you want to cite this website, please use the URL "vision.middlebury.edu/stereo/".
D. Scharstein and R. Szeliski.
A taxonomy and evaluation of dense two-frame stereo correspondence algorithms.
International Journal of Computer Vision, 47(1/2/3):7-42, April-June 2002.
Also Microsoft Research Technical Report MSR-TR-2001-81, November 2001.
D. Scharstein, R. Szeliski, and R. Zabih.
A taxonomy and evaluation of dense two-frame stereo correspondence algorithms.
In Workshop on Stereo and Multi-Baseline Vision (in conjunction with IEEE CVPR 2001), pages 131-140, Kauai, Hawaii, December 2001.
Other online stereo benchmarks:
- Robust Vision Challenge
- KITTI Stereo 2012 evaluation
- KITTI Stereo 2015 evaluation
- ETH3D 2-view stereo benchmark
- Heidelberg HD1K Stereo benchmark
Support for this work was provided in part by NSF CAREER grant 9984485 and NSF grant IIS-0413169. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.