Middlebury Stereo Evaluation - Version 3
by Daniel Scharstein and Heiko Hirschmüller

Here you can download input files, ground-truth disparities, and the evaluation SDK for the new stereo benchmark, as well as upload your results. See also the description of new features.
Data
We provide a training set and a test set with 15 image pairs each, mainly taken from the 2014 datasets. The training set also contains ground-truth disparities in PFM format. The images are available in three resolutions, full (F), half (H), and quarter (Q). You should generally use the largest size your algorithm can handle, though sometimes a smaller resolution can yield better results. Note that while you can evaluate your results with half or quarter-size GT using our SDK, the "official" evaluation is always at full resolution, and we will upsample your results if necessary. Most of the datasets have realistic "imperfect" rectification with residual y-disparities, for which we also provide GT for the training set. Please see our GCPR 2014 paper for more information about the datasets.
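Since disparity is measured in pixels, resizing a disparity map also rescales its values: going from half to full resolution doubles both the image dimensions and the disparities. The following is a minimal C++ sketch of such an upsampling (upsampleDisparity is a hypothetical helper using nearest-neighbor sampling; the resampling used by the official evaluation may differ):

```cpp
// Illustrative sketch (not SDK code): upsample a lower-resolution disparity
// map by an integer factor. Both the pixel grid and the disparity values
// must be scaled, since disparity is measured in pixels. Nearest-neighbor
// sampling is assumed; the official evaluation's resampling may differ.
#include <vector>

std::vector<float> upsampleDisparity(const std::vector<float> &src,
                                     int w, int h, int factor)
{
    const int W = w * factor, H = h * factor;
    std::vector<float> dst((size_t)W * H);
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            dst[(size_t)y * W + x] = src[(size_t)(y / factor) * w + x / factor]
                                     * factor;  // scale the disparity value too
    return dst;
}
```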
| Files | # of training images | # of test images | Full resolution (up to 3000 x 2000, ndisp <= 800) | Half resolution (up to 1500 x 1000, ndisp <= 400) | Quarter resolution (up to 750 x 500, ndisp <= 200) |
|---|---|---|---|---|---|
| Input data (im0.png, im1.png, calib.txt) | 15 | 15 | MiddEval3-data-F.zip (341 MB) | MiddEval3-data-H.zip (105 MB) | MiddEval3-data-Q.zip (30 MB) |
| Ground truth for left view (disp0GT.pfm, mask0nocc.png) | 15 | — | MiddEval3-GT0-F.zip (194 MB) | MiddEval3-GT0-H.zip (51 MB) | MiddEval3-GT0-Q.zip (13 MB) |
| *Ground truth for right view (disp1GT.pfm, mask1nocc.png) | 15 | — | MiddEval3-GT1-F.zip (194 MB) | MiddEval3-GT1-H.zip (51 MB) | MiddEval3-GT1-Q.zip (13 MB) |
| *Ground truth y-disparities (disp0GTy.pfm, disp1GTy.pfm) | 15 | — | MiddEval3-GTy-F.zip (180 MB) | MiddEval3-GTy-H.zip (51 MB) | MiddEval3-GTy-Q.zip (15 MB) |
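The ground-truth disparities are single-channel PFM files: a short text header ("Pf", then width and height, then a scale factor whose sign encodes endianness), followed by raw 32-bit floats stored in bottom-to-top row order, with unknown pixels marked as infinity. A minimal C++ reader sketch (readPFM is illustrative, not part of the SDK, and assumes little-endian files and host):

```cpp
// Illustrative PFM reader sketch (readPFM is not part of the SDK). Handles
// single-channel "Pf" files with a negative scale factor, i.e. little-endian
// data on a little-endian host. In the Middlebury GT, unknown pixels are
// stored as infinity.
#include <cstdio>
#include <vector>

bool readPFM(const char *path, std::vector<float> &disp, int &width, int &height)
{
    FILE *f = fopen(path, "rb");
    if (!f) return false;

    char type[3] = {0};
    float scale = 0.0f;
    if (fscanf(f, "%2s %d %d %f", type, &width, &height, &scale) != 4 ||
        type[0] != 'P' || type[1] != 'f' || scale >= 0.0f) {
        fclose(f);  // color PFMs ("PF") and big-endian files are not handled
        return false;
    }
    fgetc(f);  // consume the single whitespace byte that ends the header

    // Rows are stored bottom-to-top; read them into the buffer so that
    // disp ends up in the usual top-to-bottom image order.
    disp.resize((size_t)width * height);
    bool ok = true;
    for (int y = height - 1; y >= 0 && ok; y--)
        ok = fread(&disp[(size_t)y * width], sizeof(float), width, f)
             == (size_t)width;
    fclose(f);
    return ok;
}
```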
Code
Our evaluation SDK consists of C++ code and shell scripts. It is currently only available for Unix/Linux (but also works under Windows using Cygwin). It allows (1) running algorithms on all datasets; (2) evaluating the results for the training set; and (3) creating a zip archive of all results for submission to the online table. If you prefer to create the zip archive manually or with your own scripts, here is a description of the expected upload format. To use the SDK with your stereo algorithm, you must produce floating-point disparities in PFM format. The SDK includes sample C++ code for saving in PFM format, and it also contains a sample stereo algorithm, Libelas, modified to write PFM disparities.
Download the SDK here: MiddEval3-SDK-1.6.zip | README.txt | CHANGES.txt
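For illustration, here is a minimal sketch of such a PFM writer (writePFM is a hypothetical helper, not the SDK's actual sample code; a little-endian host is assumed):

```cpp
// Illustrative PFM writer sketch (writePFM is a hypothetical helper, not the
// SDK's sample code). Writes a single-channel float disparity map; the
// negative scale factor in the header signals little-endian data, so a
// little-endian host is assumed.
#include <cstdio>
#include <vector>

bool writePFM(const char *path, const std::vector<float> &disp,
              int width, int height)
{
    FILE *f = fopen(path, "wb");
    if (!f) return false;

    // "Pf" = grayscale PFM; dimensions; scale -1.0 = little-endian.
    fprintf(f, "Pf\n%d %d\n-1.0\n", width, height);

    // PFM stores rows bottom-to-top, so write the last image row first.
    bool ok = true;
    for (int y = height - 1; y >= 0 && ok; y--)
        ok = fwrite(&disp[(size_t)y * width], sizeof(float), width, f)
             == (size_t)width;
    fclose(f);
    return ok;
}
```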
We also provide cvkit, a collection of visualization and conversion tools designed to work with PFM files and calibration files in our directory structure, in particular the fast image viewer sv and the 3D viewer plyv. These are highly valuable debugging tools and are provided for Linux and Windows. If your algorithm already saves disparity maps in a format such as 16-bit PGM/PNG/TIFF or floating-point TIFF, you can also use cvkit's imgcmd program to convert them to PFM directly (see the SDK documentation for details).
Eric Psota has contributed MatlabSDK-v2.zip (updated March 2017), a small collection of Matlab scripts for interacting with the MiddEval3 files. The scripts include a simple stereo matcher, as well as a PFM reader/writer.
Submit
After adapting the SDK to run your algorithm (or creating the zip file manually), you can upload your results. Results on the training set may be uploaded and evaluated as many times as you want; as in the previous version of the stereo evaluation, a temporary table shows how they compare to all other submitted results. Once you have a final set of results, you can upload results on both training and test sets and request publication of both. To prevent fitting to the test data, we allow this only once per method, and you will not be able to see the test results until after they are published. Please upload only "final" results (either published or about to be submitted to a conference or journal). Note that we allow only one result per publication in the table; if you have multiple algorithm variants, you may evaluate them on the training set, but you should select and upload only one result for publication.
Upload your results