HELVIPAD: A Real-World Dataset for Omnidirectional Stereo Depth Estimation
The Helvipad dataset is a real-world stereo dataset designed for omnidirectional depth estimation. It comprises 39,553 paired equirectangular images captured using a top-bottom 360° camera setup and corresponding pixel-wise depth and disparity labels derived from LiDAR point clouds. The dataset spans diverse indoor and outdoor scenes under varying lighting conditions, including night-time environments.
News
- [⚠️ Important Update — 21/09/2025] A minor error was identified in the Helvipad depth-to-disparity conversion formula. We have now regenerated and uploaded the correct disparity maps to the repo.
- The previous disparity maps are still included, since they were used in the experiments reported in the Helvipad paper and the DFI-OmniStereo paper.
- For new work, we recommend using the corrected disparity maps.
- [16/02/2025] Helvipad has been accepted to CVPR 2025! 🎉
Dataset Structure
The dataset is organized into training, validation and testing subsets with the following structure:
helvipad/
├── train/
│   ├── depth_maps                          # Depth maps generated from LiDAR data
│   ├── depth_maps_augmented                # Augmented depth maps using depth completion
│   ├── disparity_maps_corrected            # Corrected disparity maps computed from depth maps
│   ├── disparity_maps                      # /!\ Legacy version of disparity maps computed from depth maps
│   ├── disparity_maps_augmented_corrected  # Corrected augmented disparity maps using depth completion
│   ├── disparity_maps_augmented            # /!\ Augmented disparity maps using depth completion
│   ├── images_top                          # Top-camera RGB images
│   ├── images_bottom                       # Bottom-camera RGB images
│   └── LiDAR_pcd                           # Original LiDAR point cloud data
├── val/
│   ├── depth_maps                          # Depth maps generated from LiDAR data
│   ├── depth_maps_augmented                # Augmented depth maps using depth completion
│   ├── disparity_maps_corrected            # Corrected disparity maps computed from depth maps
│   ├── disparity_maps                      # /!\ Legacy version of disparity maps computed from depth maps
│   ├── disparity_maps_augmented_corrected  # Corrected augmented disparity maps using depth completion
│   ├── disparity_maps_augmented            # /!\ Augmented disparity maps using depth completion
│   ├── images_top                          # Top-camera RGB images
│   ├── images_bottom                       # Bottom-camera RGB images
│   └── LiDAR_pcd                           # Original LiDAR point cloud data
└── test/
    ├── depth_maps                          # Depth maps generated from LiDAR data
    ├── depth_maps_augmented                # Augmented depth maps using depth completion
    ├── disparity_maps_corrected            # Corrected disparity maps computed from depth maps
    ├── disparity_maps                      # /!\ Legacy version of disparity maps computed from depth maps
    ├── disparity_maps_augmented_corrected  # Corrected augmented disparity maps using depth completion
    ├── disparity_maps_augmented            # /!\ Augmented disparity maps using depth completion
    ├── images_top                          # Top-camera RGB images
    ├── images_bottom                       # Bottom-camera RGB images
    └── LiDAR_pcd                           # Original LiDAR point cloud data
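Frames across the parallel folders of a split can be paired by filename. The helper below is a hypothetical sketch: it assumes corresponding top/bottom images and depth maps share a file stem (the exact naming convention is defined by the repository itself):

```python
from pathlib import Path


def pair_frames(images_top, images_bottom, depth_maps):
    """Pair top/bottom RGB frames with their depth maps by shared file stem.

    Hypothetical helper: assumes corresponding files share a stem, e.g.
    train/images_top/000123.png <-> train/images_bottom/000123.png
    <-> train/depth_maps/000123.png (verify against the actual repo layout).
    """
    def by_stem(paths):
        return {Path(p).stem: p for p in paths}

    top, bottom, depth = by_stem(images_top), by_stem(images_bottom), by_stem(depth_maps)
    # Keep only stems present in all three folders, in sorted frame order.
    common = sorted(top.keys() & bottom.keys() & depth.keys())
    return [(top[s], bottom[s], depth[s]) for s in common]
```

In practice you would build the three path lists with `Path("helvipad/train/images_top").iterdir()` and so on, then iterate over the returned triples.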
The dataset repository also includes:
- helvipad_utils.py: utility functions for reading depth and disparity maps, converting disparity to depth, and handling disparity values in pixels and degrees;
- calibration.json: intrinsic and extrinsic calibration parameters for the stereo cameras and the LiDAR sensor.
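For intuition, angular disparity in a top-bottom rig can be related to depth via the law of sines on the triangle formed by the two cameras and the scene point. The sketch below is a textbook derivation under simplified assumptions (the baseline value and angle conventions are illustrative), not the repository's exact routine; use helvipad_utils.py for the official, corrected conversion:

```python
import math


def disparity_to_depth(theta_top_deg, disparity_deg, baseline_m):
    """Convert angular disparity (degrees) to radial depth (metres) for a
    vertical (top-bottom) stereo pair, via the law of sines.

    Sketch under textbook assumptions, not Helvipad's official formula:
    theta_top_deg is the polar angle of the pixel ray in the top image and
    disparity_deg = theta_bottom - theta_top. Depth is measured from the
    top camera.
    """
    theta_bottom = math.radians(theta_top_deg + disparity_deg)
    d = math.radians(disparity_deg)
    # Law of sines: r_top / sin(pi - theta_bottom) = baseline / sin(d)
    return baseline_m * math.sin(theta_bottom) / math.sin(d)
```

As expected, depth decreases as disparity grows, and small disparities map to large depths, which is why small conversion errors matter most for distant points.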
Benchmark
We evaluate the performance of multiple state-of-the-art and popular stereo matching methods, both for standard and 360° images. All models are trained on a single NVIDIA A100 GPU with the largest possible batch size to ensure comparable use of computational resources.
| Method | Stereo Setting | Disp-MAE (°) | Disp-RMSE (°) | Disp-MARE | Depth-MAE (m) | Depth-RMSE (m) | Depth-MARE | Depth-LRCE (m) |
|---|---|---|---|---|---|---|---|---|
| PSMNet | conventional | 0.286 | 0.496 | 0.248 | 2.509 | 5.673 | 0.176 | 1.809 |
| 360SD-Net | omnidirectional | 0.224 | 0.419 | 0.191 | 2.122 | 5.077 | 0.152 | 0.904 |
| IGEV-Stereo | conventional | 0.225 | 0.423 | 0.172 | 1.860 | 4.447 | 0.146 | 1.203 |
| 360-IGEV-Stereo | omnidirectional | 0.188 | 0.404 | 0.146 | 1.720 | 4.297 | 0.130 | 0.388 |
Note: These benchmark results were obtained using the legacy disparity maps (the ones originally released and used in the Helvipad and DFI-OmniStereo papers). Updated corrected disparity maps are now available in the dataset repo; results may vary slightly when retraining with the corrected maps.
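The MAE, RMSE, and MARE columns above follow their standard definitions. The snippet below is a minimal sketch of those three metrics over valid pixels, not the authors' exact evaluation code (which may differ in masking and averaging details):

```python
def depth_metrics(pred, gt):
    """Standard error metrics over paired predictions and ground truth.

    pred, gt: equal-length sequences of depths (or disparities) restricted
    to pixels with valid ground truth. Returns MAE, RMSE, and MARE
    (mean absolute relative error).
    """
    n = len(gt)
    abs_err = [abs(p - g) for p, g in zip(pred, gt)]
    mae = sum(abs_err) / n
    rmse = (sum(e * e for e in abs_err) / n) ** 0.5
    mare = sum(e / g for e, g in zip(abs_err, gt)) / n
    return {"MAE": mae, "RMSE": rmse, "MARE": mare}
```

Note that MARE divides by the ground-truth value, so it down-weights errors on distant points relative to MAE, which is why the two columns can rank methods differently.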
Project Page
For more information, visualizations, and updates, visit the project page: https://vita-epfl.github.io/Helvipad/
License
This dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.
Acknowledgments
This work was supported by the EPFL Center for Imaging through a Collaborative Imaging Grant. We thank the VITA lab members for their valuable feedback, which helped to enhance the quality of this manuscript. We also express our gratitude to Dr. Simone Schaub-Meyer and Oliver Hahn for their insightful advice during the project's final stages.
Citation
If you use the Helvipad dataset in your research, please cite our paper:
@inproceedings{zayene2025helvipad,
author = {Zayene, Mehdi and Endres, Jannik and Havolli, Albias and Corbière, Charles and Cherkaoui, Salim and Ben Ahmed Kontouli, Alexandre and Alahi, Alexandre},
title = {Helvipad: A Real-World Dataset for Omnidirectional Stereo Depth Estimation},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2025}
}