Latest update: July 23, 2025
🤖 = latest additions
Note: drop-down filters only work when the sheet is opened in Google Spreadsheets.
Number of entries: 15
Physical Objects and Artifacts

Each entry lists: Name, Image, Description, Data, Data Types, Camera Views, Robot Hardware, Relevant Applications, Relevant Tasks, Corresponding Physical Objects (see repository linked above), Number of Objects, Link(s), License, Citation, and Year (Initial Release).

🤖 2BY2 Dataset
Description: 2BY2 is a large-scale annotated dataset for daily pairwise object assembly, covering 18 fine-grained tasks that reflect real-life scenarios, such as plugging into sockets, arranging flowers in vases, and inserting bread into toasters. The 2BY2 dataset includes 1,034 instances and 517 pairwise objects with pose and symmetry annotations, requiring approaches that align geometric shapes while accounting for functional and spatial relationships between objects.
Data: Sim
Data Types: 3D object model meshes, Point clouds, Metadata
Camera Views: N/A
Robot Hardware: None
Relevant Applications: Assistive Robotics, Commercial/Retail, Service/Domestic
Relevant Tasks: Assembly
Number of Objects: 517
Link(s): https://tea-lab.github.io/TwoByTwo/
  https://github.com/TEA-Lab/TwoByTWo
License: MIT
Citation: Qi, Yu, Yuanchen Ju, Tianming Wei, Chi Chu, Lawson L. S. Wong, and Huazhe Xu. "Two by two: Learning multi-task pairwise objects assembly for generalizable robot manipulation." In Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 17383-17393. 2025.
Year (Initial Release): 2025

Evolved Grasping Analysis Dataset (EGAD)
Description: EGAD is a dataset of over 2,000 geometrically diverse object meshes created specifically with robotic grasping and manipulation in mind. We used evolutionary algorithms to create a set of objects which uniformly spans the object space from simple to complex and from easy to difficult to grasp, with a focus on geometric diversity. The objects are all easily 3D-printable, making 1:1 sim-to-real transfer possible. We specify an evaluation set of 49 diverse objects with a gradient of complexity and difficulty which can be used to evaluate robotic grasping systems in the real world.
Data: Sim
Data Types: 3D object model meshes
Camera Views: N/A
Robot Hardware: None
Relevant Applications: Application Agnostic
Relevant Tasks: Task Agnostic
Number of Objects: 2,331
Link(s): https://dougsm.github.io/egad/
License: BSD 3-Clause
Citation: Morrison, Douglas, Peter Corke, and Jürgen Leitner. "EGAD! An evolved grasping analysis dataset for diversity and reproducibility in robotic manipulation." IEEE Robotics and Automation Letters 5, no. 3 (2020): 4368-4375.
Year (Initial Release): 2020
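
Because the meshes are intended for 3D printing, a quick check of printability-related properties is often the first step. A minimal sketch using trimesh, assuming the meshes are in a format trimesh can read (OBJ/STL both work); the file name is a placeholder, not an actual EGAD object ID:

```python
import trimesh

# Placeholder file name; substitute a mesh from the EGAD download.
mesh = trimesh.load("egad_eval_set/A0.obj", force="mesh")

# Properties that matter for 3D printing and sim-to-real transfer.
print("watertight:", mesh.is_watertight)
print("extents (model units):", mesh.extents)
print("volume:", mesh.volume if mesh.is_watertight else "n/a (not watertight)")
```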

Functional Manipulation Benchmark (FMB)
Description: Our dataset consists of objects of diverse appearance and geometry. It requires multi-stage and multi-modal fine motor skills to successfully assemble the pegs onto an unfixed board in a randomized scene. We collected a total of 22,550 trajectories across two different tasks on a Franka Panda arm. We record the trajectories from 2 global views and 2 wrist views; each view contains both RGB and depth maps. Two datasets are included: the Single-Object Multi-Stage Manipulation Task Full Dataset and the Multi-Object Multi-Stage Manipulation Task with Assembly 1, 2, and 3.
Data: Real
Data Types: RGB images, Depth images
Camera Views: External, Wrist
Robot Hardware: Franka Emika Panda
Relevant Applications: Manufacturing
Relevant Tasks: Assembly
Corresponding Physical Objects: Functional Manipulation Benchmark (FMB)
Number of Objects: 54
Link(s): https://functional-manipulation-benchmark.github.io/dataset/index.html
License: CC BY 4.0
Citation: Luo, Jianlan, Charles Xu, Fangchen Liu, Liam Tan, Zipeng Lin, Jeffrey Wu, Pieter Abbeel, and Sergey Levine. "FMB: A functional manipulation benchmark for generalizable robotic learning." The International Journal of Robotics Research (2023): 02783649241276017.
Year (Initial Release): 2023

Household Cloth Model Set
Description: The object set consists of a collection of household cloth objects that can be found in any house. Objects from all categories (kitchen, dining room, bedroom, bathroom) are included. The set spans a wide variety of sizes and yarns, which allows studying many different types of manipulation skills and several cloth manipulation tasks. Repeated objects, and different textiles of the same size, are included for piling.
Data: Real
Data Types: RGB images, RGB-D images, 3D object model meshes, Microscopic images
Camera Views: External
Robot Hardware: None
Relevant Applications: Assistive Robotics, Commercial/Retail, Service/Domestic
Relevant Tasks: Cloth Folding, Deformable Object Manipulation
Corresponding Physical Objects: Household Cloth Object Set
Number of Objects: 15
Link(s): https://www.iri.upc.edu/groups/perception/ClothObjectSet/
Citation: Garcia-Camacho, Irene, Júlia Borràs, Berk Calli, Adam Norton, and Guillem Alenyà. "Household cloth object set: Fostering benchmarking in deformable object manipulation." IEEE Robotics and Automation Letters 7, no. 3 (2022): 5866-5873.
Year (Initial Release): 2022

KIT Object Models
Description: The system allows 2D image and 3D geometric data of everyday objects to be obtained semi-automatically. The provided calibration allows 2D data to be related to 3D data. Through the use of high-quality sensors, high-accuracy data is achieved. So far, over 100 objects have been digitized using this system, and the data has been successfully used in several international research projects. All of the models are freely available on the web via a front-end that allows preview and filtering of the data.
Data: Sim
Data Types: RGB images, 3D object model meshes, Calibration info
Camera Views: External
Robot Hardware: None
Relevant Applications: Assistive Robotics, Commercial/Retail, Service/Domestic
Relevant Tasks: Pick-and-Place, Grasping, Picking in Clutter, Bin Picking, Shelf Picking
Number of Objects: 145
Link(s): https://archive.iar.kit.edu/Projects/ObjectModelsWebUI/
Citation: Kasper, Alexander, Zhixing Xue, and Rüdiger Dillmann. "The KIT object models database: An object model database for object recognition, localization and manipulation in service robotics." The International Journal of Robotics Research 31, no. 8 (2012): 927-934.
Year (Initial Release): 2012
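
Relating the 2D images to the 3D geometry comes down to a standard pinhole projection with the provided calibration. A minimal sketch of that relationship, using made-up intrinsics and extrinsics rather than values from the actual KIT calibration files:

```python
import numpy as np

# Hypothetical calibration (NOT from the KIT files): intrinsics K and a
# world-to-camera transform [R | t]. Real values ship with the dataset.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                      # camera aligned with world axes
t = np.array([0.0, 0.0, 0.5])      # object 0.5 m in front of the camera

def project(points_3d):
    """Project Nx3 world points into pixel coordinates."""
    cam = points_3d @ R.T + t      # world frame -> camera frame
    uvw = cam @ K.T                # camera frame -> homogeneous image coords
    return uvw[:, :2] / uvw[:, 2:3]

# Example: project the corners of a 10 cm cube centered on the optical axis.
corners = np.array([[x, y, z] for x in (-0.05, 0.05)
                               for y in (-0.05, 0.05)
                               for z in (-0.05, 0.05)])
print(project(corners))
```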

ModelNet
Description: The goal of the Princeton ModelNet project is to provide researchers in computer vision, computer graphics, robotics, and cognitive science with a comprehensive, clean collection of 3D CAD models of objects. To build the core of the dataset, we compiled a list of the most common object categories in the world, using statistics obtained from the SUN database. Once we established a vocabulary for objects, we collected 3D CAD models belonging to each object category using online search engines by querying for each object category term.
Data: Sim
Data Types: 3D object model meshes, Orientation
Camera Views: N/A
Robot Hardware: None
Relevant Applications: Assistive Robotics, Commercial/Retail, Service/Domestic
Relevant Tasks: Task Agnostic
Number of Objects: 127,915
Link(s): https://modelnet.cs.princeton.edu/
Citation: Wu, Zhirong, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. "3D ShapeNets: A deep representation for volumetric shapes." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1912-1920. 2015.
Year (Initial Release): 2015
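
The common ModelNet subsets (ModelNet10/ModelNet40) distribute meshes as OFF files organized by category and train/test split. A minimal trimesh sketch for inspecting one; the directory layout and file name are assumptions for illustration, so substitute a real file from the archive you fetch:

```python
import trimesh

# Hypothetical path following the category/split layout of the downloads.
mesh = trimesh.load("ModelNet40/chair/train/chair_0001.off")

print(mesh.vertices.shape, mesh.faces.shape)   # raw geometry
print(mesh.bounds)                             # axis-aligned bounding box
mesh.apply_scale(1.0 / mesh.scale)             # normalize to roughly unit size
```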

NIST Manufacturing Objects and Assemblies Dataset (MOAD)
Description: This release (v1) includes data from cameras positioned at five different angles, captured in 5° rotational increments, for NIST-ATB 1, 2, 3, and 4 as well as a competition practice board. There are individual scans of each subcomponent for each board as well as holistic scans of each board in a completed and an empty state: high-resolution RGB images (collected via Canon Rebel S3 DSLR cameras); colored point clouds, transformed so all point clouds for an object align (collected via RealSense D455 cameras); and metadata for each scan (object, date, lux, camera focal length, depth camera transform matrices).
Data: Real
Data Types: RGB images, Point clouds, Calibration info, Metadata
Camera Views: External
Robot Hardware: None
Relevant Applications: Manufacturing
Relevant Tasks: Assembly, Deformable Object Manipulation
Corresponding Physical Objects: NIST Assembly Task Boards (ATB)
Number of Objects: 98
Link(s): https://www.robot-manipulation.org/nist-moad
Year (Initial Release): 2023
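
Since the scans are distributed as colored point clouds, an Open3D sketch like the following is enough to inspect one. The file name and PLY extension are assumptions, not taken from the MOAD release notes:

```python
import open3d as o3d

# Hypothetical file name; use an actual scan from the MOAD download.
pcd = o3d.io.read_point_cloud("atb1_gear_large.ply")

print(pcd)                                  # point count summary
print(pcd.get_axis_aligned_bounding_box())  # spatial extent of the scan

# Downsample before heavier processing (registration, meshing, etc.).
down = pcd.voxel_down_sample(voxel_size=0.002)
o3d.visualization.draw_geometries([down])
```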

Objaverse 1.0
Description: Objaverse is a massive dataset with 800K+ annotated 3D objects.
Data: Sim
Data Types: 3D object model meshes
Camera Views: N/A
Robot Hardware: None
Relevant Applications: Application Agnostic
Relevant Tasks: Task Agnostic
Number of Objects: 800,000
Link(s): https://colab.research.google.com/drive/1ZLA4QufsiI_RuNlamKqV7D7mn40FbWoY
  https://huggingface.co/datasets/allenai/objaverse
  https://objaverse.allenai.org/objaverse-1.0
License: ODC-By v1.0, CC BY 4.0, CC BY-NC 4.0, CC BY-NC-SA 4.0, CC BY-SA 4.0, CC0 1.0
Citation: Deitke, Matt, Dustin Schwenk, Jordi Salvador, Luca Weihs, Oscar Michel, Eli VanderBilt, Ludwig Schmidt, Kiana Ehsani, Aniruddha Kembhavi, and Ali Farhadi. "Objaverse: A universe of annotated 3D objects." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13142-13153. 2023.
Year (Initial Release): 2023
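
The Colab and Hugging Face links above walk through programmatic access; a minimal sketch of the same idea, assuming the `objaverse` pip package and its `load_uids`/`load_annotations`/`load_objects` helpers:

```python
import objaverse  # pip install objaverse

# All ~800K object UIDs, then annotations and GLB downloads for a handful.
uids = objaverse.load_uids()
annotations = objaverse.load_annotations(uids[:5])
objects = objaverse.load_objects(uids=uids[:5], download_processes=1)

for uid, path in objects.items():
    # `path` points at the downloaded .glb file for that object.
    print(uid, annotations[uid].get("name"), path)
```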

Objaverse-XL
Description: Objaverse-XL is an open dataset of over 10 million 3D objects!
Data: Sim
Data Types: 3D object model meshes
Camera Views: N/A
Robot Hardware: None
Relevant Applications: Application Agnostic
Relevant Tasks: Task Agnostic
Number of Objects: 10,000,000
Link(s): https://github.com/allenai/objaverse-xl
  https://huggingface.co/datasets/allenai/objaverse-xl
  https://objaverse.allenai.org/
License: Apache 2.0, ODC-By v1.0
Citation: Deitke, Matt, Ruoshi Liu, Matthew Wallingford, Huong Ngo, Oscar Michel, Aditya Kusupati, Alan Fan et al. "Objaverse-XL: A universe of 10M+ 3D objects." Advances in Neural Information Processing Systems 36 (2023): 35799-35813.
Year (Initial Release): 2023
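
Download access is documented in the GitHub repository linked above. A sketch assuming the `objaverse.xl` module and its `get_annotations`/`download_objects` helpers (which return and accept a pandas DataFrame of object metadata); the sample size here is arbitrary:

```python
import objaverse.xl as oxl  # pip install objaverse

# Metadata for every indexed object (source, file type, SHA256, ...).
annotations = oxl.get_annotations(download_dir="~/.objaverse")
print(len(annotations), "objects indexed")

# Download a small random sample rather than all 10M+ objects.
sample = annotations.sample(20, random_state=0)
local_paths = oxl.download_objects(objects=sample)
print(local_paths)
```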

PartNet
Description: PartNet is a consistent, large-scale dataset of 3D objects annotated with fine-grained, instance-level, and hierarchical 3D part information. Our dataset consists of 573,585 part instances over 26,671 3D models covering 24 object categories. This dataset enables and serves as a catalyst for many tasks such as shape analysis, dynamic 3D scene modeling and simulation, affordance analysis, and others.
Data: Sim
Data Types: 3D object model meshes, Semantic segmentation, Instance segmentation
Camera Views: N/A
Robot Hardware: None
Relevant Applications: Assistive Robotics, Commercial/Retail, Service/Domestic
Relevant Tasks: Task Agnostic
Number of Objects: 26,671
Link(s): https://partnet.cs.stanford.edu/
License: MIT
Citation: Mo, Kaichun, Shilin Zhu, Angel X. Chang, Li Yi, Subarna Tripathi, Leonidas J. Guibas, and Hao Su. "PartNet: A large-scale benchmark for fine-grained and hierarchical part-level 3D object understanding." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 909-918. 2019.
Year (Initial Release): 2019
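
The hierarchical part annotations are essentially a tree of named parts per model. The sketch below walks such a tree; the per-model file name (`result.json`) and the `name`/`children` keys reflect how some PartNet releases structure their annotation files, but treat those schema details as assumptions and check the download's documentation:

```python
import json

def walk(node, depth=0):
    """Recursively print a part hierarchy: the name at each level, then children."""
    print("  " * depth + node.get("name", "<unnamed>"))
    for child in node.get("children", []):
        walk(child, depth + 1)

# Hypothetical per-model annotation file; one hierarchy is stored per model.
with open("data_v0/12345/result.json") as f:
    roots = json.load(f)          # typically a list containing the root node

for root in roots:
    walk(root)
```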

Princeton Shape Benchmark
Description: The Princeton Shape Benchmark provides a repository of 3D models and software tools for evaluating shape-based retrieval and analysis algorithms. The motivation is to promote the use of standardized data sets and evaluation methods for research in matching, classification, clustering, and recognition of 3D models. Researchers are encouraged to use these resources to produce comparisons of competing algorithms in future publications. There are 1,814 models in total.
Data: Sim
Data Types: 3D object model meshes
Camera Views: N/A
Robot Hardware: None
Relevant Applications: Application Agnostic
Relevant Tasks: Task Agnostic
Number of Objects: 1,814
Link(s): https://shape.cs.princeton.edu/benchmark/
Citation: Shilane, Philip, Patrick Min, Michael Kazhdan, and Thomas Funkhouser. "The Princeton Shape Benchmark." In Proceedings of Shape Modeling Applications, 2004, pp. 167-178. IEEE, 2004.
Year (Initial Release): 2004

Scanned Objects by Google Research
Description: We present Google Scanned Objects, an open-source collection of over one thousand 3D-scanned household items released under a Creative Commons license; these models are preprocessed for use in Ignition Gazebo and the Bullet simulation platforms, but are easily adaptable to other simulators.
Data: Real
Data Types: 3D object model meshes, Metadata
Camera Views: N/A
Robot Hardware: None
Relevant Applications: Application Agnostic
Relevant Tasks: Task Agnostic
Number of Objects: 1,030
Link(s): https://research.google/blog/scanned-objects-by-google-research-a-dataset-of-3d-scanned-common-household-items/
  https://goo.gle/scanned-objects
License: CC BY 4.0
Citation: Downs, Laura, Anthony Francis, Nate Koenig, Brandon Kinman, Ryan Hickman, Krista Reymann, Thomas B. McHugh, and Vincent Vanhoucke. "Google Scanned Objects: A high-quality dataset of 3D scanned household items." In 2022 International Conference on Robotics and Automation (ICRA), pp. 2553-2560. IEEE, 2022.
Year (Initial Release): 2022
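
Because the models ship as textured meshes ready for Bullet, loading one into PyBullet only takes a collision shape and a visual shape. A sketch of that; the object name and mesh path are placeholders for wherever you extract a scanned object, not paths defined by the dataset:

```python
import pybullet as p

p.connect(p.DIRECT)  # headless; use p.GUI to see the object

# Placeholder path to a scanned object's mesh extracted from the download.
mesh_path = "google_scanned_objects/example_object/meshes/model.obj"

collision = p.createCollisionShape(p.GEOM_MESH, fileName=mesh_path)
visual = p.createVisualShape(p.GEOM_MESH, fileName=mesh_path)
body = p.createMultiBody(baseMass=0.2,
                         baseCollisionShapeIndex=collision,
                         baseVisualShapeIndex=visual,
                         basePosition=[0, 0, 0.1])

p.setGravity(0, 0, -9.81)
for _ in range(240):          # simulate one second at the default 240 Hz step
    p.stepSimulation()
print(p.getBasePositionAndOrientation(body))
```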

ShapeNetSem
Description: ShapeNetSem is a subset of ShapeNet richly annotated with physical attributes, which we release for the benefit of the research community. Several pieces of model data are available: OBJ-format 3D mesh files with accompanying MTL files; texture files used by the above 3D mesh representations; COLLADA (DAE) format 3D mesh files; binary voxelizations of model surfaces in binvox format; filled-in binary voxelizations of models in binvox format; and pre-rendered screenshots of each model from 6 canonical orientations (front, back, left, right, bottom, top) and another 6 "turntable" positions around the model.
Data: Sim
Data Types: RGB images, 3D object model meshes, Metadata
Camera Views: N/A, External
Robot Hardware: None
Relevant Applications: Application Agnostic
Relevant Tasks: Task Agnostic
Number of Objects: 220,000
Link(s): https://dagshub.com/Rutam21/ShapeNetSem-Dataset_of_3D_Shapes
Citation: Savva, Manolis, Angel X. Chang, and Pat Hanrahan. "Semantically-enriched 3D models for common-sense knowledge." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 24-31. 2015.
Year (Initial Release): 2015
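
A sketch of working with the OBJ meshes using trimesh; the model file name is hypothetical, and the occupancy grid here is computed on the fly rather than read from the shipped binvox files:

```python
import trimesh

# Hypothetical model file; ShapeNetSem meshes come as OBJ with MTL/texture files.
mesh = trimesh.load("models/example_model.obj", force="mesh")

print(mesh.vertices.shape, mesh.faces.shape)
print("watertight:", mesh.is_watertight)

# Coarse occupancy grid, comparable in spirit to the shipped binvox voxelizations.
voxels = mesh.voxelized(pitch=mesh.extents.max() / 32)
print("occupied voxels:", voxels.filled_count)
```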

YCB Model Set
Description: The YCB Object and Model Set is designed to facilitate benchmarking in robotic manipulation. The set consists of objects of daily life with different shapes, sizes, textures, weights, and rigidities, as well as some widely used manipulation tests. For each object, we provide: 600 RGB-D images; 600 high-resolution RGB images; segmentation masks for each image; calibration information for each image; and texture-mapped 3D mesh models.
Data: Real
Data Types: RGB images, RGB-D images, Segmentation masks, 3D object model meshes, Calibration info
Camera Views: External
Robot Hardware: None
Relevant Applications: Assistive Robotics, Commercial/Retail, Service/Domestic
Relevant Tasks: Pick-and-Place, Grasping, Picking in Clutter, Bin Picking, Shelf Picking
Corresponding Physical Objects: YCB Object Set
Number of Objects: 77
Link(s): https://www.ycbbenchmarks.com/object-models/
Citation: Calli, Berk, Arjun Singh, Aaron Walsman, Siddhartha Srinivasa, Pieter Abbeel, and Aaron M. Dollar. "The YCB object and model set: Towards common benchmarks for manipulation research." In 2015 International Conference on Advanced Robotics (ICAR), pp. 510-517. IEEE, 2015.
Year (Initial Release): 2015
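
The texture-mapped meshes load directly in trimesh. A sketch assuming the common layout of the extracted YCB downloads (object folder plus a `google_16k/textured.obj` mesh); verify the paths against the files you actually fetch:

```python
import trimesh

# Assumed layout of an extracted YCB object download.
loaded = trimesh.load("ycb/002_master_chef_can/google_16k/textured.obj")

# OBJ files with materials may load as a Scene; flatten to a single mesh.
mesh = (loaded.dump(concatenate=True)
        if isinstance(loaded, trimesh.Scene) else loaded)

print(mesh.vertices.shape, mesh.faces.shape)
print("extents:", mesh.extents)   # YCB meshes are typically in meters
```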

YCB-Video Dataset
Description: We contribute a large-scale video dataset for 6D object pose estimation, named the YCB-Video dataset. Our dataset provides accurate 6D poses of 21 objects from the YCB dataset observed in 92 videos with 133,827 frames.
Data: Real
Data Types: RGB images, 6D poses
Camera Views: External
Robot Hardware: None
Relevant Applications: Assistive Robotics, Commercial/Retail, Service/Domestic
Relevant Tasks: Pick-and-Place, Grasping, Picking in Clutter, Bin Picking, Shelf Picking
Corresponding Physical Objects: YCB Object Set
Number of Objects: 21
Link(s): https://rse-lab.cs.washington.edu/projects/posecnn/
Citation: Xiang, Yu, Tanner Schmidt, Venkatraman Narayanan, and Dieter Fox. "PoseCNN: A convolutional neural network for 6D object pose estimation in cluttered scenes." arXiv preprint arXiv:1711.00199 (2017).
Year (Initial Release): 2017
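
Each annotated 6D pose is a rotation plus translation that takes object-frame points into the camera frame, after which the camera intrinsics project them into the image. A minimal numpy sketch of that use of a pose; the rotation, translation, intrinsics, and model points below are made up for illustration, not read from a YCB-Video annotation file:

```python
import numpy as np

# Made-up 6D pose (rotation R, translation t) and pinhole intrinsics K.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])          # 90 deg rotation about the camera z-axis
t = np.array([0.05, -0.02, 0.60])          # object 0.6 m in front of the camera
K = np.array([[600.0, 0.0, 320.0],         # hypothetical 640x480 intrinsics
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])

# A few object-frame model points (e.g., sampled from a YCB mesh).
model_points = np.array([[0.00, 0.00, 0.00],
                         [0.05, 0.00, 0.00],
                         [0.00, 0.05, 0.10]])

cam_points = model_points @ R.T + t        # object frame -> camera frame
pixels = cam_points @ K.T
pixels = pixels[:, :2] / pixels[:, 2:3]    # perspective divide -> pixel coords
print(pixels)
```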