1 | For all data release details, models, Colabs, etc., please check out our GitHub repo
2 | To filter the datasets based on attributes of your choice: | |||||||||||||||||||||||||||||||
3 | (1) select the columns you want to filter by (select row 15 and below) | |||||||||||||||||||||||||||||||
4 | (2) select "Data" --> "Filter views" --> "Create New Filter View" (you can also use one of the views we prepared) | |||||||||||||||||||||||||||||||
5 | (3) click the little filter symbol next to the column you want to filter by and choose your filtering condition (you can check the example filter views if you're unsure) | |||||||||||||||||||||||||||||||
6 | (4) once you have selected your filters and are happy with the remaining datasets, copy the list of dataset names below (it auto-updates to reflect the remaining datasets) and paste it into the code from the Dataset Colab (see here); a minimal loading sketch is also included below the dataset list
7 | (5) for convenience, we also provide a corresponding list of citations that you can copy directly into your bib file and LaTeX document to appropriately credit the datasets you used
8 | ||||||||||||||||||||||||||||||||
9 | # Total Episodes: | 2,419,193 | ||||||||||||||||||||||||||||||
10 | Current Download Size (GB): | 8,964.94
11 | Dataset Download List: | ['fractal20220817_data', 'kuka', 'bridge', 'taco_play', 'jaco_play', 'berkeley_cable_routing', 'roboturk', 'nyu_door_opening_surprising_effectiveness', 'viola', 'berkeley_autolab_ur5', 'toto', 'language_table', 'columbia_cairlab_pusht_real', 'stanford_kuka_multimodal_dataset_converted_externally_to_rlds', 'nyu_rot_dataset_converted_externally_to_rlds', 'stanford_hydra_dataset_converted_externally_to_rlds', 'austin_buds_dataset_converted_externally_to_rlds', 'nyu_franka_play_dataset_converted_externally_to_rlds', 'maniskill_dataset_converted_externally_to_rlds', 'furniture_bench_dataset_converted_externally_to_rlds', 'cmu_franka_exploration_dataset_converted_externally_to_rlds', 'ucsd_kitchen_dataset_converted_externally_to_rlds', 'ucsd_pick_and_place_dataset_converted_externally_to_rlds', 'austin_sailor_dataset_converted_externally_to_rlds', 'austin_sirius_dataset_converted_externally_to_rlds', 'bc_z', 'usc_cloth_sim_converted_externally_to_rlds', 'utokyo_pr2_opening_fridge_converted_externally_to_rlds', 'utokyo_pr2_tabletop_manipulation_converted_externally_to_rlds', 'utokyo_saytap_converted_externally_to_rlds', 'utokyo_xarm_pick_and_place_converted_externally_to_rlds', 'utokyo_xarm_bimanual_converted_externally_to_rlds', 'robo_net', 'berkeley_mvp_converted_externally_to_rlds', 'berkeley_rpt_converted_externally_to_rlds', 'kaist_nonprehensile_converted_externally_to_rlds', 'stanford_mask_vit_converted_externally_to_rlds', 'tokyo_u_lsmo_converted_externally_to_rlds', 'dlr_sara_pour_converted_externally_to_rlds', 'dlr_sara_grid_clamp_converted_externally_to_rlds', 'dlr_edan_shared_control_converted_externally_to_rlds', 'asu_table_top_converted_externally_to_rlds', 'stanford_robocook_converted_externally_to_rlds', 'eth_agent_affordances', 'imperialcollege_sawyer_wrist_cam', 'iamlab_cmu_pickup_insert_converted_externally_to_rlds', 'qut_dexterous_manipulation', 'uiuc_d3field', 'utaustin_mutex', 'berkeley_fanuc_manipulation', 'cmu_playing_with_food', 'cmu_play_fusion', 'cmu_stretch', 'berkeley_gnm_recon', 'berkeley_gnm_cory_hall', 'berkeley_gnm_sac_son', 'robot_vqa', 'droid', 'conq_hose_manipulation', 'dobbe', 'fmb', 'io_ai_tech', 'mimic_play', 'aloha_mobile', 'robo_set', 'tidybot', 'vima_converted_externally_to_rlds', 'spoc', 'plex_robosuite'] | ||||||||||||||||||||||||||||||
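The registered names above drop straight into RLDS/TFDS loading code. Below is a minimal loading sketch, assuming the datasets are hosted as versioned TFDS builders under gs://gresearch/robotics as in the Dataset Colab; the version string and exact paths vary per dataset, so treat the Colab as the canonical reference.

```python
# Minimal loading sketch (assumption: RLDS/TFDS builders hosted under
# gs://gresearch/robotics, as in the Dataset Colab; the '0.1.0' version
# string is a placeholder and varies per dataset).
import tensorflow_datasets as tfds

# Paste your filtered "Dataset Download List" here:
DATASET_NAMES = ['fractal20220817_data', 'bridge', 'taco_play']

for name in DATASET_NAMES:
    # Each dataset is stored in RLDS format: a dataset of episodes, where
    # each episode holds a nested 'steps' dataset of observations/actions.
    builder = tfds.builder_from_directory(
        builder_dir=f'gs://gresearch/robotics/{name}/0.1.0')
    episodes = builder.as_dataset(split='train')
    for episode in episodes.take(1):
        n_steps = episode['steps'].cardinality().numpy()
        print(f'{name}: first episode has {n_steps} steps')
```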
12 | Citation List (copy into bib file): | @article{brohan2022rt, title={Rt-1: Robotics transformer for real-world control at scale}, author={Brohan, Anthony and Brown, Noah and Carbajal, Justice and Chebotar, Yevgen and Dabis, Joseph and Finn, Chelsea and Gopalakrishnan, Keerthana and Hausman, Karol and Herzog, Alex and Hsu, Jasmine and others}, journal={arXiv preprint arXiv:2212.06817}, year={2022} } @article{kalashnikov2018qt, title={Qt-opt: Scalable deep reinforcement learning for vision-based robotic manipulation}, author={Kalashnikov, Dmitry and Irpan, Alex and Pastor, Peter and Ibarz, Julian and Herzog, Alexander and Jang, Eric and Quillen, Deirdre and Holly, Ethan and Kalakrishnan, Mrinal and Vanhoucke, Vincent and others}, journal={arXiv preprint arXiv:1806.10293}, year={2018} } @inproceedings{walke2023bridgedata, title={BridgeData V2: A Dataset for Robot Learning at Scale}, author={Walke, Homer and Black, Kevin and Lee, Abraham and Kim, Moo Jin and Du, Max and Zheng, Chongyi and Zhao, Tony and Hansen-Estruch, Philippe and Vuong, Quan and He, Andre and Myers, Vivek and Fang, Kuan and Finn, Chelsea and Levine, Sergey}, booktitle={Conference on Robot Learning (CoRL)}, year={2023} } @inproceedings{rosete2022tacorl, author = {Erick Rosete-Beas and Oier Mees and Gabriel Kalweit and Joschka Boedecker and Wolfram Burgard}, title = {Latent Plans for Task Agnostic Offline Reinforcement Learning}, journal = {Proceedings of the 6th Conference on Robot Learning (CoRL)}, year = {2022} } @inproceedings{mees23hulc2, title={Grounding Language with Visual Affordances over Unstructured Data}, author={Oier Mees and Jessica Borja-Diaz and Wolfram Burgard}, booktitle = {Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)}, year={2023}, address = {London, UK} } @software{dass2023jacoplay, author = {Dass, Shivin and Yapeter, Jullian and Zhang, Jesse and Zhang, Jiahui and Pertsch, Karl and Nikolaidis, Stefanos and Lim, Joseph J.}, title = {CLVR Jaco Play Dataset}, url = {https://github.com/clvrai/clvr_jaco_play_dataset}, version = {1.0.0}, year = {2023} } @article{luo2023multistage, author = {Jianlan Luo and Charles Xu and Xinyang Geng and Gilbert Feng and Kuan Fang and Liam Tan and Stefan Schaal and Sergey Levine}, title = {Multi-Stage Cable Routing through Hierarchical Imitation Learning}, journal = {arXiv pre-print}, year = {2023}, url = {https://arxiv.org/abs/2307.08927}, } @inproceedings{mandlekar2019scaling, title={Scaling robot supervision to hundreds of hours with roboturk: Robotic manipulation dataset through human reasoning and dexterity}, author={Mandlekar, Ajay and Booher, Jonathan and Spero, Max and Tung, Albert and Gupta, Anchit and Zhu, Yuke and Garg, Animesh and Savarese, Silvio and Fei-Fei, Li}, booktitle={2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, pages={1048--1055}, year={2019}, organization={IEEE} } @misc{pari2021surprising, title={The Surprising Effectiveness of Representation Learning for Visual Imitation}, author={Jyothish Pari and Nur Muhammad Shafiullah and Sridhar Pandian Arunachalam and Lerrel Pinto}, year={2021}, eprint={2112.01511}, archivePrefix={arXiv}, primaryClass={cs.RO} } @article{zhu2022viola, title={VIOLA: Imitation Learning for Vision-Based Manipulation with Object Proposal Priors}, author={Zhu, Yifeng and Joshi, Abhishek and Stone, Peter and Zhu, Yuke}, journal={6th Annual Conference on Robot Learning (CoRL)}, year={2022} } @misc{BerkeleyUR5Website, title = {Berkeley {UR5} Demonstration Dataset}, 
author = {Lawrence Yunliang Chen and Simeon Adebola and Ken Goldberg}, howpublished = {https://sites.google.com/view/berkeley-ur5/home}, } @inproceedings{zhou2023train, author={Zhou, Gaoyue and Dean, Victoria and Srirama, Mohan Kumar and Rajeswaran, Aravind and Pari, Jyothish and Hatch, Kyle and Jain, Aryan and Yu, Tianhe and Abbeel, Pieter and Pinto, Lerrel and Finn, Chelsea and Gupta, Abhinav}, booktitle={2023 IEEE International Conference on Robotics and Automation (ICRA)}, title={Train Offline, Test Online: A Real Robot Learning Benchmark}, year={2023}, } @article{lynch2023interactive, title={Interactive language: Talking to robots in real time}, author={Lynch, Corey and Wahid, Ayzaan and Tompson, Jonathan and Ding, Tianli and Betker, James and Baruch, Robert and Armstrong, Travis and Florence, Pete}, journal={IEEE Robotics and Automation Letters}, year={2023}, publisher={IEEE} } @inproceedings{chi2023diffusionpolicy, title={Diffusion Policy: Visuomotor Policy Learning via Action Diffusion}, author={Chi, Cheng and Feng, Siyuan and Du, Yilun and Xu, Zhenjia and Cousineau, Eric and Burchfiel, Benjamin and Song, Shuran}, booktitle={Proceedings of Robotics: Science and Systems (RSS)}, year={2023} } @inproceedings{lee2019icra, title={Making sense of vision and touch: Self-supervised learning of multimodal representations for contact-rich tasks}, author={Lee, Michelle A and Zhu, Yuke and Srinivasan, Krishnan and Shah, Parth and Savarese, Silvio and Fei-Fei, Li and Garg, Animesh and Bohg, Jeannette}, booktitle={2019 IEEE International Conference on Robotics and Automation (ICRA)}, year={2019}, url={https://arxiv.org/abs/1810.10191} } @inproceedings{haldar2023watch, title={Watch and match: Supercharging imitation with regularized optimal transport}, author={Haldar, Siddhant and Mathur, Vaibhav and Yarats, Denis and Pinto, Lerrel}, booktitle={Conference on Robot Learning}, pages={32--43}, year={2023}, organization={PMLR} } @article{belkhale2023hydra, title={HYDRA: Hybrid Robot Actions for Imitation Learning}, author={Belkhale, Suneel and Cui, Yuchen and Sadigh, Dorsa}, journal={arxiv}, year={2023} } @article{zhu2022bottom, title={Bottom-Up Skill Discovery From Unsegmented Demonstrations for Long-Horizon Robot Manipulation}, author={Zhu, Yifeng and Stone, Peter and Zhu, Yuke}, journal={IEEE Robotics and Automation Letters}, volume={7}, number={2}, pages={4126--4133}, year={2022}, publisher={IEEE} } @article{cui2022play, title = {From Play to Policy: Conditional Behavior Generation from Uncurated Robot Data}, author = {Cui, Zichen Jeff and Wang, Yibin and Shafiullah, Nur Muhammad Mahi and Pinto, Lerrel}, journal = {arXiv preprint arXiv:2210.10047}, year = {2022} } @inproceedings{gu2023maniskill2, title={ManiSkill2: A Unified Benchmark for Generalizable Manipulation Skills}, author={Gu, Jiayuan and Xiang, Fanbo and Li, Xuanlin and Ling, Zhan and Liu, Xiqiang and Mu, Tongzhou and Tang, Yihe and Tao, Stone and Wei, Xinyue and Yao, Yunchao and Yuan, Xiaodi and Xie, Pengwei and Huang, Zhiao and Chen, Rui and Su, Hao}, booktitle={International Conference on Learning Representations}, year={2023} } @inproceedings{heo2023furniturebench, title={FurnitureBench: Reproducible Real-World Benchmark for Long-Horizon Complex Manipulation}, author={Minho Heo and Youngwoon Lee and Doohyun Lee and Joseph J. 
Lim}, booktitle={Robotics: Science and Systems}, year={2023} } @inproceedings{mendonca2023structured, title={Structured World Models from Human Videos}, author={Mendonca, Russell and Bahl, Shikhar and Pathak, Deepak}, booktitle={Robotics: Science and Systems (RSS)}, year={2023} } @misc{ucsd_kitchens, author = {Ge Yan and Kris Wu and Xiaolong Wang}, title = {{ucsd kitchens Dataset}}, year = {2023}, month = {August} } @misc{Feng2023Finetuning, title={Finetuning Offline World Models in the Real World}, author={Yunhai Feng and Nicklas Hansen and Ziyan Xiong and Chandramouli Rajagopalan and Xiaolong Wang}, year={2023} } @inproceedings{nasiriany2022sailor, title={Learning and Retrieval from Prior Data for Skill-based Imitation Learning}, author={Soroush Nasiriany and Tian Gao and Ajay Mandlekar and Yuke Zhu}, booktitle={Conference on Robot Learning (CoRL)}, year={2022} } @inproceedings{liu2022robot, title = {Robot Learning on the Job: Human-in-the-Loop Autonomy and Learning During Deployment}, author = {Huihan Liu and Soroush Nasiriany and Lance Zhang and Zhiyao Bao and Yuke Zhu}, booktitle = {Robotics: Science and Systems (RSS)}, year = {2023} } @inproceedings{jang2021bc, title={{BC}-Z: Zero-Shot Task Generalization with Robotic Imitation Learning}, author={Eric Jang and Alex Irpan and Mohi Khansari and Daniel Kappler and Frederik Ebert and Corey Lynch and Sergey Levine and Chelsea Finn}, booktitle={5th Annual Conference on Robot Learning}, year={2021}, url={https://openreview.net/forum?id=8kbp23tSGYv}} @article{salhotra2022dmfd, author={Salhotra, Gautam and Liu, I-Chun Arthur and Dominguez-Kuhne, Marcus and Sukhatme, Gaurav S.}, journal={IEEE Robotics and Automation Letters}, title={Learning Deformable Object Manipulation From Expert Demonstrations}, year={2022}, volume={7}, number={4}, pages={8775-8782}, doi={10.1109/LRA.2022.3187843} } @misc{oh2023pr2utokyodatasets, author={Jihoon Oh and Naoaki Kanazawa and Kento Kawaharazuka}, title={X-Embodiment U-Tokyo PR2 Datasets}, year={2023}, url={https://github.com/ojh6404/rlds_dataset_builder}, } @article{saytap2023, author = {Yujin Tang and Wenhao Yu and Jie Tan and Heiga Zen and Aleksandra Faust and Tatsuya Harada}, title = {SayTap: Language to Quadrupedal Locomotion}, eprint = {arXiv:2306.07580}, url = {https://saytap.github.io}, note = "{https://saytap.github.io}", year = {2023} } @misc{matsushima2023weblab, title={Weblab xArm Dataset}, author={Tatsuya Matsushima and Hiroki Furuta and Yusuke Iwasawa and Yutaka Matsuo}, year={2023}, } @inproceedings{dasari2019robonet, title={RoboNet: Large-Scale Multi-Robot Learning}, author={Sudeep Dasari and Frederik Ebert and Stephen Tian and Suraj Nair and Bernadette Bucher and Karl Schmeckpeper and Siddharth Singh and Sergey Levine and Chelsea Finn}, year={2019}, eprint={1910.11215}, archivePrefix={arXiv}, primaryClass={cs.RO}, booktitle={CoRL 2019: Volume 100 Proceedings of Machine Learning Research} } @InProceedings{Radosavovic2022, title = {Real-World Robot Learning with Masked Visual Pre-training}, author = {Ilija Radosavovic and Tete Xiao and Stephen James and Pieter Abbeel and Jitendra Malik and Trevor Darrell}, booktitle = {CoRL}, year = {2022} } @article{Radosavovic2023, title={Robot
Learning with Sensorimotor Pre-training}, author={Ilija Radosavovic and Baifeng Shi and Letian Fu and Ken Goldberg and Trevor Darrell and Jitendra Malik}, year={2023}, journal={arXiv:2306.10007} } @inproceedings{kimpre, title={Pre-and post-contact policy decomposition for non-prehensile manipulation with zero-shot sim-to-real transfer}, author={Kim, Minchan and Han, Junhyek and Kim, Jaehyung and Kim, Beomjoon}, booktitle={2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, year={2023}, organization={IEEE} } @misc{rana2023dgrasp, author = {Krishan Rana and Ben Burgess-Limerick and Jad Abou-Chakra and Niko S{\"u}nderhauf}, title = {DGrasp: A Large Scale Dataset for Dynamic Grasping of Moving Objects.}, year = {2023}} @inproceedings{gupta2022maskvit, title={MaskViT: Masked Visual Pre-Training for Video Prediction}, author={Agrim Gupta and Stephen Tian and Yunzhi Zhang and Jiajun Wu and Roberto Mart{\'\i}n-Mart{\'\i}n and Li Fei-Fei}, booktitle={International Conference on Learning Representations}, year={2022} } @Article{Osa22, author = {Takayuki Osa}, journal = {The International Journal of Robotics Research}, title = {Motion Planning by Learning the Solution Manifold in Trajectory Optimization}, year = {2022}, number = {3}, pages = {291--311}, volume = {41}, } @inproceedings{padalkar2023guiding, title={Guiding Reinforcement Learning with Shared Control Templates}, author={Padalkar, Abhishek and Quere, Gabriel and Steinmetz, Franz and Raffin, Antonin and Nieuwenhuisen, Matthias and Silv{\'e}rio, Jo{\~a}o and Stulp, Freek}, booktitle={40th IEEE International Conference on Robotics and Automation, ICRA 2023}, year={2023}, organization={IEEE} } @article{padalkar2023guided, title={A guided reinforcement learning approach using shared control templates for learning manipulation skills in the real world}, author={Padalkar, Abhishek and Quere, Gabriel and Raffin, Antonin and Silv{\'e}rio, Jo{\~a}o and Stulp, Freek}, journal={Research square preprint rs-3289569/v1}, year={2023} } @inproceedings{vogel_edan_2020, title = {EDAN - an EMG-Controlled Daily Assistant to Help People with Physical Disabilities}, language = {en}, booktitle = {2020 {IEEE}/{RSJ} {International} {Conference} on {Intelligent} {Robots} and {Systems} ({IROS})}, author = {Vogel, Jörn and Hagengruber, Annette and Iskandar, Maged and Quere, Gabriel and Leipscher, Ulrike and Bustamante, Samuel and Dietrich, Alexander and Hoeppner, Hannes and Leidner, Daniel and Albu-Schäffer, Alin}, year = {2020} } @inproceedings{quere_shared_2020, address = {Paris, France}, title = {Shared {Control} {Templates} for {Assistive} {Robotics}}, language = {en}, booktitle = {2020 {IEEE} {International} {Conference} on {Robotics} and {Automation} ({ICRA})}, author = {Quere, Gabriel and Hagengruber, Annette and Iskandar, Maged and Bustamante, Samuel and Leidner, Daniel and Stulp, Freek and Vogel, Joern}, year = {2020}, pages = {7}, } @inproceedings{zhou2023modularity, title={Modularity through Attention: Efficient Training and Transfer of Language-Conditioned Policies for Robot Manipulation}, author={Zhou, Yifan and Sonawani, Shubham and Phielipp, Mariano and Stepputtis, Simon and Amor, Heni}, booktitle={Conference on Robot Learning}, pages={1684--1695}, year={2023}, organization={PMLR} } @article{zhou2023learning, title={Learning modular language-conditioned robot policies through attention}, author={Zhou, Yifan and Sonawani, Shubham and Phielipp, Mariano and Ben Amor, Heni and Stepputtis, Simon}, journal={Autonomous Robots}, pages={1--21}, year={2023},
publisher={Springer} } @article{shi2023robocook, title={RoboCook: Long-Horizon Elasto-Plastic Object Manipulation with Diverse Tools}, author={Shi, Haochen and Xu, Huazhe and Clarke, Samuel and Li, Yunzhu and Wu, Jiajun}, journal={arXiv preprint arXiv:2306.14447}, year={2023} } @inproceedings{schiavi2023learning, title={Learning agent-aware affordances for closed-loop interaction with articulated objects}, author={Schiavi, Giulio and Wulkop, Paula and Rizzi, Giuseppe and Ott, Lionel and Siegwart, Roland and Chung, Jen Jen}, booktitle={2023 IEEE International Conference on Robotics and Automation (ICRA)}, pages={5916--5922}, year={2023}, organization={IEEE} } @inproceedings{saxena2023multiresolution, title={Multi-Resolution Sensing for Real-Time Control with Vision-Language Models}, author={Saumya Saxena and Mohit Sharma and Oliver Kroemer}, booktitle={7th Annual Conference on Robot Learning}, year={2023}, url={https://openreview.net/forum?id=WuBv9-IGDUA} } @misc{ceola2023lhmanip, author = {Federico Ceola and Krishan Rana and Niko S{\"u}nderhauf}, title = {LHManip: A Dataset for Long Horizon Manipulation Tasks.}, year = {2023}} @article{guist2023robust, title={A Robust Open-source Tendon-driven Robot Arm for Learning Control of Dynamic Motions}, author={Guist, Simon and Schneider, Jan and Ma, Hao and Berenz, Vincent and Martus, Julian and Gr{\"u}ninger, Felix and M{\"u}hlebach, Michael and Fiene, Jonathan and Sch{\"o}lkopf, Bernhard and B{\"u}chler, Dieter}, journal={arXiv preprint arXiv:2307.02654}, year={2023} } @article{wang2023d3field, title={D^3Field: Dynamic 3D Descriptor Fields for Generalizable Robotic Manipulation}, author={Wang, Yixuan and Li, Zhuoran and Zhang, Mingtong and Driggs-Campbell, Katherine and Wu, Jiajun and Fei-Fei, Li and Li, Yunzhu}, journal={arXiv preprint arXiv:}, year={2023}, } @inproceedings{shah2023mutex, title={{MUTEX}: Learning Unified Policies from Multimodal Task Specifications}, author={Rutav Shah and Roberto Mart{\'\i}n-Mart{\'\i}n and Yuke Zhu}, booktitle={7th Annual Conference on Robot Learning}, year={2023}, url={https://openreview.net/forum?id=PwqiqaaEzJ} } @misc{fanuc_manipulation2023, title={Fanuc Manipulation: A Dataset for Learning-based Manipulation with FANUC Mate 200iD Robot}, author={Zhu, Xinghao and Tian, Ran and Xu, Chenfeng and Ding, Mingyu and Zhan, Wei and Tomizuka, Masayoshi}, year={2023}, } @inproceedings{sawhney2021playing, title={Playing with food: Learning food item representations through interactive exploration}, author={Sawhney, Amrita and Lee, Steven and Zhang, Kevin and Veloso, Manuela and Kroemer, Oliver}, booktitle={Experimental Robotics: The 17th International Symposium}, pages={309--322}, year={2021}, organization={Springer} } @inproceedings{chen2023playfusion, title={PlayFusion: Skill Acquisition via Diffusion from Language-Annotated Play}, author={Chen, Lili and Bahl, Shikhar and Pathak, Deepak}, booktitle={CoRL}, year={2023} } @inproceedings{bahl2023affordances, title={Affordances from Human Videos as a Versatile Representation for Robotics}, author={Bahl, Shikhar and Mendonca, Russell and Chen, Lili and Jain, Unnat and Pathak, Deepak}, booktitle={CVPR}, year={2023} } @inproceedings{shah2021rapid, title={{Rapid Exploration for Open-World Navigation with Latent Goal Models}}, author={Dhruv Shah and Benjamin Eysenbach and Nicholas Rhinehart and Sergey
Levine}, booktitle={5th Annual Conference on Robot Learning}, year={2021}, url={https://openreview.net/forum?id=d_SWJhyKfVw} } @inproceedings{kahn2018self, title={Self-supervised deep reinforcement learning with generalized computation graphs for robot navigation}, author={Kahn, Gregory and Villaflor, Adam and Ding, Bosen and Abbeel, Pieter and Levine, Sergey}, booktitle={2018 IEEE international conference on robotics and automation (ICRA)}, pages={5129--5136}, year={2018}, organization={IEEE} } @article{hirose2023sacson, title={SACSoN: Scalable Autonomous Data Collection for Social Navigation}, author={Hirose, Noriaki and Shah, Dhruv and Sridhar, Ajay and Levine, Sergey}, journal={arXiv preprint arXiv:2306.01874}, year={2023} } @article{Zhao2023LearningFB, title={Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware}, author={Tony Zhao and Vikash Kumar and Sergey Levine and Chelsea Finn}, journal={RSS}, year={2023}, volume={abs/2304.13705}, url={https://arxiv.org/abs/2304.13705} } @article{khazatsky2024droid, title = {DROID: A Large-Scale In-The-Wild Robot Manipulation Dataset}, author = {Alexander Khazatsky and Karl Pertsch and Suraj Nair and Ashwin Balakrishna and Sudeep Dasari and Siddharth Karamcheti and Soroush Nasiriany and Mohan Kumar Srirama and Lawrence Yunliang Chen and Kirsty Ellis and Peter David Fagan and Joey Hejna and Masha Itkina and Marion Lepert and Yecheng Jason Ma and Patrick Tree Miller and Jimmy Wu and Suneel Belkhale and Shivin Dass and Huy Ha and Arhan Jain and Abraham Lee and Youngwoon Lee and Marius Memmel and Sungjae Park and Ilija Radosavovic and Kaiyuan Wang and Albert Zhan and Kevin Black and Cheng Chi and Kyle Beltran Hatch and Shan Lin and Jingpei Lu and Jean Mercat and Abdul Rehman and Pannag R Sanketi and Archit Sharma and Cody Simpson and Quan Vuong and Homer Rich Walke and Blake Wulfe and Ted Xiao and Jonathan Heewon Yang and Arefeh Yavary and Tony Z. Zhao and Christopher Agia and Rohan Baijal and Mateo Guaman Castro and Daphne Chen and Qiuyu Chen and Trinity Chung and Jaimyn Drake and Ethan Paul Foster and Jensen Gao and David Antonio Herrera and Minho Heo and Kyle Hsu and Jiaheng Hu and Donovon Jackson and Charlotte Le and Yunshuang Li and Kevin Lin and Roy Lin and Zehan Ma and Abhiram Maddukuri and Suvir Mirchandani and Daniel Morton and Tony Nguyen and Abigail O'Neill and Rosario Scalise and Derick Seale and Victor Son and Stephen Tian and Emi Tran and Andrew E. Wang and Yilin Wu and Annie Xie and Jingyun Yang and Patrick Yin and Yunchu Zhang and Osbert Bastani and Glen Berseth and Jeannette Bohg and Ken Goldberg and Abhinav Gupta and Abhishek Gupta and Dinesh Jayaraman and Joseph J Lim and Jitendra Malik and Roberto Mart{\'\i}n-Mart{\'\i}n and Subramanian Ramamoorthy and Dorsa Sadigh and Shuran Song and Jiajun Wu and Michael C.
Yip and Yuke Zhu and Thomas Kollar and Sergey Levine and Chelsea Finn}, year = {2024}, } @misc{ConqHoseManipData, author={Peter Mitrano and Dmitry Berenson}, title={Conq Hose Manipulation Dataset, v1.15.0}, year={2024}, howpublished={https://sites.google.com/view/conq-hose-manipulation-dataset} } @misc{shafiullah2023dobbe, title={On Bringing Robots Home}, author={Nur Muhammad Mahi Shafiullah and Anant Rai and Haritheja Etukuru and Yiqian Liu and Ishan Misra and Soumith Chintala and Lerrel Pinto}, year={2023}, eprint={2311.16098}, archivePrefix={arXiv}, primaryClass={cs.RO} } @article{luo2024fmb, title={FMB: a Functional Manipulation Benchmark for Generalizable Robotic Learning}, author={Luo, Jianlan and Xu, Charles and Liu, Fangchen and Tan, Liam and Lin, Zipeng and Wu, Jeffrey and Abbeel, Pieter and Levine, Sergey}, journal={arXiv preprint arXiv:2401.08553}, year={2024} } @article{wang2023mimicplay, title={Mimicplay: Long-horizon imitation learning by watching human play}, author={Wang, Chen and Fan, Linxi and Sun, Jiankai and Zhang, Ruohan and Fei-Fei, Li and Xu, Danfei and Zhu, Yuke and Anandkumar, Anima}, journal={arXiv preprint arXiv:2302.12422}, year={2023} } @inproceedings{fu2024mobile, author = {Fu, Zipeng and Zhao, Tony Z. and Finn, Chelsea}, title = {Mobile ALOHA: Learning Bimanual Mobile Manipulation with Low-Cost Whole-Body Teleoperation}, booktitle = {arXiv}, year = {2024}, } @misc{bharadhwaj2023roboagent, title={RoboAgent: Generalization and Efficiency in Robot Manipulation via Semantic Augmentations and Action Chunking}, author={Homanga Bharadhwaj and Jay Vakil and Mohit Sharma and Abhinav Gupta and Shubham Tulsiani and Vikash Kumar}, year={2023}, eprint={2309.01918}, archivePrefix={arXiv}, primaryClass={cs.RO} } @article{wu2023tidybot, title = {TidyBot: Personalized Robot Assistance with Large Language Models}, author = {Wu, Jimmy and Antonova, Rika and Kan, Adam and Lepert, Marion and Zeng, Andy and Song, Shuran and Bohg, Jeannette and Rusinkiewicz, Szymon and Funkhouser, Thomas}, journal = {Autonomous Robots}, year = {2023} } @inproceedings{jiang2023vima, title = {VIMA: General Robot Manipulation with Multimodal Prompts}, author = {Yunfan Jiang and Agrim Gupta and Zichen Zhang and Guanzhi Wang and Yongqiang Dou and Yanjun Chen and Li Fei-Fei and Anima Anandkumar and Yuke Zhu and Linxi Fan}, booktitle = {Fortieth International Conference on Machine Learning}, year = {2023} } @article{spoc2023, author = {Kiana Ehsani and Tanmay Gupta and Rose Hendrix and Jordi Salvador and Luca Weihs and Kuo-Hao Zeng and Kunal Pratap Singh and Yejin Kim and Winson Han and Alvaro Herrasti and Ranjay Krishna and Dustin Schwenk and Eli VanderBilt and Aniruddha Kembhavi}, title = {Imitating Shortest Paths in Simulation Enables Effective Navigation and Manipulation in the Real World}, journal = {arXiv}, year = {2023}, eprint = {2312.02976}, } @inproceedings{thomas2023plex, title={PLEX: Making the Most of the Available Data for Robotic Manipulation Pretraining}, author={Garrett Thomas and Ching-An Cheng and Ricky Loynd and Felipe Vieira Frujeri and Vibhav Vineet and Mihai Jalobeanu and Andrey Kolobov}, booktitle={CoRL}, year={2023} }
13 | Cite cmd (copy into LaTeX file): | \citep{brohan2022rt, kalashnikov2018qt, walke2023bridgedata, rosete2022tacorl, mees23hulc2, dass2023jacoplay, luo2023multistage, mandlekar2019scaling, pari2021surprising, zhu2022viola, BerkeleyUR5Website, zhou2023train, lynch2023interactive, chi2023diffusionpolicy, lee2019icra, haldar2023watch, belkhale2023hydra, zhu2022bottom, cui2022play, gu2023maniskill2, heo2023furniturebench, mendonca2023structured, ucsd_kitchens, Feng2023Finetuning, nasiriany2022sailor, liu2022robot, jang2021bc, salhotra2022dmfd, oh2023pr2utokyodatasets, saytap2023, matsushima2023weblab, dasari2019robonet, Radosavovic2022, Radosavovic2023, kimpre, rana2023dgrasp, gupta2022maskvit, Osa22, padalkar2023guiding, padalkar2023guided, vogel_edan_2020, quere_shared_2020, zhou2023modularity, zhou2023learning, shi2023robocook, schiavi2023learning, saxena2023multiresolution, ceola2023lhmanip, guist2023robust, wang2023d3field, shah2023mutex, fanuc_manipulation2023, sawhney2021playing, chen2023playfusion, bahl2023affordances, shah2021rapid, kahn2018self, hirose2023sacson, Zhao2023LearningFB, khazatsky2024droid, ConqHoseManipData, shafiullah2023dobbe, luo2024fmb, wang2023mimicplay, fu2024mobile, bharadhwaj2023roboagent, wu2023tidybot, jiang2023vima, spoc2023, thomas2023plex}
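To use the command, paste the citation list into a .bib file and the \citep call into your text. A minimal sketch, assuming natbib; the file name oxe_datasets.bib is a placeholder:

```latex
% Minimal sketch (natbib assumed; oxe_datasets.bib is a placeholder name
% for a file containing the citation list above).
\documentclass{article}
\usepackage{natbib}
\begin{document}
We train on a mixture of robot datasets
\citep{brohan2022rt, walke2023bridgedata, dass2023jacoplay}.
\bibliographystyle{plainnat}
\bibliography{oxe_datasets}
\end{document}
```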
14 | ||||||||||||||||||||||||||||||||
15 | Dataset | Robot | # Episodes | File Size (GB) | Robot Morphology | Gripper | Action Space | # RGB Cams | # Depth Cams | # Wrist Cams | Language Annotations | Data Collection Method | Has Suboptimal Data? | Has Camera Calibration? | Has Proprioception? | Scene Type | Control Frequency (Hz) | Registered Dataset Name | Citation | LaTeX Reference | Description
16 | RT-1 Robot Action | Google Robot | 73,499 | 111.06 | Mobile Manipulator | Default | EEF Position | 1 | 1 | 0 | Templated | Human VR | No | No | Yes | Table Top, Kitchen (also toy kitchen) | 3 | fractal20220817_data | @article{brohan2022rt, title={Rt-1: Robotics transformer for real-world control at scale}, author={Brohan, Anthony and Brown, Noah and Carbajal, Justice and Chebotar, Yevgen and Dabis, Joseph and Finn, Chelsea and Gopalakrishnan, Keerthana and Hausman, Karol and Herzog, Alex and Hsu, Jasmine and others}, journal={arXiv preprint arXiv:2212.06817}, year={2022} } | brohan2022rt | The robot picks, places, and moves 17 objects from the Google micro-kitchens.
17 | QT-Opt | Kuka iiwa | 580,392 | 778.02 | Single Arm | Default | EEF Position | 1 | 0 | 0 | None | Expert Policy | No | No | Yes | Table Top | 10 | kuka | @article{kalashnikov2018qt, title={Qt-opt: Scalable deep reinforcement learning for vision-based robotic manipulation}, author={Kalashnikov, Dmitry and Irpan, Alex and Pastor, Peter and Ibarz, Julian and Herzog, Alexander and Jang, Eric and Quillen, Deirdre and Holly, Ethan and Kalakrishnan, Mrinal and Vanhoucke, Vincent and others}, journal={arXiv preprint arXiv:1806.10293}, year={2018} } | kalashnikov2018qt | Kuka robot picking objects in a bin. | |||||||||||
18 | Berkeley Bridge | WidowX | 25,460 | 387.49 | Single Arm | Default | EEF Position | 4 | 1 | 1 | Natural | Human VR | Yes | No | Yes | Table Top, Kitchen (also toy kitchen), Other Household environments | 5 | bridge | @inproceedings{walke2023bridgedata, title={BridgeData V2: A Dataset for Robot Learning at Scale}, author={Walke, Homer and Black, Kevin and Lee, Abraham and Kim, Moo Jin and Du, Max and Zheng, Chongyi and Zhao, Tony and Hansen-Estruch, Philippe and Vuong, Quan and He, Andre and Myers, Vivek and Fang, Kuan and Finn, Chelsea and Levine, Sergey}, booktitle={Conference on Robot Learning (CoRL)}, year={2023} } | walke2023bridgedata | The robot interacts with household environments including kitchens, sinks, and tabletops. Skills include object rearrangement, sweeping, stacking, folding, and opening/closing doors and drawers. | |||||||||||
19 | Freiburg Franka Play | Franka | 3,242 | 47.77 | Single Arm | Custom 3D printed | EEF Position | 2 | 2 | 2 | Templated | Human VR | No | Yes | Yes | Table Top | 15 | taco_play | @inproceedings{rosete2022tacorl, author = {Erick Rosete-Beas and Oier Mees and Gabriel Kalweit and Joschka Boedecker and Wolfram Burgard}, title = {Latent Plans for Task Agnostic Offline Reinforcement Learning}, journal = {Proceedings of the 6th Conference on Robot Learning (CoRL)}, year = {2022} } @inproceedings{mees23hulc2, title={Grounding Language with Visual Affordances over Unstructured Data}, author={Oier Mees and Jessica Borja-Diaz and Wolfram Burgard}, booktitle = {Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)}, year={2023}, address = {London, UK} } | rosete2022tacorl, mees23hulc2 | The robot interacts with toy blocks: it picks and places them, stacks and unstacks them, opens drawers and sliding doors, and turns on LED lights by pushing buttons.
20 | USC Jaco Play | Jaco 2 | 976 | 9.24 | Single Arm | Default | EEF Position | 2 | 0 | 1 | Templated | Human VR | No | No | Yes | Table Top, Kitchen (also toy kitchen) | 10 | jaco_play | @software{dass2023jacoplay, author = {Dass, Shivin and Yapeter, Jullian and Zhang, Jesse and Zhang, Jiahui and Pertsch, Karl and Nikolaidis, Stefanos and Lim, Joseph J.}, title = {CLVR Jaco Play Dataset}, url = {https://github.com/clvrai/clvr_jaco_play_dataset}, version = {1.0.0}, year = {2023} } | dass2023jacoplay | The robot performs pick-and-place tasks in a tabletop toy kitchen environment. Example tasks include "Pick up the orange fruit." and "Put the black bowl in the sink."
21 | Berkeley Cable Routing | Franka | 1,482 | 4.67 | Single Arm | Default | EEF velocity | 3 | 0 | 2 | None | Human VR | No | No | Yes | Table Top | 10 | berkeley_cable_routing | @article{luo2023multistage, author = {Jianlan Luo and Charles Xu and Xinyang Geng and Gilbert Feng and Kuan Fang and Liam Tan and Stefan Schaal and Sergey Levine}, title = {Multi-Stage Cable Routing through Hierarchical Imitation Learning}, journal = {arXiv pre-print}, year = {2023}, url = {https://arxiv.org/abs/2307.08927}, } | luo2023multistage | The robot routes cable through a number of tight-fitting clips mounted on the table. | |||||||||||
22 | Roboturk | Sawyer | 2,144 | 45.39 | Single Arm | Default | EEF Position | 2 | 1 | 0 | Templated | Human VR | No | No | Yes | Table Top | 10 | roboturk | @inproceedings{mandlekar2019scaling, title={Scaling robot supervision to hundreds of hours with roboturk: Robotic manipulation dataset through human reasoning and dexterity}, author={Mandlekar, Ajay and Booher, Jonathan and Spero, Max and Tung, Albert and Gupta, Anchit and Zhu, Yuke and Garg, Animesh and Savarese, Silvio and Fei-Fei, Li}, booktitle={2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, pages={1048--1055}, year={2019}, organization={IEEE} } | mandlekar2019scaling | The Sawyer robot flattens laundry, builds towers from bowls, and searches for objects.
23 | NYU VINN | Hello Stretch | 435 | 7.12 | Mobile Manipulator | Default | EEF Position | 1 | 0 | 1 | None | Human Kinesthetic | Yes | Yes | No | Kitchen (also toy kitchen), Other Household environments | 3 | nyu_door_opening_surprising_effectiveness | @misc{pari2021surprising, title={The Surprising Effectiveness of Representation Learning for Visual Imitation}, author={Jyothish Pari and Nur Muhammad Shafiullah and Sridhar Pandian Arunachalam and Lerrel Pinto}, year={2021}, eprint={2112.01511}, archivePrefix={arXiv}, primaryClass={cs.RO} } | pari2021surprising | The robot opens cabinet doors for a variety of cabinets. | |||||||||||
24 | Austin VIOLA | Franka | 135 | 10.4 | Single Arm | Default | EEF Position | 2 | 0 | 1 | Templated | Human Spacemouse | No | No | Yes | Table Top | 20 | viola | @article{zhu2022viola, title={VIOLA: Imitation Learning for Vision-Based Manipulation with Object Proposal Priors}, author={Zhu, Yifeng and Joshi, Abhishek and Stone, Peter and Zhu, Yuke}, journal={6th Annual Conference on Robot Learning (CoRL)}, year={2022} } | zhu2022viola | The robot performs various household-like tasks, such as setting up the table, or making coffee using a coffee machine. | |||||||||||
25 | Berkeley Autolab UR5 | UR5 | 896 | 76.39 | Single Arm | Robotiq 2F-85 | EEF Position | 2 | 1 | 1 | Templated | Human Spacemouse | No | No | Yes | Table Top | 5 | berkeley_autolab_ur5 | @misc{BerkeleyUR5Website, title = {Berkeley {UR5} Demonstration Dataset}, author = {Lawrence Yunliang Chen and Simeon Adebola and Ken Goldberg}, howpublished = {https://sites.google.com/view/berkeley-ur5/home}, } | BerkeleyUR5Website | The data consists of 4 robot manipulation tasks: simple pick-and-place of a stuffed animal between containers, sweeping a cloth, stacking cups, and a more difficult pick-and-place of a bottle that requires precise grasping and 6-DOF rotation.
26 | TOTO Benchmark | Franka | 901 | 127.66 | Single Arm | | Joint position | 1 | 0 | 0 | None | Collected in 3 ways: human VR teleoperation, trained state-based BC policies, and trajectory replay with noise | Yes | No | Yes | Table Top | 30 | toto | @inproceedings{zhou2023train, author={Zhou, Gaoyue and Dean, Victoria and Srirama, Mohan Kumar and Rajeswaran, Aravind and Pari, Jyothish and Hatch, Kyle and Jain, Aryan and Yu, Tianhe and Abbeel, Pieter and Pinto, Lerrel and Finn, Chelsea and Gupta, Abhinav}, booktitle={2023 IEEE International Conference on Robotics and Automation (ICRA)}, title={Train Offline, Test Online: A Real Robot Learning Benchmark}, year={2023}, } | zhou2023train | The TOTO Benchmark Dataset contains trajectories of two tasks: scooping and pouring. For scooping, the objective is to scoop material from a bowl into the spoon. For pouring, the goal is to pour some material into a target cup on the table.
27 | Language Table | xArm | 442,226 | 399.22 | Single Arm | Stick for pushing | EEF Position | 1 | 0 | 0 | Natural | Human VR | No | No | Yes | Table Top | 10 | language_table | @article{lynch2023interactive, title={Interactive language: Talking to robots in real time}, author={Lynch, Corey and Wahid, Ayzaan and Tompson, Jonathan and Ding, Tianli and Betker, James and Baruch, Robert and Armstrong, Travis and Florence, Pete}, journal={IEEE Robotics and Automation Letters}, year={2023}, publisher={IEEE} } | lynch2023interactive | The robot pushes blocks of different geometric shapes on a table top.
28 | Columbia PushT Dataset | UR5 | 122 | 2.8 | Single Arm | 3D printed stick | EEF Position | 5 | 0 | 1 | None | Human VR | No | No | Yes | Table Top | 10 | columbia_cairlab_pusht_real | @inproceedings{chi2023diffusionpolicy, title={Diffusion Policy: Visuomotor Policy Learning via Action Diffusion}, author={Chi, Cheng and Feng, Siyuan and Du, Yilun and Xu, Zhenjia and Cousineau, Eric and Burchfiel, Benjamin and Song, Shuran}, booktitle={Proceedings of Robotics: Science and Systems (RSS)}, year={2023} } | chi2023diffusionpolicy | The robot pushes a T-shaped block into a fixed goal pose, and then moves to a fixed exit zone.
29 | Stanford Kuka Multimodal | Kuka iiwa | 3,000 | 31.98 | Single Arm | | EEF Position | 1 | 0 | 0 | None | Expert Policy | Yes | No | Yes | Table Top | 20 | stanford_kuka_multimodal_dataset_converted_externally_to_rlds | @inproceedings{lee2019icra, title={Making sense of vision and touch: Self-supervised learning of multimodal representations for contact-rich tasks}, author={Lee, Michelle A and Zhu, Yuke and Srinivasan, Krishnan and Shah, Parth and Savarese, Silvio and Fei-Fei, Li and Garg, Animesh and Bohg, Jeannette}, booktitle={2019 IEEE International Conference on Robotics and Automation (ICRA)}, year={2019}, url={https://arxiv.org/abs/1810.10191} } | lee2019icra | The robot learns to insert differently-shaped pegs into differently-shaped holes with low tolerances (~2mm).
30 | NYU ROT | xArm | 14 | 0.01 | Single Arm | Default | EEF Position | 1 | 0 | 0 | Templated | Human Joystick | No | No | Yes | Table Top | 3 | nyu_rot_dataset_converted_externally_to_rlds | @inproceedings{haldar2023watch, title={Watch and match: Supercharging imitation with regularized optimal transport}, author={Haldar, Siddhant and Mathur, Vaibhav and Yarats, Denis and Pinto, Lerrel}, booktitle={Conference on Robot Learning}, pages={32--43}, year={2023}, organization={PMLR} } | haldar2023watch | The robot arm performs diverse manipulation tasks on a tabletop, such as box opening, cup stacking, and pouring, among others.
31 | Stanford HYDRA | Franka | 550 | 72.48 | Single Arm | Default | EEF Position | 2 | 0 | 1 | Templated | Human VR | No | No | Yes | Table Top, Kitchen (also toy kitchen) | 10 | stanford_hydra_dataset_converted_externally_to_rlds | @article{belkhale2023hydra, title={HYDRA: Hybrid Robot Actions for Imitation Learning}, author={Belkhale, Suneel and Cui, Yuchen and Sadigh, Dorsa}, journal={arxiv}, year={2023} } | belkhale2023hydra | The robot performs the following tasks in the corresponding environments: making a cup of coffee with the Keurig machine; making toast using the oven; sorting dishes onto the dish rack.
32 | Austin BUDS | Franka | 50 | 1.49 | Single Arm | Default | EEF Position | 2 | 0 | 1 | None | Human Spacemouse | No | No | Yes | Table Top | 20 | austin_buds_dataset_converted_externally_to_rlds | @article{zhu2022bottom, title={Bottom-Up Skill Discovery From Unsegmented Demonstrations for Long-Horizon Robot Manipulation}, author={Zhu, Yifeng and Stone, Peter and Zhu, Yuke}, journal={IEEE Robotics and Automation Letters}, volume={7}, number={2}, pages={4126--4133}, year={2022}, publisher={IEEE} } | zhu2022bottom | The robot solves a long-horizon kitchen task by picking up a pot, placing it on a plate, and pushing them together using a picked-up tool.
33 | NYU Franka Play | Franka | 456 | 5.18 | Single Arm | Default | EEF velocity | 2 | 2 | 0 | None | Human VR | No | No | Yes | Kitchen (also toy kitchen) | 3 | nyu_franka_play_dataset_converted_externally_to_rlds | @article{cui2022play, title = {From Play to Policy: Conditional Behavior Generation from Uncurated Robot Data}, author = {Cui, Zichen Jeff and Wang, Yibin and Shafiullah, Nur Muhammad Mahi and Pinto, Lerrel}, journal = {arXiv preprint arXiv:2210.10047}, year = {2022} } | cui2022play | The robot interacts with a toy kitchen doing arbitrary tasks. It opens/closes the microwave door, opens/closes the oven door, turns the stove knobs, and moves the pot between the stove and the sink. | |||||||||||
34 | Maniskill | Franka | 30,000 | 151.05 | Single Arm | Default | EEF Position | 2 | 2 | 2 | Templated | Scripted | Yes | Yes | Yes | Table Top | 20 | maniskill_dataset_converted_externally_to_rlds | @inproceedings{gu2023maniskill2, title={ManiSkill2: A Unified Benchmark for Generalizable Manipulation Skills}, author={Gu, Jiayuan and Xiang, Fanbo and Li, Xuanlin and Ling, Zhan and Liu, Xiqiang and Mu, Tongzhou and Tang, Yihe and Tao, Stone and Wei, Xinyue and Yao, Yunchao and Yuan, Xiaodi and Xie, Pengwei and Huang, Zhiao and Chen, Rui and Su, Hao}, booktitle={International Conference on Learning Representations}, year={2023} } | gu2023maniskill2 | The robot interacts with different objects placed on the plane (ground). The tasks include picking up an isolated object or an object from clutter and moving it to a goal position, stacking a red cube onto a green cube, inserting a peg into a box, assembling kits, plugging a charger into an outlet on the wall, and turning on a faucet.
35 | Furniture Bench | Franka | 5,100 | 110 | Single Arm | Default | EEF velocity | 2 | 0 | 1 | Templated | Human VR | No | No | Yes | Table Top | 10 | furniture_bench_dataset_converted_externally_to_rlds | @inproceedings{heo2023furniturebench, title={FurnitureBench: Reproducible Real-World Benchmark for Long-Horizon Complex Manipulation}, author={Minho Heo and Youngwoon Lee and Doohyun Lee and Joseph J. Lim}, booktitle={Robotics: Science and Systems}, year={2023} } | heo2023furniturebench | The robot assembles one of 9 3D-printed furniture models on the table, which requires grasping, inserting, and screwing.
36 | CMU Franka Exploration | Franka | 200 | 0.59 | Single Arm | Default | EEF Position | 1 | 0 | 0 | Templated | Expert Policy | Yes | No | No | Kitchen (also toy kitchen) | 10 | cmu_franka_exploration_dataset_converted_externally_to_rlds | @inproceedings{mendonca2023structured, title={Structured World Models from Human Videos}, author={Mendonca, Russell and Bahl, Shikhar and Pathak, Deepak}, booktitle={Robotics: Science and Systems (RSS)}, year={2023} } | mendonca2023structured | A Franka robot explores a kitchen environment, lifting a knife and a vegetable and opening a cabinet.
37 | UCSD Kitchen | xArm | 150 | 1.33 | Single Arm | Default | EEF Position | 1 | 0 | 0 | Natural | Human VR | No | No | Yes | Kitchen (also toy kitchen) | 2 | ucsd_kitchen_dataset_converted_externally_to_rlds | @misc{ucsd_kitchens, author = {Ge Yan and Kris Wu and Xiaolong Wang}, title = {{ucsd kitchens Dataset}}, year = {2023}, month = {August} } | ucsd_kitchens | The dataset offers a comprehensive set of real-world robotic interactions, involving natural language instructions and complex manipulations with kitchen objects.
38 | UCSD Pick Place | xArm | 1,355 | 3.53 | Single Arm | Default | EEF velocity | 1 | 0 | 0 | Templated | Expert Policy | Yes | No | Yes | Table Top, Kitchen (also toy kitchen) | 3 | ucsd_pick_and_place_dataset_converted_externally_to_rlds | @misc{Feng2023Finetuning, title={Finetuning Offline World Models in the Real World}, author={Yunhai Feng and Nicklas Hansen and Ziyan Xiong and Chandramouli Rajagopalan and Xiaolong Wang}, year={2023} } | Feng2023Finetuning | The robot performs pick-and-place tasks in table top and kitchen scenes. The dataset contains a variety of visual variations.
39 | Austin Sailor | Franka | 250 | 18.85 | Single Arm | Default | EEF velocity | 2 | 0 | 1 | None | Human Spacemouse | No | No | Yes | Table Top, Kitchen (also toy kitchen) | 20 | austin_sailor_dataset_converted_externally_to_rlds | @inproceedings{nasiriany2022sailor, title={Learning and Retrieval from Prior Data for Skill-based Imitation Learning}, author={Soroush Nasiriany and Tian Gao and Ajay Mandlekar and Yuke Zhu}, booktitle={Conference on Robot Learning (CoRL)}, year={2022} } | nasiriany2022sailor | The robot interacts with diverse objects in a toy kitchen. It picks and places food items, a pan, and a pot.
40 | Austin Sirius | Franka | 600 | 6.55 | Single Arm | Default | EEF velocity | 2 | 0 | 1 | None | Human Spacemouse | No | No | Yes | Table Top | 20 | austin_sirius_dataset_converted_externally_to_rlds | @inproceedings{liu2022robot, title = {Robot Learning on the Job: Human-in-the-Loop Autonomy and Learning During Deployment}, author = {Huihan Liu and Soroush Nasiriany and Lance Zhang and Zhiyao Bao and Yuke Zhu}, booktitle = {Robotics: Science and Systems (RSS)}, year = {2023} } | liu2022robot | The dataset comprises two tasks, kcup and gear. The kcup task requires opening the kcup holder, inserting the kcup into the holder, and closing the holder. The gear task requires inserting the blue gear onto the right peg, followed by inserting the smaller red gear. | |||||||||||
41 | BC-Z | Google Robot | 39,350 | 80.54 | Mobile Manipulator | Default | EEF Position | 1 | 0 | 0 | Templated | Human VR | Yes | No | Yes | Table Top | 10 | bc_z | @inproceedings{jang2021bc, title={{BC}-Z: Zero-Shot Task Generalization with Robotic Imitation Learning}, author={Eric Jang and Alex Irpan and Mohi Khansari and Daniel Kappler and Frederik Ebert and Corey Lynch and Sergey Levine and Chelsea Finn}, booktitle={5th Annual Conference on Robot Learning}, year={2021}, url={https://openreview.net/forum?id=8kbp23tSGYv}} | jang2021bc | The robot attempts picking, wiping, and placing tasks on a diverse set of objects on a tabletop, along with a few challenging tasks like stacking cups on top of each other. | |||||||||||
42 | USC Cloth Sim | Franka | 1,000 | 0.25 | Single Arm | N/A: Robot agnostic sim data | EEF Position | 1 | 0 | 0 | None | Scripted | No | No | No | Table Top, Kitchen (also toy kitchen) | 10 | usc_cloth_sim_converted_externally_to_rlds | @article{salhotra2022dmfd, author={Salhotra, Gautam and Liu, I-Chun Arthur and Dominguez-Kuhne, Marcus and Sukhatme, Gaurav S.}, journal={IEEE Robotics and Automation Letters}, title={Learning Deformable Object Manipulation From Expert Demonstrations}, year={2022}, volume={7}, number={4}, pages={8775-8782}, doi={10.1109/LRA.2022.3187843} } | salhotra2022dmfd | The robot manipulates a deformable object (cloth on a tabletop) along a diagonal. | |||||||||||
43 | Tokyo PR2 Fridge Opening | PR2 | 64 | 0.35 | Single Arm | Default | EEF Position | 1 | 0 | 0 | None | Human VR | Yes | Yes | Yes | Kitchen (also toy kitchen) | 10 | utokyo_pr2_opening_fridge_converted_externally_to_rlds | @misc{oh2023pr2utokyodatasets, author={Jihoon Oh and Naoaki Kanazawa and Kento Kawaharazuka}, title={X-Embodiment U-Tokyo PR2 Datasets}, year={2023}, url={https://github.com/ojh6404/rlds_dataset_builder}, } | oh2023pr2utokyodatasets | The PR2 robot opens a fridge.
44 | Tokyo PR2 Tabletop Manipulation | PR2 | 192 | 0.81 | Single Arm | Default | EEF Position | 1 | 0 | 0 | None | Human VR | Yes | Yes | Yes | Table Top | 10 | utokyo_pr2_tabletop_manipulation_converted_externally_to_rlds | @misc{oh2023pr2utokyodatasets, author={Jihoon Oh and Naoaki Kanazawa and Kento Kawaharazuka}, title={X-Embodiment U-Tokyo PR2 Datasets}, year={2023}, url={https://github.com/ojh6404/rlds_dataset_builder}, } | oh2023pr2utokyodatasets | The PR2 robot manipulates tabletop objects: it picks and places bread and grapes and folds cloth.
45 | Saytap | Unitree A1 | 20 | 0.05 | Quadrupedal Robot | N/A | Joint position | 0 | 0 | 0 | Natural | Expert Policy | No | No | Yes | Indoor, on a flat floor | 50 | utokyo_saytap_converted_externally_to_rlds | @article{saytap2023, author = {Yujin Tang and Wenhao Yu and Jie Tan and Heiga Zen and Aleksandra Faust and Tatsuya Harada}, title = {SayTap: Language to Quadrupedal Locomotion}, eprint = {arXiv:2306.07580}, url = {https://saytap.github.io}, note = "{https://saytap.github.io}", year = {2023} } | saytap2023 | A Unitree Go1 robot follows human commands in natural language (e.g., "trot forward slowly").
46 | UTokyo xArm PickPlace | xArm | 95 | 1.29 | Single Arm | Default | EEF Position | 4 | 0 | 1 | None | Human Puppeteering | No | No | Yes | Table Top | 10 | utokyo_xarm_pick_and_place_converted_externally_to_rlds | @misc{matsushima2023weblab, title={Weblab xArm Dataset}, author={Tatsuya Matsushima and Hiroki Furuta and Yusuke Iwasawa and Yutaka Matsuo}, year={2023}, } | matsushima2023weblab | The robot picks up a white plate, and then places it on the red plate. | |||||||||||
47 | UTokyo xArm Bimanual | xArm Bimanual | 70 | 0.14 | Bi-Manual | Default | EEF Position | 1 | 0 | 0 | None | Human Puppeteering | No | No | Yes | Table Top | 10 | utokyo_xarm_bimanual_converted_externally_to_rlds | @misc{matsushima2023weblab, title={Weblab xArm Dataset}, author={Tatsuya Matsushima and Hiroki Furuta and Yusuke Iwasawa and Yutaka Matsuo}, year={2023}, } | matsushima2023weblab | The robots reach for a towel on the table. They also unfold a wrinkled towel.
48 | Robonet | Multi-Robot | 82,432 | 799.91 | Single Arm | Default grippers + Weiss gripper (for Sawyer) | EEF velocity | 3 | 0 | 0 | None | Scripted | Yes | No | Yes | Table Top | 1 | robo_net | @inproceedings{dasari2019robonet, title={RoboNet: Large-Scale Multi-Robot Learning}, author={Sudeep Dasari and Frederik Ebert and Stephen Tian and Suraj Nair and Bernadette Bucher and Karl Schmeckpeper and Siddharth Singh and Sergey Levine and Chelsea Finn}, year={2019}, eprint={1910.11215}, archivePrefix={arXiv}, primaryClass={cs.RO}, booktitle={CoRL 2019: Volume 100 Proceedings of Machine Learning Research} } | dasari2019robonet | The robot interacts with objects in a bin placed in front of it.
49 | Berkeley MVP Data | xArm | 480 | 12.34 | Single Arm | Default | Joint position | 1 | 0 | 1 | Templated | Human VR | No | No | Yes | Table Top, Kitchen (also toy kitchen) | 5 | berkeley_mvp_converted_externally_to_rlds | @InProceedings{Radosavovic2022, title = {Real-World Robot Learning with Masked Visual Pre-training}, author = {Ilija Radosavovic and Tete Xiao and Stephen James and Pieter Abbeel and Jitendra Malik and Trevor Darrell}, booktitle = {CoRL}, year = {2022} } | Radosavovic2022 | Basic motor control tasks (reach, push, pick) on table top and toy environments (toy kitchen, toy fridge). | |||||||||||
50 | Berkeley RPT Data | Franka | 960 | 40.64 | Single Arm | Default | Joint position | 3 | 0 | 1 | Templated | Scripted | No | No | Yes | Table Top | 30 | berkeley_rpt_converted_externally_to_rlds | @article{Radosavovic2023, title={Robot Learning with Sensorimotor Pre-training}, author={Ilija Radosavovic and Baifeng Shi and Letian Fu and Ken Goldberg and Trevor Darrell and Jitendra Malik}, year={2023}, journal={arXiv:2306.10007} } | Radosavovic2023 | Picking, stacking, destacking, and bin picking with variations in objects. | |||||||||||
51 | KAIST Nonprehensile Objects | Franka | 201 | 11.71 | Single Arm | Default | EEF Position | 1 | 0 | 0 | Natural | Expert Policy | Yes | No | Yes | Table Top | 10 | kaist_nonprehensile_converted_externally_to_rlds | @inproceedings{kimpre, title={Pre-and post-contact policy decomposition for non-prehensile manipulation with zero-shot sim-to-real transfer}, author={Kim, Minchan and Han, Junhyek and Kim, Jaehyung and Kim, Beomjoon}, booktitle={2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, year={2023}, organization={IEEE} } | kimpre | The robot performs various non-prehensile manipulation tasks in a tabletop environment. It translates and reorients diverse real-world and 3D-printed objects to a target 6-DOF pose.
52 | QUT Dynamic Grasping | Franka | 812 | | Single Arm | Default | EEF Position | 3 | 0 | 2 | None | Scripted | No | No | Yes | Table Top | 30 | | @misc{rana2023dgrasp, author = {Krishan Rana and Ben Burgess-Limerick and Jad Abou-Chakra and Niko S{\"u}nderhauf}, title = {DGrasp: A Large Scale Dataset for Dynamic Grasping of Moving Objects.}, year = {2023}} | rana2023dgrasp | The robot grasps an object that moves around continuously and randomly along the XY plane.
53 | Stanford MaskVIT Data | Sawyer | 9,200 | 76.17 | Single Arm | 3D-printed gripper from https://github.com/robertocalandra/sawyer-printable | EEF Position | 1 | 0 | 0 | None | Scripted | Yes | No | Yes | Table Top | N/A, actions are run until the robot comes to rest near the target position (quasistatic assumption) | stanford_mask_vit_converted_externally_to_rlds | @inproceedings{gupta2022maskvit, title={MaskViT: Masked Visual Pre-Training for Video Prediction}, author={Agrim Gupta and Stephen Tian and Yunzhi Zhang and Jiajun Wu and Roberto Mart{\'\i}n-Mart{\'\i}n and Li Fei-Fei}, booktitle={International Conference on Learning Representations}, year={2022} } | gupta2022maskvit | The robot randomly pushes and picks objects in a bin, which include stuffed toys, plastic cups, and other toys, and are periodically shuffled.
54 | LSMO Dataset | Cobotta | 50 | 0.33 | Single Arm | Default | EEF velocity | 1 | 0 | 0 | Natural | Expert Policy | No | No | Yes | Table Top | 10 | tokyo_u_lsmo_converted_externally_to_rlds | @Article{Osa22, author = {Takayuki Osa}, journal = {The International Journal of Robotics Research}, title = {Motion Planning by Learning the Solution Manifold in Trajectory Optimization}, year = {2022}, number = {3}, pages = {291--311}, volume = {41}, } | Osa22 | The robot avoids an obstacle on the table and reaches the target object.
55 | DLR Sara Pour Dataset | DLR SARA | 100 | 2.92 | Single Arm | Robotiq 2F-85 | EEF velocity | 1 | 0 | 0 | None | Expert Policy | Yes | No | Yes | Table Top, Household objects | 10 | dlr_sara_pour_converted_externally_to_rlds | @inproceedings{padalkar2023guiding, title={Guiding Reinforcement Learning with Shared Control Templates}, author={Padalkar, Abhishek and Quere, Gabriel and Steinmetz, Franz and Raffin, Antonin and Nieuwenhuisen, Matthias and Silv{\'e}rio, Jo{\~a}o and Stulp, Freek}, booktitle={40th IEEE International Conference on Robotics and Automation, ICRA 2023}, year={2023}, organization={IEEE} } | padalkar2023guiding | The robot learns to pour ping-pong balls from a cup held in the end-effector into the cup placed on the table. | |||||||||||
56 | DLR Sara Grid Clamp Dataset | DLR SARA | 100 | 1.65 | Single Arm | Robotiq 2F-140 | EEF velocity | 1 | 0 | 0 | None | Expert Policy | Yes | No | Yes | Table Top, Workshop environment | 10 | dlr_sara_grid_clamp_converted_externally_to_rlds | @article{padalkar2023guided, title={A guided reinforcement learning approach using shared control templates for learning manipulation skills in the real world}, author={Padalkar, Abhishek and Quere, Gabriel and Raffin, Antonin and Silv{\'e}rio, Jo{\~a}o and Stulp, Freek}, journal={Research square preprint rs-3289569/v1}, year={2023} } | padalkar2023guided | The robot learns to place the grid clamp in the grids on the table. | |||||||||||
57 | DLR Wheelchair Shared Control | DLR EDAN | 100 | 3.09 | Single Arm | CLASH hand (https://www.dlr.de/rm/en/desktopdefault.aspx/tabid-8178/#gallery/35438) | EEF Position | 1 | 0 | 0 | Templated | Human teleoperation using Shared Control Templates | Yes | No | Yes | Table Top, shelf | 5 | dlr_edan_shared_control_converted_externally_to_rlds | @inproceedings{vogel_edan_2020, title = {EDAN - an EMG-Controlled Daily Assistant to Help People with Physical Disabilities}, language = {en}, booktitle = {2020 {IEEE}/{RSJ} {International} {Conference} on {Intelligent} {Robots} and {Systems} ({IROS})}, author = {Vogel, Jörn and Hagengruber, Annette and Iskandar, Maged and Quere, Gabriel and Leipscher, Ulrike and Bustamante, Samuel and Dietrich, Alexander and Hoeppner, Hannes and Leidner, Daniel and Albu-Schäffer, Alin}, year = {2020} } @inproceedings{quere_shared_2020, address = {Paris, France}, title = {Shared {Control} {Templates} for {Assistive} {Robotics}}, language = {en}, booktitle = {2020 {IEEE} {International} {Conference} on {Robotics} and {Automation} ({ICRA})}, author = {Quere, Gabriel and Hagengruber, Annette and Iskandar, Maged and Bustamante, Samuel and Leidner, Daniel and Stulp, Freek and Vogel, Joern}, year = {2020}, pages = {7}, } | vogel_edan_2020, quere_shared_2020 | The robot grasps a set of different objects from a table top and a shelf. | |||||||||||
58 | ASU TableTop Manipulation | UR5 | 110 | 0.72 | Single Arm | Robotiq 2F-85 | EEF Position | 1 | 0 | 0 | Templated | Scripted | No | No | Yes | Table Top | 12.5 | asu_table_top_converted_externally_to_rlds | @inproceedings{zhou2023modularity, title={Modularity through Attention: Efficient Training and Transfer of Language-Conditioned Policies for Robot Manipulation}, author={Zhou, Yifan and Sonawani, Shubham and Phielipp, Mariano and Stepputtis, Simon and Amor, Heni}, booktitle={Conference on Robot Learning}, pages={1684--1695}, year={2023}, organization={PMLR} } @article{zhou2023learning, title={Learning modular language-conditioned robot policies through attention}, author={Zhou, Yifan and Sonawani, Shubham and Phielipp, Mariano and Ben Amor, Heni and Stepputtis, Simon}, journal={Autonomous Robots}, pages={1--21}, year={2023}, publisher={Springer} } | zhou2023modularity, zhou2023learning | The robot interacts with a few objects on a table. It picks up, pushes forward, or rotates the objects. | |||||||||||
59 | Stanford Robocook | Franka | 2,460 | 124.62 | Single Arm | Default | EEF Position | 4 | 4 | 0 | Templated | Scripted | Yes | Yes | Yes | Table Top, Kitchen (also toy kitchen) | 5 | stanford_robocook_converted_externally_to_rlds | @article{shi2023robocook, title={RoboCook: Long-Horizon Elasto-Plastic Object Manipulation with Diverse Tools}, author={Shi, Haochen and Xu, Huazhe and Clarke, Samuel and Li, Yunzhu and Wu, Jiajun}, journal={arXiv preprint arXiv:2306.14447}, year={2023} } | shi2023robocook | In the first task, the robot pinches the dough with an asymmetric gripper / two-rod symmetric gripper / two-plane symmetric gripper. In the second task, the robot presses the dough with a circle press / square press / circle punch / square punch. In the third task, the robot rolls the dough with a large roller / small roller. | |||||||||||
60 | ETH Agent Affordances | Franka | 120 | 17.27 | Mobile Manipulator | EEF Position | 0 | 1 | 0 | Templated | Expert Policy | Yes | No | Yes | Kitchen (also toy kitchen) | 66.6 | eth_agent_affordances | @inproceedings{schiavi2023learning, title={Learning agent-aware affordances for closed-loop interaction with articulated objects}, author={Schiavi, Giulio and Wulkop, Paula and Rizzi, Giuseppe and Ott, Lionel and Siegwart, Roland and Chung, Jen Jen}, booktitle={2023 IEEE International Conference on Robotics and Automation (ICRA)}, pages={5916--5922}, year={2023}, organization={IEEE} } | schiavi2023learning | The robot opens and closes an oven, starting from different initial positions and door angles. | ||||||||||||
61 | Imperial Wrist Cam | Sawyer | 170 | 0.08 | Single Arm | Robotiq 2F-85 | EEF Position | 1 | 0 | 1 | Natural | Human Kinesthetic | No | No | No | Table Top | 10 | imperialcollege_sawyer_wrist_cam | The robot interacts with different everyday objects performing tasks such as grasping, inserting, opening, stacking, etc. | |||||||||||||
62 | CMU Franka Pick-Insert Data | Franka | 520 | 50.29 | Single Arm | Default | EEF Position | 2 | 0 | 1 | Templated | Human VR | No | For a subset of the cameras | Yes | Table Top | 20 | iamlab_cmu_pickup_insert_converted_externally_to_rlds | @inproceedings{saxena2023multiresolution, title={Multi-Resolution Sensing for Real-Time Control with Vision-Language Models}, author={Saumya Saxena and Mohit Sharma and Oliver Kroemer}, booktitle={7th Annual Conference on Robot Learning}, year={2023}, url={https://openreview.net/forum?id=WuBv9-IGDUA} } | saxena2023multiresolution | The robot tries to pick up differently shaped objects placed in front of it. It also tries to insert particular objects into a cylindrical peg. | |||||||||||
63 | QUT Dexterous Manipulation | Franka | 200 | Mobile Manipulator | Default | EEF Position | 2 | 0 | 1 | Natural | Human VR | No | No | Yes | Table Top | 30 | qut_dexterous_manipulation | @misc{ceola2023lhmanip, author = {Federico Ceola and Krishan Rana and Niko S{\"u}nderhauf}, title = {LHManip: A Dataset for Long Horizon Manipulation Tasks}, year = {2023}} | ceola2023lhmanip | The robot performs long-horizon tasks in a tabletop setting. It sorts dishes and objects, cooks and serves food, sets the table, throws away waste paper, rolls dice, waters plants, and stacks toy blocks. | ||||||||||||
64 | MPI Muscular Proprioception | PAMY2 | 256 | Single Arm | Desired pressures for the artificial muscles | 0 | 0 | 0 | None | Scripted | Yes | No | Yes | The robot is alone in the environment; there are no other objects in the workspace. | 500 | @article{guist2023robust, title={A Robust Open-source Tendon-driven Robot Arm for Learning Control of Dynamic Motions}, author={Guist, Simon and Schneider, Jan and Ma, Hao and Berenz, Vincent and Martus, Julian and Gr{\"u}ninger, Felix and M{\"u}hlebach, Michael and Fiene, Jonathan and Sch{\"o}lkopf, Bernhard and B{\"u}chler, Dieter}, journal={arXiv preprint arXiv:2307.02654}, year={2023} } | guist2023robust | There is no task that the robot solves. It executes a combination of random multisine signals of target pressures, as well as fixed target pressures. | ||||||||||||||
65 | UIUC D3Field | Kinova Gen3 | 196 | 15.82 | Single Arm | Robotiq 2F-85 | EEF Position | 4 | 4 | 0 | None | Scripted | Yes | No | Yes | Table Top | 1 | uiuc_d3field | @article{wang2023d3field, title={D^3Field: Dynamic 3D Descriptor Fields for Generalizable Robotic Manipulation}, author={Wang, Yixuan and Li, Zhuoran and Zhang, Mingtong and Driggs-Campbell, Katherine and Wu, Jiajun and Fei-Fei, Li and Li, Yunzhu}, journal={arXiv preprint arXiv:}, year={2023}, } | wang2023d3field | The robot completes tasks specified by the goal image, including organizing utensils, shoes, mugs. | |||||||||||
66 | Austin Mutex | Franka | 1,500 | 20.79 | Single Arm | Default | EEF Position | 2 | 0 | 1 | Natural language annotations generated with GPT-4, followed by human correction. | Human Spacemouse | No | No | Yes | Table Top | 20 | utaustin_mutex | @inproceedings{shah2023mutex, title={{MUTEX}: Learning Unified Policies from Multimodal Task Specifications}, author={Rutav Shah and Roberto Mart{\'\i}n-Mart{\'\i}n and Yuke Zhu}, booktitle={7th Annual Conference on Robot Learning}, year={2023}, url={https://openreview.net/forum?id=PwqiqaaEzJ} } | shah2023mutex | The Mutex dataset involves a diverse range of tasks in a home environment, encompassing pick and place tasks like "putting bread on a plate," as well as contact-rich tasks such as "opening an air fryer and putting a bowl with dogs in it" or "taking out a tray from the oven and placing bread on it." | |||||||||||
67 | Berkeley Fanuc Manipulation | Fanuc Mate | 415 | 8.85 | Single Arm | Default | EEF Position | 2 | 0 | 1 | Natural | Human VR | Yes | No | Yes | Table Top | 10 | berkeley_fanuc_manipulation | @article{fanuc_manipulation2023, title={Fanuc Manipulation: A Dataset for Learning-based Manipulation with FANUC Mate 200iD Robot}, author={Zhu, Xinghao and Tian, Ran and Xu, Chenfeng and Ding, Mingyu and Zhan, Wei and Tomizuka, Masayoshi}, year={2023}, } | fanuc_manipulation2023 | A Fanuc robot performs various manipulation tasks. For example, it opens drawers, picks up objects, closes doors, closes computers, and pushes objects to desired locations. | |||||||||||
68 | CMU Food Manipulation | Franka | 4,200 | 720 | Single Arm | Default | EEF Position | 3 | 0 | 2 | Templated | Scripted | No | No | Yes | Table Top | 10 | cmu_playing_with_food | @inproceedings{sawhney2021playing, title={Playing with food: Learning food item representations through interactive exploration}, author={Sawhney, Amrita and Lee, Steven and Zhang, Kevin and Veloso, Manuela and Kroemer, Oliver}, booktitle={Experimental Robotics: The 17th International Symposium}, pages={309--322}, year={2021}, organization={Springer} } | sawhney2021playing | Robot interacting with different food items. | |||||||||||
69 | CMU Play Fusion | Franka | 576 | 6.68 | Single Arm | Robotiq Hand-E | EEF Position | 1 | 0 | 0 | Natural | Human VR | No | No | Yes | Table Top, Kitchen (also toy kitchen) | 5 | cmu_play_fusion | @inproceedings{chen2023playfusion, title={PlayFusion: Skill Acquisition via Diffusion from Language-Annotated Play}, author={Chen, Lili and Bahl, Shikhar and Pathak, Deepak}, booktitle={CoRL}, year={2023} } | chen2023playfusion | The robot plays in 3 complex scenes: a grill with many cooking objects (toaster, pan, etc.), where it has to pick, open, place, and close; a table it has to set by moving plates, cups, and utensils; and a sink/dishwasher area where it has to place dishes and hang cups. | |||||||||||
70 | CMU Stretch | Hello Stretch | 135 | 0.71 | Mobile Manipulator | Default | EEF Position | 1 | 0 | 0 | Templated | Expert Policy | No | No | Yes | Kitchen (also toy kitchen), Other Household environments | 10 | cmu_stretch | @inproceedings{bahl2023affordances, title={Affordances from Human Videos as a Versatile Representation for Robotics}, author={Bahl, Shikhar and Mendonca, Russell and Chen, Lili and Jain, Unnat and Pathak, Deepak}, booktitle={CVPR}, year={2023} } @article{mendonca2023structured, title={Structured World Models from Human Videos}, author={Mendonca, Russell and Bahl, Shikhar and Pathak, Deepak}, journal={CoRL}, year={2023} } | bahl2023affordances, mendonca2023structured | Robot interacting with different household environments. | |||||||||||
71 | RECON | Jackal | 11,830 | 18.73 | Wheeled Robot | EEF velocity | 2 | 1 | 1 | None | Scripted | Yes | No | No | Outdoors | 3 | berkeley_gnm_recon | @inproceedings{shah2021rapid, title={{Rapid Exploration for Open-World Navigation with Latent Goal Models}}, author={Dhruv Shah and Benjamin Eysenbach and Nicholas Rhinehart and Sergey Levine}, booktitle={5th Annual Conference on Robot Learning}, year={2021}, url={https://openreview.net/forum?id=d_SWJhyKfVw} } | shah2021rapid | Mobile robot explores outdoor environments using a scripted policy. | ||||||||||||
72 | CoryHall | RC Car | 7,328 | 1.39 | Wheeled Robot | EEF velocity | 1 | 0 | 0 | None | Expert Policy | No | No | No | Hallways | 5 | berkeley_gnm_cory_hall | @inproceedings{kahn2018self, title={Self-supervised deep reinforcement learning with generalized computation graphs for robot navigation}, author={Kahn, Gregory and Villaflor, Adam and Ding, Bosen and Abbeel, Pieter and Levine, Sergey}, booktitle={2018 IEEE international conference on robotics and automation (ICRA)}, pages={5129--5136}, year={2018}, organization={IEEE} } | kahn2018self | Small mobile robot navigates hallways in an office building using a learned policy. | ||||||||||||
73 | SACSoN | TurtleBot 2 | 3,000 | 7 | Wheeled Robot | EEF Position | 2 | 1 | 0 | None | Expert Policy | No | Yes | No | Hallways | 10 | berkeley_gnm_sac_son | @article{hirose2023sacson, title={SACSoN: Scalable Autonomous Data Collection for Social Navigation}, author={Hirose, Noriaki and Shah, Dhruv and Sridhar, Ajay and Levine, Sergey}, journal={arXiv preprint arXiv:2306.01874}, year={2023} } | hirose2023sacson | Mobile robot navigates pedestrian-rich environments (e.g. offices, school buildings etc.) and runs a learned policy that may interact with the pedestrians. | ||||||||||||
74 | RoboVQA | Google Robot | 61,153 | 3 embodiments: single-armed robot, single-armed human, single-armed human using grasping tools | Default | EEF Position | 1 | 1 | 0 | Natural | Human VR | No | Yes | Yes | Table Top, Kitchen (also toy kitchen), Other Household environments, Hallways, anything within 3 entire office buildings | 10 | robot_vqa | A robot or a human performs any long-horizon requests from a user within the entirety of 3 office buildings. | ||||||||||||||
75 | ALOHA | ViperX Bimanual | 451 | Bi-Manual | Custom 3D printed | EEF Position | 4 | 0 | 2 | Templated | Human Puppeteering | No | No | Yes | Table Top | 50 | @inproceedings{Zhao2023LearningFB, title={Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware}, author={Tony Zhao and Vikash Kumar and Sergey Levine and Chelsea Finn}, booktitle={Robotics: Science and Systems (RSS)}, year={2023}, url={https://arxiv.org/abs/2304.13705} } | Zhao2023LearningFB | Bi-manual robot performing complex, dexterous tasks like unwrapping candy and putting on shoes. | |||||||||||||
76 | v1.1 | DROID | Franka | 92,233 | 1670 | Single Arm | Robotiq 2F-85 | EEF Position | 3 | 3 | 1 | Natural | Human VR | Yes | Yes | Yes | Table Top, Kitchen (also toy kitchen), Other Household environments, Hallways | 15 | droid | @article{khazatsky2024droid, title = {DROID: A Large-Scale In-The-Wild Robot Manipulation Dataset}, author = {Alexander Khazatsky and Karl Pertsch and Suraj Nair and Ashwin Balakrishna and Sudeep Dasari and Siddharth Karamcheti and Soroush Nasiriany and Mohan Kumar Srirama and Lawrence Yunliang Chen and Kirsty Ellis and Peter David Fagan and Joey Hejna and Masha Itkina and Marion Lepert and Yecheng Jason Ma and Patrick Tree Miller and Jimmy Wu and Suneel Belkhale and Shivin Dass and Huy Ha and Arhan Jain and Abraham Lee and Youngwoon Lee and Marius Memmel and Sungjae Park and Ilija Radosavovic and Kaiyuan Wang and Albert Zhan and Kevin Black and Cheng Chi and Kyle Beltran Hatch and Shan Lin and Jingpei Lu and Jean Mercat and Abdul Rehman and Pannag R Sanketi and Archit Sharma and Cody Simpson and Quan Vuong and Homer Rich Walke and Blake Wulfe and Ted Xiao and Jonathan Heewon Yang and Arefeh Yavary and Tony Z. Zhao and Christopher Agia and Rohan Baijal and Mateo Guaman Castro and Daphne Chen and Qiuyu Chen and Trinity Chung and Jaimyn Drake and Ethan Paul Foster and Jensen Gao and David Antonio Herrera and Minho Heo and Kyle Hsu and Jiaheng Hu and Donovon Jackson and Charlotte Le and Yunshuang Li and Kevin Lin and Roy Lin and Zehan Ma and Abhiram Maddukuri and Suvir Mirchandani and Daniel Morton and Tony Nguyen and Abigail O'Neill and Rosario Scalise and Derick Seale and Victor Son and Stephen Tian and Emi Tran and Andrew E. Wang and Yilin Wu and Annie Xie and Jingyun Yang and Patrick Yin and Yunchu Zhang and Osbert Bastani and Glen Berseth and Jeannette Bohg and Ken Goldberg and Abhinav Gupta and Abhishek Gupta and Dinesh Jayaraman and Joseph J Lim and Jitendra Malik and Roberto Martín-Martín and Subramanian Ramamoorthy and Dorsa Sadigh and Shuran Song and Jiajun Wu and Michael C. Yip and Yuke Zhu and Thomas Kollar and Sergey Levine and Chelsea Finn}, year = {2024} } | khazatsky2024droid | Various household manipulation tasks | ||||||||||
77 | ConqHose | Spot | 139 | 2.71 | Mobile Manipulator | Default | EEF velocity | 6 | 0 | 0 | Natural | Scripted | Yes | Yes | Yes | Other Household environments, Hallways | 30 | conq_hose_manipulation | @misc{ConqHoseManipData, author={Peter Mitrano and Dmitry Berenson}, title={Conq Hose Manipulation Dataset, v1.15.0}, year={2024}, howpublished={https://sites.google.com/view/conq-hose-manipulation-dataset} } | ConqHoseManipData | The robot grabs, lifts, and drags the end of a vacuum hose around in an office environment. | |||||||||||
78 | DobbE | Hello Stretch | 5,208 | 21.1 | Mobile Manipulator | Hello Robot Dex Wrist | EEF Position | 1 | 1 | 1 | Natural | Human collection using tools | No | Yes | No | Kitchen (also toy kitchen), Other Household environments, Hallways | 3.75 | dobbe | @misc{shafiullah2023dobbe, title={On Bringing Robots Home}, author={Nur Muhammad Mahi Shafiullah and Anant Rai and Haritheja Etukuru and Yiqian Liu and Ishan Misra and Soumith Chintala and Lerrel Pinto}, year={2023}, eprint={2311.16098}, archivePrefix={arXiv}, primaryClass={cs.RO} } | shafiullah2023dobbe | The demo collector uses the Stick to collect data from 7 tasks, including door/drawer opening/closing, handle grasping, pick and place, and random play data. | |||||||||||
79 | FMB | Franka | 1,804 | 356.5 | Single Arm | Default | EEF velocity | 4 | 4 | 2 | Templated | Human VR | No | Yes | Yes | Table Top | 10 | fmb | @article{luo2024fmb, title={FMB: a Functional Manipulation Benchmark for Generalizable Robotic Learning}, author={Luo, Jianlan and Xu, Charles and Liu, Fangchen and Tan, Liam and Lin, Zipeng and Wu, Jeffrey and Abbeel, Pieter and Levine, Sergey}, journal={arXiv preprint arXiv:2401.08553}, year={2024} } | luo2024fmb | The robot interacts with diverse 3D-printed objects: it picks them up, repositions them, and assembles them. | |||||||||||
80 | IO-AI Office PicknPlace | Human | 3,847 | 89.33 | Human | Human Hand | EEF Position | 4 | 1 | 1 | Templated | Directly collected on the human body with mocap devices and ArUco markers | No | Yes | Yes | Table Top | 30 | io_ai_tech | A human interacts with diverse objects in 2 real office table-top scenes. The skills focus on pick and place, with tasks like picking glue from a plate or placing a stapler on a desk. The collectors offer more data on various scenes and skills on request. | |||||||||||||
81 | MimicPlay | Franka | 378 | 7.13 | Single Arm | Default | Operational space control (OSC): similar to task-space position control, but implemented as impedance control that accounts for the mass matrix. OSC is also used by VIOLA. | 3 | 0 | 0 | Dataset does not contain language instruction annotations | Human VR | Yes | No | Yes | Table Top | 15 | mimic_play | @article{wang2023mimicplay, title={Mimicplay: Long-horizon imitation learning by watching human play}, author={Wang, Chen and Fan, Linxi and Sun, Jiankai and Zhang, Ruohan and Fei-Fei, Li and Xu, Danfei and Zhu, Yuke and Anandkumar, Anima}, journal={arXiv preprint arXiv:2302.12422}, year={2023} } | wang2023mimicplay | The robot interacts with various appliances in five different scenes, including a kitchen with an oven; a study desk with a bookshelf and lamp; flowers and a vase; toy sandwich making; and cloth folding. It opens the microwave and drawers; places a book on the shelf; inserts a flower into the vase; and assembles a sandwich. | |||||||||||
82 | MobileALOHA | MobileALOHA | 276 | 47.83 | Mobile Manipulator | Default | Joint position | 3 | 0 | 0 | Templated | Human Puppeteering | No | No | Yes | Table Top, Kitchen (also toy kitchen), Other Household environments, Hallways | 50 | aloha_mobile | @article{fu2024mobile, author = {Fu, Zipeng and Zhao, Tony Z. and Finn, Chelsea}, title = {Mobile ALOHA: Learning Bimanual Mobile Manipulation with Low-Cost Whole-Body Teleoperation}, journal = {arXiv preprint}, year = {2024} } | fu2024mobile | The robot interacts with diverse appliances in a real kitchen and indoor environments. It wipes spilled wine, stores a heavy pot inside a wall cabinet, calls an elevator, pushes chairs, and cooks shrimp. | |||||||||||
83 | RoboSet | Franka | 18,250 | 178.65 | Single Arm | Robotiq 2F-85 | Joint position | 4 | 4 | 1 | Natural | Human VR | Yes | Yes | Yes | Table Top, Kitchen (also toy kitchen), Other Household environments | 5 | robo_set | @misc{bharadhwaj2023roboagent, title={RoboAgent: Generalization and Efficiency in Robot Manipulation via Semantic Augmentations and Action Chunking}, author={Homanga Bharadhwaj and Jay Vakil and Mohit Sharma and Abhinav Gupta and Shubham Tulsiani and Vikash Kumar}, year={2023}, eprint={2309.01918}, archivePrefix={arXiv}, primaryClass={cs.RO} } | bharadhwaj2023roboagent | The robot interacts with different objects in kitchen scenes. It performs articulated object manipulation of objects with prismatic joints and hinges. It wipes tables with cloth. It performs pick and place skills, and skills requiring precision like capping and uncapping. | |||||||||||
84 | TidyBot | TidyBot | 24 | 0.02 | Mobile Manipulator | Robotiq 2F-85 | Our dataset specifies a target receptacle for each object | 0 | 0 | 0 | Our dataset specifies object placements in text form | Human writes preferred object placements in text form | No | No | No | Kitchen (also toy kitchen), Other Household environments, living room, bedroom, kitchen, pantry room | N/A | tidybot | @article{wu2023tidybot, title = {TidyBot: Personalized Robot Assistance with Large Language Models}, author = {Wu, Jimmy and Antonova, Rika and Kan, Adam and Lepert, Marion and Zeng, Andy and Song, Shuran and Bohg, Jeannette and Rusinkiewicz, Szymon and Funkhouser, Thomas}, journal = {Autonomous Robots}, year = {2023} } | wu2023tidybot | The robot puts each object into the appropriate receptacle based on user preferences | |||||||||||
85 | VIMA | UR5 | 660,103 | 1390 | Single Arm | Suction cup and spatula | Primitive skills (pick-place and push) | 2 | 0 | 0 | Multimodal (image + language) templated instructions | Scripted | No | Yes | No | Table Top | N/A due to use of primitive skills | vima_converted_externally_to_rlds | @inproceedings{jiang2023vima, title = {VIMA: General Robot Manipulation with Multimodal Prompts}, author = {Yunfan Jiang and Agrim Gupta and Zichen Zhang and Guanzhi Wang and Yongqiang Dou and Yanjun Chen and Li Fei-Fei and Anima Anandkumar and Yuke Zhu and Linxi Fan}, booktitle = {Fortieth International Conference on Machine Learning}, year = {2023} } | jiang2023vima | The robot is conditioned on multimodal prompts (mixture of texts, images, and video frames) to conduct tabletop manipulation tasks, ranging from rearrangement to one-shot imitation. | |||||||||||
86 | SPOC | Hello Stretch | 233,000 | 765 | Single Arm | Default | Joint position | 2 | 2 | 2 | Scripted language but augmented with LLMs | Scripted | No | Yes | Yes | Kitchen (also toy kitchen), Other Household environments, Hallways, multi room environments | 10 | spoc | @article{spoc2023, author = {Kiana Ehsani and Tanmay Gupta and Rose Hendrix and Jordi Salvador and Luca Weihs and Kuo-Hao Zeng and Kunal Pratap Singh and Yejin Kim and Winson Han and Alvaro Herrasti and Ranjay Krishna and Dustin Schwenk and Eli VanderBilt and Aniruddha Kembhavi}, title = {Imitating Shortest Paths in Simulation Enables Effective Navigation and Manipulation in the Real World}, journal = {arXiv}, year = {2023}, eprint = {2312.02976} } | spoc2023 | The robot navigates in the environment and performs pick and place with open-vocabulary descriptions. | |||||||||||
87 | Plex RoboSuite | Franka | 450 | 1.26 | Single Arm | Default | EEF Position | 5 | 5 | 1 | None | Human Keyboard | No | Yes | Yes | Table Top, Tabletop with sections | 20 | plex_robosuite | @inproceedings{thomas2023plex, title={PLEX: Making the Most of the Available Data for Robotic Manipulation Pretraining}, author={Garrett Thomas and Ching-An Cheng and Ricky Loynd and Felipe Vieira Frujeri and Vibhav Vineet and Mihai Jalobeanu and Andrey Kolobov}, booktitle={CoRL}, year={2023} } | thomas2023plex | Opening a door, stacking 2 cubes, picking and placing various objects into specially designated areas, putting a loop onto a peg.
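
For any row above, the registered dataset name column (e.g. 'fmb' or 'droid') is an RLDS builder that the Dataset Colab loads with tensorflow_datasets. Below is a minimal sketch of that loading pattern; the gs://gresearch/robotics bucket path and the 0.1.0 version directory are assumptions carried over from the Colab, so verify the exact path for each dataset in the Github Repo first.

import tensorflow_datasets as tfds

# Pick any registered dataset name from the table above.
DATASET = 'asu_table_top_converted_externally_to_rlds'

# Build the RLDS dataset from the hosted builder directory.
# NOTE: bucket and 0.1.0 version subdirectory are assumptions;
# some datasets may use a different version string.
builder = tfds.builder_from_directory(
    builder_dir=f'gs://gresearch/robotics/{DATASET}/0.1.0')
ds = builder.as_dataset(split='train[:5]')  # first 5 episodes

# RLDS layout: each episode holds a nested dataset of per-timestep steps.
for episode in ds:
    for step in episode['steps']:
        observation, action = step['observation'], step['action']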