| System | Domain | Task | Organization(s) | Organization Categorization | Author(s) | Publication date | Year | Reference | Link | Citations | Inclusion criteria | Parameters | Training compute (FLOPs) | log10 compute | Training dataset | Training dataset size (datapoints) | Training duration (days) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| OpenAI Five | Games | Dota 2 | OpenAI | Industry | J Raiman, S Zhang, F Wolski | 13/12/2019 | 2019 | Dota 2 with Large Scale Deep Reinforcement Learning | https://arxiv.org/abs/1912.06680 | 4.54E+02 | SOTA improvement | 1.59E+08 | 6.7E+22 | 22.83 | | 4.54E+11 | 180 |
| Jurassic-1-Jumbo | Language | | AI21 Labs | Industry | Opher Lieber, Or Sharir, Barak Lenz, Yoav Shoham | 11/08/2021 | 2021 | Jurassic-1: Technical Details and Evaluation | https://uploads-ssl.webflow.com/60fd4503684b466578c0d307/61138924626a6981ee09caf6_jurassic_tech_paper.pdf | 9.00E+00 | | 1.78E+11 | 3.7E+23 | 23.57 | | 2.25E+11 | 60 |
| OpenAI Five Rerun | Games | Dota 2 | OpenAI | Industry | Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemysław "Psyho" Dębiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, Rafal Józefowicz, Scott Gray, Catherine Olsson, Jakub Pachocki, Michael Petrov, Henrique Pondé de Oliveira Pinto, Jonathan Raiman, Tim Salimans, Jeremy Schlatter, Jonas Schneider, Szymon Sidor, Ilya Sutskever, Jie Tang, Filip Wolski, Susan Zhang | 13/12/2019 | 2019 | Dota 2 with Large Scale Deep Reinforcement Learning | https://cdn.openai.com/dota-2.pdf | 3.49E+02 | SOTA improvement | 1.59E+08 | 1.3E+22 | 22.11 | | 5.31E+10 | 55 |
| AlphaStar | Games | StarCraft | DeepMind | Industry | Oriol Vinyals, Igor Babuschkin, Wojciech M. Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H. Choi, Richard Powell, Timo Ewalds, Petko Georgiev, Junhyuk Oh, Dan Horgan, Manuel Kroiss, Ivo Danihelka, Aja Huang, Laurent Sifre, Trevor Cai, John P. Agapiou, Max Jaderberg, Alexander S. Vezhnevets, Rémi Leblond, Tobias Pohlen, Valentin Dalibard, David Budden, Yury Sulsky, James Molloy, Tom L. Paine, Caglar Gulcehre, Ziyu Wang, Tobias Pfaff, Yuhuai Wu, Roman Ring, Dani Yogatama, Dario Wünsch, Katrina McKinney, Oliver Smith, Tom Schaul, Timothy Lillicrap, Koray Kavukcuoglu, Demis Hassabis, Chris Apps, David Silver | 30/10/2019 | 2019 | Grandmaster level in StarCraft II using multi-agent reinforcement learning | https://www.deepmind.com/blog/alphastar-grandmaster-level-in-starcraft-ii-using-multi-agent-reinforcement-learning | 1.04E+03 | Highly cited | 1.39E+08 | 2.0E+23 | 23.31 | | | 44 |
| AlphaGo Zero | Games | Go | DeepMind | Industry | D Silver, J Schrittwieser, K Simonyan, I Antonoglou | 19/10/2017 | 2017 | Mastering the game of Go without human knowledge | https://www.researchgate.net/publication/320473480_Mastering_the_game_of_Go_without_human_knowledge | 5.81E+03 | Highly cited | 4.64E+07 | 3.4E+23 | 23.53 | | 5.80E+09 | 40 |
| PaLM (540B) | Language | Language modelling | Google Research | Industry | Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, Noah Fiedel | 04/04/2022 | 2022 | PaLM: Scaling Language Modeling with Pathways | https://arxiv.org/abs/2204.02311 | | SOTA improvement | 5.40E+11 | 2.5E+24 | 24.40 | | 5.85E+11 | 38.3 |
| Gopher | Language | Language modelling | DeepMind | Industry | Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John Mellor, Irina Higgins, Antonia Creswell, Nat McAleese, Amy Wu, Erich Elsen, Siddhant Jayakumar, Elena Buchatskaya, David Budden, Esme Sutherland, Karen Simonyan, Michela Paganini, Laurent Sifre, Lena Martens, Xiang Lorraine Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena Gribovskaya, Domenic Donato, Angeliki Lazaridou, Arthur Mensch, Jean-Baptiste Lespiau, Maria Tsimpoukelli, Nikolai Grigorev, Doug Fritz, Thibault Sottiaux, Mantas Pajarskas, Toby Pohlen, Zhitao Gong, Daniel Toyama, Cyprien de Masson d'Autume, Yujia Li, Tayfun Terzi, Vladimir Mikulik, Igor Babuschkin, Aidan Clark, Diego de Las Casas, Aurelia Guy, Chris Jones, James Bradbury, Matthew Johnson, Blake Hechtman, Laura Weidinger, Iason Gabriel, William Isaac, Ed Lockhart, Simon Osindero, Laura Rimell, Chris Dyer, Oriol Vinyals, Kareem Ayoub, Jeff Stanway, Lorrayne Bennett, Demis Hassabis, Koray Kavukcuoglu, Geoffrey Irving | 08/12/2021 | 2021 | Scaling Language Models: Methods, Analysis & Insights from Training Gopher | https://deepmind.com/blog/article/language-modelling-at-scale | | | 2.80E+11 | 6.3E+23 | 23.80 | | 2.25E+11 | 38.3 |
| OPT-175B | Language | Language modelling | Meta AI | Industry | Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, Luke Zettlemoyer | 02/05/2022 | 2022 | OPT: Open Pre-trained Transformer Language Models | https://ai.facebook.com/blog/democratizing-access-to-large-scale-language-models-with-opt-175b/ | | | 1.75E+11 | 7.6E+23 | 23.88 | | 1.35E+11 | 38 |
| Meena | Language | Text autocompletion | Google AI | Industry | Daniel Adiwardana, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, Quoc V. Le | 28/01/2020 | 2020 | Towards a Human-like Open-Domain Chatbot | https://arxiv.org/abs/2001.09977 | 2.57E+02 | | 2.60E+09 | 1.1E+23 | 23.05 | | 4.00E+10 | 30 |
| TD-Gammon | Games | Backgammon | IBM | Industry | G Tesauro | 01/05/1992 | 1992 | Practical Issues in Temporal Difference Learning | https://papers.nips.cc/paper/1991/file/68ce199ec2c5517597ce0a4d89620f55-Paper.pdf | 1.34E+03 | Highly cited | 2.50E+04 | 1.8E+13 | 13.26 | | 6.30E+06 | 30 |
| AlphaGo Fan | Games | Go | Google DeepMind | Industry | D Silver, A Huang, CJ Maddison, A Guez, L Sifre | 01/10/2015 | 2015 | Mastering the game of Go with deep neural networks and tree search | https://www.nature.com/articles/nature16961 | 5.18E+03 | SOTA improvement | 8.21E+06 | 3.8E+20 | 20.58 | | | 29 |
| Visualizing CNNs | Vision | | NYU | Academia | MD Zeiler, R Fergus | 12/11/2013 | 2013 | Visualizing and Understanding Convolutional Networks | https://arxiv.org/abs/1311.2901 | 1.30E+04 | Highly cited | | 5.3E+17 | 17.73 | | | 12 |
| Meta Pseudo Labels | Vision | Image classification | Google AI, Brain team | Industry | Hieu Pham, Zihang Dai, Qizhe Xie, Minh-Thang Luong, Quoc V. Le | 01/03/2021 | 2021 | Meta Pseudo Labels | https://arxiv.org/abs/2003.10580 | 1.31E+02 | SOTA improvement | 4.80E+08 | 2.1E+23 | 23.33 | ImageNet | 1.30E+08 | 11 |
| Megatron-LM | Language | | NVIDIA | Industry | M Shoeybi, M Patwary, R Puri, P LeGresley | 17/09/2019 | 2019 | Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism | https://arxiv.org/abs/1909.08053 | 2.46E+02 | | 8.30E+09 | 9.1E+21 | 21.96 | | 3.48E+10 | 9.2 |
| GNMT | Language | Translation | Google | Industry | Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, Jeffrey Dean | 26/09/2016 | 2016 | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | https://research.google/pubs/pub45610/ | 4.50E+03 | Highly cited | 2.78E+08 | 6.9E+21 | 21.84 | | 3.60E+08 | 9 |
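
The "log10 compute" column is derived: it is simply the base-10 logarithm of the "Training compute (FLOPs)" value, rounded to two decimals. A minimal Python sketch for spot-checking the derivation (the dictionary below is illustrative, with values copied from three table rows; it is not part of the dataset):

```python
import math

# Spot-check the derived "log10 compute" column against "Training compute (FLOPs)".
# Keys and values are copied from the table above; expected log10 values in comments.
training_compute_flops = {
    "OpenAI Five": 6.7e22,   # table: 22.83
    "PaLM (540B)": 2.5e24,   # table: 24.40
    "TD-Gammon": 1.8e13,     # table: 13.26
}

for system, flops in training_compute_flops.items():
    # Rounding to two decimals reproduces the tabulated column.
    print(f"{system}: log10 compute = {math.log10(flops):.2f}")
```

Running this prints 22.83, 24.40, and 13.26, matching the tabulated values.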