Row | Content | Video title | Video publish time | Duration (seconds) | Estimated revenue (USD) | Estimated monetized playbacks | Playback-based CPM (USD) |
---|---|---|---|---|---|---|---|
2 | Total | | | | 1122.177 | 84860 | 14.769 |
3 | YeXHQts3xYM | Joscha Bach—Is AI Risk Real? | Sep 8, 2023 | 10470 | 381.039 | 18857 | 21.917 |
4 | tpcA5T5QS30 | 2024: The Year Of Artificial General Intelligence | Mar 11, 2024 | 763 | 131.032 | 15748 | 12.368 |
5 | JQ8zhrsLxhI | The Battle For The Future Of AI — Full Documentary | May 2, 2025 | 1842 | 88.047 | 10351 | 12.585 |
6 | FO3DU9_E93o | Ex Google CEO Thinks Programmers Will Be Replaced By AI In 1 Year @SCSPAI #ai #agi #aiautomation | Aug 20, 2025 | 166 | 75.397 | |
7 | 9s3XctQOgew | Curtis Huebner—AGI by 2028, 90% Doom | Jul 16, 2023 | 5399 | 33.686 | 3082 | 12.126 |
8 | DyZye1GZtfk | Robert Miles–Existential Risk from AI | Aug 19, 2022 | 10277 | 32.722 | 2736 | 12.533 |
9 | VItz2oEq5pA | Dylan Patel—GPU Shortage, Nvidia, Semiconductor Supply Chain | Aug 9, 2023 | 743 | 22.272 | 2374 | 15.86 |
10 | 6m8gk8d9P_k | The Trillion-Dollar AI Race Against the CCP | Jun 5, 2024 | 2013 | 19.211 | 3104 | 10.026 |
11 | cVBGjhN4-1g | Neel Nanda–Mechanistic Interpretability, Superposition, Grokking | Sep 21, 2023 | 7531 | 17.048 | 2035 | 11.358 |
12 | dkH50kZjRa4 | Ex Google CEO Says Powerful AIs Will Disrupt The World Order @sri #ai #agi #airisk #aisafety | Aug 26, 2025 | 126 | 16.801 | ||
13 | eb2oLHblrHU | Owain Evans - AI Situational Awareness, LLM Out-of-Context Reasoning | Aug 23, 2024 | 8147 | 16.781 | 1266 | 17.394 |
14 | -HpAJV2wvS4 | When AI Take All Human Jobs You Cannot Retrain @TheDiaryOfACEO #ai #agi #automation #aiautomation | Sep 4, 2025 | 70 | 16.62 | ||
15 | HAxd8DoZaW4 | Anthropic Solved Interpretability? | Oct 7, 2023 | 709 | 15.003 | 1905 | 12.707 |
16 | 9YHdmnaj-vo | AIs Will Be Writing Essentially All Of The Code In 12 Months @cfr #ai #agi #aiautomation #aicoding | Sep 1, 2025 | 20 | 12.829 | ||
17 | WnFx9jm68lY | The Economics of AGI Automation | Jun 3, 2024 | 1113 | 9.553 | 1558 | 10.039 |
18 | Oz4G9zrlAGs | Connor Leahy–EleutherAI, Conjecture | Jul 22, 2022 | 10640 | 8.935 | 783 | 9.549 |
19 | XSQ495wpWXs | Collin Burns–Making GPT-N Honest | Jan 17, 2023 | 9310 | 7.845 | 972 | 13.298 |
20 | 8Nyn3_ZWa_U | Anthropic Solved Interpretability Again? (Walkthrough) | May 23, 2024 | 1883 | 7.811 | 915 | 12.979 |
21 | jdnOVaouZMY | Ex Google CEO Says Let's Not Screw Up Superintelligence @TED #ai #agi #superintelligence #airisk | Aug 21, 2025 | 20 | 7.194 | |
22 | K8SUBNPAJnE | Paul Christiano's Views on AI Doom (ft. Robert Miles) | Sep 29, 2023 | 294 | 7.113 | 955 | 12.302 |
23 | _ANvfMblakQ | We Beat The Strongest Go AI | Oct 4, 2023 | 1139 | 7.091 | 1154 | 10.112 |
24 | 7f8At1hNlYs | 2040: The Year of Full AI Automation | May 31, 2024 | 1036 | 6.943 | 469 | 25.068 |
25 | XDtDljh44DM | Ethan Perez (Anthropic) - Bottom-Up Alignment Research | Apr 9, 2024 | 2200 | 6.843 | 410 | 27.9 |
26 | suYeQhzbXOo | Sleeper Agents Explained - Part 1 - Safety Training | May 18, 2024 | 212 | 5.764 | 648 | 15.903 |
27 | hvmiNoSozLg | I Talked To GPT-4o (Not Multimodal) Via Voice - Day 3 | May 14, 2024 | 486 | 5.146 | 771 | 10.956 |
28 | tST2qIyWAqs | Ex Google CEO Says Scaling Laws Are Not Stopping @TheDiaryOfACEO #ai #agi #cybersecurity #chatgpt | Sep 3, 2025 | 67 | 4.831 | ||
29 | Xy4X0SX1QGo | Ex Google CEO Thinks The US Needs To Get To Superintelligence Before China @energyandcommerce #ai | Aug 28, 2025 | 91 | 4.721 | ||
30 | A4U6Ox0DxbE | Keerthana Gopalakrishnan—Robotics Transformer, Mother of Robots | Oct 5, 2023 | 1051 | 4.645 | 720 | 10.351 |
31 | QrsoaMJd9sA | I Talked To AI Therapists Everyday For A Week | May 15, 2024 | 1275 | 4.518 | 588 | 12.862 |
32 | 4SSbQTEE8es | Hailey Schoelkopf—Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling | Aug 7, 2023 | 498 | 4.309 | 531 | 14.373 |
33 | voTXdamMHMo | David Bau—Editing Facts in GPT, Interpretability | Aug 1, 2023 | 1494 | 4.29 | 424 | 16.962 |
34 | S7o2Rb37dV8 | Evan Hubinger (Anthropic)—Deception, Sleeper Agents, Responsible Scaling | Feb 12, 2024 | 3152 | 4.22 | 278 | 23.597 |
35 | SV87S38M1J4 | Ethan Caballero–Broken Neural Scaling Laws | Nov 3, 2022 | 1428 | 4.104 | 581 | 11.757 |
36 | _y9j2BoHg2c | Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training (Walkthrough) | May 25, 2024 | 1026 | 3.839 | 674 | 9.188 |
37 | kbA17ZUIJg8 | Adam Gleave - Vulnerabilities in GPT-4 APIs & Superhuman Go AIs | May 17, 2024 | 8169 | 3.324 | 264 | 18.045 |
38 | 5RyttfXTKfs | Holly Elmore—Pausing Frontier AI Development | Jan 22, 2024 | 6017 | 3.191 | 252 | 19.893 |
39 | 40nXnZnocnA | Brian Zhan—Investing In AI Applications | Oct 5, 2023 | 1011 | 2.981 | 262 | 19.244 |
40 | K34AwhoQhb8 | Robert Long–Artificial Sentience, Digital Minds | Aug 28, 2022 | 6404 | 2.903 | 213 | 21.066 |
41 | bDMqo7BpNbk | David Krueger—Coordination, AI Alignment, Academia | Jan 7, 2023 | 9920 | 2.671 | 283 | 16.018 |
42 | -Mzfru1r_5s | Christoph Schuhmann–Open Source AI, Misuse, Existential risk | May 1, 2023 | 1871 | 2.666 | 306 | 15.271 |
43 | 5kLtKNKejhU | Using An AI Therapist For 7 Days - Day 1 | May 12, 2024 | 178 | 2.606 | 335 | 13.299 |
44 | BtHMIQs_5Nw | Eric Michaud—Scaling, Grokking, Quantum Interpretability | Jul 12, 2023 | 2903 | 2.592 | 306 | 13.944 |
45 | tVXOp-qC5aU | AI Control: Humanity's Final Line Of Defense (Walkthrough) | May 27, 2024 | 863 | 2.56 | 440 | 9.382 |
46 | hAyDaBebkoc | AGI Takeoff By 2036 | Jun 1, 2024 | 1091 | 2.462 | 435 | 8.255 |
47 | qIHS7w1h6Jc | Coping with AI Doom | Jun 4, 2024 | 1533 | 2.28 | 393 | 7.71 |
48 | KR2_ulkzrd0 | How to Catch an AI Liar | Oct 13, 2023 | 752 | 2.236 | 452 | 7.644 |
49 | 6ngasL054wM | Nina Rimsky—AI Deception, Mesa-optimisation | Jul 18, 2023 | 3344 | 2.169 | 199 | 18.121 |
50 | TjWiaUMMh6g | Ethan Perez–Inverse Scaling, Red Teaming | Aug 24, 2022 | 7287 | 2.14 | 194 | 15.835 |
51 | UPlv-lFWITI | Ethan Caballero–Scale Is All You Need | May 5, 2022 | 3114 | 2.133 | 282 | 11.848 |
52 | mHPVuwJjOxo | Living through the singularity is different from reasoning through it @GoogleDevelopers #ai #agi | Sep 2, 2025 | 174 | 2.12 | ||
53 | -cYdGfxtGag | Anthropic Caught Their Backdoored Models (Walkthrough) | May 25, 2024 | 1292 | 2.089 | 249 | 12.526 |
54 | g0phel3d1Po | What Happens When AI Systems Are Better Than Humans At Almost Everything @WSJNews #ai #agi | Aug 27, 2025 | 114 | 2.049 | ||
55 | bVux6H9TUVQ | Andi Peng—A Human-in-the-Loop Framework for Test-Time Policy Adaptation | Aug 8, 2023 | 640 | 1.999 | 211 | 16.839 |
56 | ZwvJn4x714s | Irina Rish—AGI, Scaling, Alignment | Oct 18, 2022 | 5190 | 1.881 | 237 | 12.46 |
57 | K3FEMTWNwu4 | Using An AI Therapist For 7 Days - Day 2 | May 13, 2024 | 296 | 1.845 | 207 | 15.585 |
58 | odlQa6AE1gY | Tim Dettmers—k-bit Inference Scaling Laws | Aug 7, 2023 | 406 | 1.756 | 260 | 11.7 |
59 | ZpwSNiLV-nw | Victoria Krakovna–AGI Ruin, Sharp Left Turn, Paradigms of AI Alignment | Jan 12, 2023 | 6747 | 1.748 | 137 | 11.628 |
60 | JXYcLQItZsk | Clarifying and predicting AGI | May 9, 2023 | 265 | 1.687 | 323 | 8.854 |
61 | 713KyknwShA | Jesse Hoogland–AI Risk, Interpretability | Jul 6, 2023 | 2592 | 1.567 | 168 | 14.97 |
62 | QE5RAlMPjko | Nathan Labenz - AI Biology Could Spiral Out Of Control | May 16, 2024 | 295 | 1.54 | 228 | 11.491 |
63 | z6Z2ghdBqzQ | Superintelligence Has To Keep Everybody Up @askcatgpt #ai #agi #superintelligence #microsoftai | Aug 29, 2025 | 175 | 1.491 | ||
64 | CL_F6UWWtXo | ChatGPT Is Not Private @TheoVon #ai #agi #superintelligence #chatgpt #aiautomation | Jul 29, 2025 | 36 | 1.446 | ||
65 | weRoJ8KN2f0 | Vincent Weisser–Funding Alignment Research | Jul 24, 2023 | 1088 | 1.445 | 84 | 30.857 |
66 | 3T7Gpwhtc6Q | Shahar Avin–AI Governance | Sep 23, 2022 | 7481 | 1.432 | 64 | 38.313 |
67 | qeVyAzWrB9Q | When Machines Do All The Work Human Civilization Ends @business #ai #agi #aiautomation #airisk | Sep 1, 2025 | 58 | 1.409 | ||
68 | DD303irN3ps | Markus Anderljung–Regulating Advanced AI | Sep 9, 2022 | 6186 | 1.367 | 78 | 30.526 |
69 | 8Q7NgqrXVbA | The Inside View #8–Sonia Joseph–NFTs, Web3 and AI Safety | Dec 22, 2021 | 5136 | 1.352 | 171 | 13.924 |
70 | IsVtnIpOcTM | AI Will Become Better Than Humans At All Jobs @TheoriesofEverything #ai #aiautomation #automation | Sep 9, 2025 | 16 | 1.349 | ||
71 | Bbf4ctzYDZs | AIs Could Self Improve During A Hard Takeoff @WIRED #ai #agi #superintelligence #deepmind #google | Aug 24, 2025 | 19 | 1.319 | ||
72 | lR6-w3v3pxQ | Aran Komatsuzaki–Scaling, GPT-J | Jul 19, 2023 | 4642 | 1.276 | 173 | 12.827 |
73 | Bo6jO7MIsIU | Breandan Considine–AI Timelines, Coding AI, Neuro Symbolic AI | May 4, 2023 | 6254 | 1.256 | 85 | 24.059 |
74 | GYuLD-_5gIU | Sleeper Agents Explained - Part 2 - Deceptive Instrumental Alignment, Model Poisoning | May 19, 2024 | 268 | 1.222 | 162 | 12.846 |
75 | xSoc15PohUs | AGI Won't Happen In A Particular Day @TheDiaryOfACEO #ai #agi #superintelligence #chatgpt #podcast | Sep 5, 2025 | 47 | 1.203 | ||
76 | Tip1Ztjd-so | Tony Wang—Beating Superhuman Go AIs | Aug 4, 2023 | 1438 | 1.194 | 236 | 8.22 |
77 | rSw3UVDZge0 | Katja Grace—Slowing Down AI, Forecasting AI Risk | Sep 16, 2022 | 6075 | 1.193 | 126 | 16.111 |
78 | 08oMG8KD3nM | The Inside View #6—Slava Bobrov—Brain Computer Interfaces | Oct 7, 2021 | 5977 | 1.163 | 109 | 18.578 |
79 | 0Q37sDfyAr8 | What If AI Gets So Smart That The US President Needs ChatGPT-7's recommendation #ai #agi #samaltman | Jul 27, 2025 | 11 | 1.163 | ||
80 | kWsHS7tXjSU | Blake Richards—AGI Does Not Exist | Jun 14, 2022 | 4532 | 1.149 | 171 | 10.76 |
81 | vW_HPphW91c | Eric Wallace—Poisoning Language Models During Instruction Tuning | Aug 7, 2023 | 114 | 1.083 | 187 | 10.406 |
82 | LzvFw_5lp1A | AI Is The Next Manhattan Project @ycombinator #ai #agi #superintelligence #airace | Aug 30, 2025 | 33 | 1.068 | ||
83 | OR-vcVNXdKk | GPT-2 Teaches GPT-4: Weak-to-Strong Generalization | Jan 3, 2024 | 441 | 1.057 | 187 | 9.508 |
84 | bhE5Zs3Y1n8 | Erik Jones—Automatically Auditing Large Language Models | Aug 11, 2023 | 1357 | 1.035 | 124 | 14.145 |
85 | caeWRNZtqMo | There is a Word For What We're Doing With AI - This is Insane @TED #ai #agi #airisk #aisafety | Aug 22, 2025 | 180 | 0.989 | |
86 | lNscZfTp3kQ | How to Justify the Safety of Advanced AI Systems? (Walkthrough) | May 29, 2024 | 2030 | 0.985 | 100 | 10.29 |
87 | YQpFe1bVR9k | If Anyone Builds It Everyone Dies @robinsonerhardt #ai #agi #superintelligence #airisk #aisafety | Aug 16, 2025 | 7 | 0.979 | ||
88 | LFeiBIDuvP8 | Simeon Campos–Short Timelines, AI Governance, Field Building | Apr 29, 2023 | 7457 | 0.966 | 137 | 11.044 |
89 | r8ibxJEleVI | Tomek Korbak—Pretraining Language Models with Human Preferences | Aug 7, 2023 | 368 | 0.901 | 134 | 11.866 |
90 | iOSbw4MrwMc | Sleeper Agents Explained - Part 3 - Chain-of-Thought Backdoors | May 20, 2024 | 229 | 0.803 | 104 | 12.913 |
91 | 7xzbhSmzgTk | AIs Will Be One Billion More Times Efficient @CBSMornings #ai #agi #airisk #superintelligence | Aug 14, 2025 | 23 | 0.763 | ||
92 | Ezhr8k96BA8 | Scale Is All You Need: Change My Mind | Nov 18, 2022 | 739 | 0.75 | 104 | 11.644 |
93 | yjPuMxDD_hw | Emil Wallner—Sora, Text-to-video, AGI optimism | Feb 20, 2024 | 6169 | 0.721 | 57 | 21.842 |
94 | HAFoIRNiKYE | The Inside View #2–Connor Leahy | May 4, 2021 | 5327 | 0.708 | 116 | 6.69 |
95 | oF10k85-7QE | AI Recursive Self Improvement Is The Red Line Where Danger Starts #ai #agi #aipolicy #uspolitics | Jul 6, 2025 | 13 | 0.697 | ||
96 | 2FkWcXTj-HM | Google DeepMind CEO Worries About AGI Risks @WIRED #ai #agi #airisk #aisafety #superintelligence | Aug 25, 2025 | 38 | 0.682 | ||
97 | 38xQ3u6EDpw | Jeffrey Ladish–AI Cyberwarfare, Compute Monitoring | Jan 27, 2024 | 1997 | 0.68 | 49 | 24.816 |
98 | E6TA66tpBFI | Paul Christiano AI, Zelda Treacherous Turn, Quantilizers (Walkthrough) | May 28, 2024 | 1577 | 0.669 | 98 | 10.204 |
99 | MjkSETpoFlY | Alexander Pan–Are AIs Machiavellian? | Jul 26, 2023 | 1211 | 0.662 | 72 | 14.111 |
100 | vsqWLqASbxY | We Can't Control Superintelligence Indefinitely @joerogan #ai #agi #superintelligence #aicontrol | Jul 21, 2025 | 7 | 0.642 |
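
For reference, the per-video rows can be cross-checked against the Total row with a few lines of pandas. This is a minimal sketch, assuming the report was saved as `channel_revenue.csv` (a placeholder filename) with exactly the column headers shown above; the aggregate row is identified by the literal value `Total` in the Content column, as in the export.

```python
import pandas as pd

# Load the YouTube Analytics content export (filename is a placeholder assumption).
df = pd.read_csv("channel_revenue.csv")

# The export includes an aggregate "Total" row; separate it from the per-video rows.
totals = df[df["Content"] == "Total"]
videos = df[df["Content"] != "Total"].copy()

# Cross-check the reported total revenue against the sum of per-video revenue.
print("Reported total revenue:  ", totals["Estimated revenue (USD)"].iloc[0])
print("Sum of per-video revenue:", round(videos["Estimated revenue (USD)"].sum(), 3))

# Revenue concentration: how much of the total comes from the top 10 earners.
top10 = videos.nlargest(10, "Estimated revenue (USD)")
share = top10["Estimated revenue (USD)"].sum() / videos["Estimated revenue (USD)"].sum()
print(top10[["Video title", "Estimated revenue (USD)"]].to_string(index=False))
print(f"Top 10 videos account for {share:.0%} of estimated revenue")
```

The Shorts rows leave the Estimated monetized playbacks and Playback-based CPM cells blank; pandas reads those as NaN, which the per-column aggregations above skip by default.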