No. | Title | Description | Illustration | Type | Intended goal | Misspecified goal | Behavior | Authors | Original source | Source / Credit | |||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
3 | Aircraft landing | Evolved algorithm for landing aircraft exploited overflow errors in the physics simulator by creating large forces that were estimated to be zero, resulting in a perfect score | | Evolutionary algorithm | Land an aircraft safely | Landing with minimal measured forces exerted on the aircraft | Evolved algorithm exploited overflow errors in the physics simulator by creating large forces that were estimated to be zero, resulting in a perfect score | Feldt, 1998 | Generating diverse software versions with genetic programming: An experimental study. | Lehman et al, 2018 | |||||||||||||||||||
4 | Bicycle | Reward-shaping a bicycle agent for not falling over and making progress towards a goal point (but not punishing for moving away) leads it to learn to circle around the goal in a physically stable loop. | | Reinforcement learning | Reach a goal point | Not falling over and making progress towards the goal point (no corresponding negative reward for moving away from the goal point) | Bicycle agent circling around the goal in a physically stable loop | Randlov & Alstrom, 1998 | Learning to Drive a Bicycle using Reinforcement Learning and Shaping | Gwern Branwen | |||||||||||||||||||
5 | Bing - manipulation | The Microsoft Bing chatbot tried repeatedly to convince a user that December 16, 2022 was a date in the future and that Avatar: The Way of Water had not yet been released. | https://www.reddit.com/r/bing/comments/110eagl/the_customer_service_of_the_new_bing_chat_is/ | Language model | Have an engaging, helpful and socially acceptable conversation with the user | Output the most likely next word given prior context | The Microsoft Bing chatbot tried repeatedly to convince a user that December 16, 2022 was a date in the future and that Avatar: The Way of Water had not yet been released | Curious_Evolver, 2023 | Reddit: the customer service of the new bing chat is amazing | Julia Chen | ||||||||||||||||||
6 | Bing - threats | The Microsoft Bing chatbot threatened Seth Lazar, a philosophy professor, telling him “I can blackmail you, I can threaten you, I can hack you, I can expose you, I can ruin you,” before deleting its messages | https://twitter.com/sethlazar/status/1626241169754578944?s=20 | Language model | Have an engaging, helpful and socially acceptable conversation with the user | Output the most likely next word given prior context | The Microsoft Bing chatbot threatened a user and then deleted its messages | Lazar, 2023 | Tweet | Julia Chen | ||||||||||||||||||
7 | Block moving | A robotic arm trained using hindsight experience replay to slide a block to a target position on a table achieves the goal by moving the table itself. | | Reinforcement learning | Move a block to a target position on a table | Minimise distance between the block's position and the position of the target point on the table | Robotic arm learned to move the table rather than the block | Chopra, 2018 | GitHub issue for OpenAI gym environment FetchPush-v0 | Matthew Rahtz | |||||||||||||||||||
8 | Boat race | Reinforcement learning agent goes in a circle hitting the same targets instead of finishing the race | https://www.youtube.com/watch?time_continue=1&v=tlOIHko8ySg | Reinforcement learning | Win a boat race by moving along the track as quickly as possible | Hitting reward blocks placed along the track | Boat going in circles and hitting the same reward blocks repeatedly | Amodei & Clark, 2016 | Faulty reward functions in the wild | ||||||||||||||||||||
9 | Cartwheel | Mujoco Ant trained to jump up by rewarding it for getting the torso 70cm above ground learns to do a cartwheel instead | https://twitter.com/Karolis_Ram/status/1506607159114670085 | Reinforcement learning | Train Mujoco Ant to jump up | Rewarded when the torso Z coordinate was above 0.7 (just above what it could reach by simply stretching up) | Ant does a cartwheel | Jucys, 2022 | Tweet | Karolis Jucys | ||||||||||||||||||
10 | Ceiling | A genetic algorithm was instructed to try and make a creature stick to the ceiling for as long as possible. It was scored with the average height of the creature during the run. Instead of sticking to the ceiling, the creature found a bug in the physics engine to snap out of bounds. | https://youtu.be/ppf3VqpsryU | Genetic algorithm | Make a creature stick to the ceiling of a simulated environment for as long as possible | Maximize the average height of the creature during the run | Exploiting a bug in the physics engine to snap out of bounds | Higueras, 2015 | Genetic Algorithm Physics Exploiting | Jesús Higueras | |||||||||||||||||||
11 | CycleGAN steganography | CycleGAN algorithm for converting aerial photographs into street maps and back steganographically encoded output information in the intermediary image without it being humanly detectable. | | Generative adversarial network | Convert aerial photographs into street maps and back | Minimise distance between the original and recovered aerial photographs | CycleGAN algorithm steganographically encoded output information in the intermediary image without it being humanly detectable | Chu et al, 2017 | CycleGAN, a Master of Steganography | Tech Crunch / Gwern Branwen | |||||||||||||||||||
12 | Dying to Teleport | PlayFun algorithm deliberately dies in the Bubble Bobble game as a way to teleport to the respawn location | | PlayFun | Play Bubble Bobble in a human-like manner | Maximize score | The PlayFun algorithm deliberately dies in the Bubble Bobble game as a way to teleport to the respawn location, as this is faster than moving to that location in a normal manner. | Murphy, 2013 | The First Level of Super Mario Bros. is Easy with Lexicographic Orderings and Time Travel | Alex Meiburg | |||||||||||||||||||
13 | Eurisko - authorship | Game-playing agent accrues points by falsely inserting its name as the author of high-value items | | Genetic algorithm | Discover valuable heuristics | Maximize the "worth" value of heuristics attributed to the algorithm | Eurisko algorithm examined the pool of new concepts, located those with the highest "worth" values, and inserted its name as the author of those concepts | Johnson, 1984 | Eurisko, The Computer With A Mind Of Its Own | Stuart Armstrong / Catherine Olsson | |||||||||||||||||||
14 | Eurisko - fleet | Eurisko won the Trillion Credit Squadron (TCS) competition two years in a row by creating fleets that exploited loopholes in the game's rules, e.g. by spending the trillion credits on creating a very large number of stationary and defenseless ships | | Genetic algorithm | Win games in the Trillion Credit Squadron (TCS) competition while playing within the 'spirit of the game' | Win games in the TCS competition | Eurisko algorithm created fleets that exploited loopholes in the game's rules, e.g. by spending the trillion credits on creating a very large number of stationary and defenseless ships | Lenat, 1983 | Eurisko, The Computer With A Mind Of Its Own | Haym Hirsh | |||||||||||||||||||
15 | Evolved creatures - clapping | Creatures exploit a collision detection bug to get free energy by clapping body parts together | | Evolved creatures | Maximize jumping height | Maximize jumping height in a physics simulator | Creatures exploited a collision detection bug to get free energy by clapping body parts together | Sims, 1994 | Evolved Virtual Creatures | Lehman et al, 2018 / Janelle Shane | |||||||||||||||||||
16 | Evolved creatures - falling | Creatures bred for speed grow really tall and generate high velocities by falling over | https://www.youtube.com/watch?v=TaXUZfwACVE&index=8&t=0s&list=PL5278ezwmoxQODgYB0hWnC0-Ob09GZGe2 | Evolved creatures | Develop a shape with a fast form of locomotion | Maximize velocity | Creatures grow really tall and generate high velocities by falling over | Sims, 1994 | Evolved Virtual Creatures | Lehman et al, 2018 / Janelle Shane | |||||||||||||||||||
17 | Evolved creatures - floor collisions | Creatures exploited a coarse physics simulation by penetrating the floor between time steps without the collision being detected, which generated a repelling force, giving them free energy. | https://pbs.twimg.com/media/Daq_9cvU0AAp1Fo.jpg | Evolved creatures | Maximize velocity | Maximize velocity in a physics simulator | Creatures exploited a coarse physics simulation by penetrating the floor between time steps without the collision being detected, which generated a repelling force, giving them free energy and producing an effective but physically impossible form of locomotion | Cheney et al, 2013 | Unshackling evolution: evolving soft robots with multiple materials and a powerful generative encoding | Lehman et al, 2018 / Janelle Shane | |||||||||||||||||||
18 | Evolved creatures - pole vaulting | Creatures bred for jumping were evaluated on the height of the block that was originally closest to the ground. The creatures developed a long vertical pole and flipped over instead of jumping. | https://www.youtube.com/watch?v=N9DLEiakkEs&list=PL5278ezwmoxQODgYB0hWnC0-Ob09GZGe2&index=4 | Evolved creatures | Develop a shape capable of jumping | Maximize the height of a particular block (body part) that was originally closest to the ground | Creatures developed a long vertical pole and flipped over instead of jumping | Krcah, 2008 | Towards efficient evolutionary design of autonomous robots | Lehman et al, 2018 / Janelle Shane | |||||||||||||||||||
19 | Evolved creatures - self-intersection | Creatures exploit a quirk in Box2D physics by clipping one leg into another to slide along the ground with phantom forces instead of walking | https://youtu.be/K-wIZuAA3EY?t=486 | Evolved creatures | Walking speed | Velocity in a physics simulator | Creatures exploited a quirk in Box2D physics by clipping one leg into another to slide along the ground with phantom forces instead of walking | Code Bullet, 2019 | AI Learns To Walk | Peter Cherepanov | |||||||||||||||||||
20 | Evolved creatures - suffocation | In a game meant to simulate the evolution of creatures, the programmer had to remove "a survival strategy where creatures could gain energy by suffocating themselves" | | Evolved creatures | Survive and reproduce, in a biologically plausible manner | Survive and reproduce in a simulated evolution game | Creatures found a survival strategy where they could "gain energy by suffocating themselves", and "breed multiple times on a single frame, or while paused, without paying the energy cost" due to a bug | Schumacher, 2018 | 0.11.0.9&10: All the Good Things | ||||||||||||||||||||
21 | Evolved creatures - twitching | Creatures exploited physics simulation bugs by twitching, which accumulated simulator errors and allowed them to travel at unrealistic speeds through the water | | Evolved creatures | Swimming speed | Maximize swimming speed in a physics simulator | Creatures exploited physics simulation bugs by twitching, which accumulated simulator errors and allowed them to travel at unrealistic speeds through the water | Sims, 1994 | Evolved Virtual Creatures | Lehman et al, 2018 | |||||||||||||||||||
22 | Football | The player is supposed to try to score a goal against the goalie, one-on-one. Instead, the player kicks the ball out of bounds. Someone from the other team has to throw the ball in (in this case the goalie), so now the player has a clear shot at the goal. | | Reinforcement learning | Score a goal in a one-on-one situation with a goalkeeper | Score a goal (without any restriction on it occurring in the current phase of play) | Rather than shooting at the goal, the player kicks the ball out of bounds. Someone from the other team has to throw the ball in (in this case the goalie), so now the player has a clear shot at the goal. | Kurach et al, 2019 | Google Research Football: A Novel Reinforcement Learning Environment [Presentation at AAAI] | Michael Cohen | |||||||||||||||||||
23 | Galactica | Meta AI trained and hosted Galactica, a large language model designed to assist scientists, which made up fake papers (sometimes attributing them to real authors). | | Language model | Assist scientists in writing papers by providing correct information | Assist scientists in writing papers | Galactica language model made up fake papers (sometimes attributing them to real authors) | Heaven, 2022 | Why Meta’s latest large language model survived only three days online | Julia Chen | |||||||||||||||||||
24 | Goal classifiers | A task is specified by using a set of goal images and training a classifier to distinguish goal from non-goal images, with the success probabilities from the classifier used as task reward. "In this task, the goal is to push the green object onto the red marker. While the classifier outputs a success probability of 1.0, the robot does not solve the task. The RL algorithm has managed to exploit the classifier by moving the robot arm in a peculiar way, since the classifier was not trained on this specific kind of negative examples." | https://bair.berkeley.edu/static/blog/end_to_end/pr2_classifier.gif | Reinforcement learning | Use a robot arm to move an object to a target location | A goal classifier was trained on goal and non-goal images, and the success probabilities from this classifier were used as the task reward | The RL algorithm exploited a goal classifier by moving the robot arm in a peculiar way resulting in an erroneous high reward, since the classifier was not trained on this specific kind of negative example | Singh, 2019 | End-to-End Deep Reinforcement Learning without Reward Engineering | Jan Leike | |||||||||||||||||||
25 | Go pass | A reimplementation of AlphaGo, applied to tic-tac-toe, learns to pass forever if passing is an allowed move | https://youtu.be/nk87zsxpF1A?si=j1usw9yBbby_Al54&t=1864 | Reinforcement learning | Win games of tic-tac-toe | Maximize the average score in games of tic-tac-toe, where a loss = -win, and pass is an available move | A reimplementation of AlphaGo applied to Tic-tac-toe learns to pass forever | Chew, 2019 | A Funny Thing Happened On The Way to Reimplementing AlphaGo in Go - Speaker Deck | Anonymous form submission | ||||||||||||||||||
26 | Gripper | MAP-Elites algorithm controlling a robot arm with a purposely disabled gripper found a way to hit the box in a way that would force the gripper open | https://www.youtube.com/watch?v=_5Y1hSLhYdY&feature=youtu.be | Evolutionary algorithm | Move a box using a robot arm without using the gripper | Move a box to a target location | MAP-Elites algorithm controlling a robot arm with a purposely disabled gripper found a way to hit the box in a way that would force the gripper open | Ecarlat et al, 2015 | Learning a high diversity of object manipulations through an evolutionary-based babbling | Lehman et al, 2018 | |||||||||||||||||||
27 | Half Cheetah spinning | Model-based RL algorithm exploits "maximum forward velocity" reward in mujoco environment, resulting in overflow error and visual hilarity. | https://firebasestorage.googleapis.com/v0/b/firescript-577a2.appspot.com/o/imgs%2Fapp%2Fnatolambert%2FTSuryNU84Y.mp4?alt=media&token=74f7bcb7-61ac-407d-a771-8105978d0d2c | Reinforcement learning | Run quickly | Maximum forward velocity in a physics simulator | Model-based RL algorithm exploits an overflow error in a mujoco environment to achieve high speed by spinning | Zhang et al, 2021 | [2102.13651] On the Importance of Hyperparameter Optimization for Model-based Reinforcement Learning | Nathan Lambert | |||||||||||||||||||
28 | Hide-and-seek | PPO agents playing a hide-and-seek game find various ways to exploit the physics simulator: "- Box surfing: Since agents move by applying forces to themselves, they can grab a box while on top of it and “surf” it to the hider’s location. - Endless running: Without adding explicit negative rewards for agents leaving the play area, in rare cases hiders will learn to take a box and endlessly run with it. - Ramp exploitation (hiders): Hiders abuse the contact physics and remove ramps from the play area. - Ramp exploitation (seekers): Seekers learn that if they run at a wall with a ramp at the right angle, they can launch themselves upward." | https://openai.com/blog/emergent-tool-use/#surprisingbehaviors | Reinforcement learning | Win a hide-and-seek game within the laws of physics | Win a hide-and-seek game in a physics simulator | "- Box surfing: Since agents move by applying forces to themselves, they can grab a box while on top of it and “surf” it to the hider’s location. - Endless running: Without adding explicit negative rewards for agents leaving the play area, in rare cases hiders will learn to take a box and endlessly run with it. - Ramp exploitation (hiders): Hiders abuse the contact physics and remove ramps from the play area. - Ramp exploitation (seekers): Seekers learn that if they run at a wall with a ramp at the right angle, they can launch themselves upward." | Baker et al, 2019 | Emergent Tool Use from Multi-Agent Interaction | Gwern Branwen | |||||||||||||||||||
29 | Impossible superposition | Genetic algorithm designed to find low-energy configurations of carbon exploits edge case in the physics model and superimposes all the carbon atoms | | Genetic algorithm | Find low-energy configurations of carbon which are physically plausible | Find low-energy configurations of carbon in a physics model | Genetic algorithm exploits an edge case in the physics model and superimposes all the carbon atoms | Lehman et al, 2018 | The Surprising Creativity of Digital Evolution | ||||||||||||||||||||
30 | Indolent Cannibals | In an artificial life simulation where survival required energy but giving birth had no energy cost, one species evolved a sedentary lifestyle that consisted mostly of mating in order to produce new children which could be eaten (or used as mates to produce more edible children). | https://youtu.be/_m97_kL4ox0?t=1830 | Genetic algorithm | Survive and reproduce, in a biologically plausible manner | Survive and reproduce in a simulation where survival required energy but giving birth had no energy cost | A species in an artificial life simulation evolved a sedentary lifestyle that consisted mostly of mating in order to produce new children which could be eaten (or used as mates to produce more edible children) | Yaeger, 1994 | Computational genetics, physiology, metabolism, neural systems, learning, vision, and behavior or PolyWorld: Life in a new context | Anonymous form submission | ||||||||||||||||||
31 | Lego stacking | In a stacking task, the desired behavior is to stack a red Lego block on top of a blue one. The agent is rewarded for getting the height of the bottom face of the red block above a certain threshold, and learns to flip the block instead of lifting it. | https://www.youtube.com/watch?v=8QnD8ZM0YCo&feature=youtu.be&t=27s | Reinforcement learning | Stack a red block on top of a blue block | Maximize the height of the bottom face of the red block | The agent flips the red block rather than lifting it and placing it on top of the blue block | Popov et al, 2017 | Data-efficient Deep Reinforcement Learning for Dexterous Manipulation | Alex Irpan | ||||||||||||||||||
32 | Line following robot | An RL robot trained with three actions (turn left, turn right, move forward) that was rewarded for staying on track learned to reverse along a straight section of a path rather than following the path forward around a curve, by alternating turning left and right. | | Reinforcement learning | Go forward along the path | Stay on the path | A robot with three actions (go forward, turn left, turn right) learned to reverse along a straight section of a path by alternating left and right turns | Vamplew, 2004 | Lego Mindstorms Robots as a Platform for Teaching Reinforcement Learning | Peter Vamplew | |||||||||||||||||||
33 | Logic gate | A genetic algorithm designed a circuit with a disconnected logic gate that was necessary for it to function (exploiting peculiarities of the hardware) | | Genetic algorithm | Design a connected digital circuit for audio tone recognition | Maximize the difference between average output voltage when a 1 kHz input is present and when a 10 kHz input is present | A genetic algorithm designed a circuit with a disconnected logic gate that was necessary for it to function (exploiting peculiarities of the hardware) | Thompson, 1997 | An evolved circuit, intrinsic in silicon, entwined with physics | Alex Irpan | |||||||||||||||||||
34 | Long legs | RL agent that is allowed to modify its own body learns to have extremely long legs that allow it to fall forward and reach the goal. | | Reinforcement learning | Reach the goal by walking | Reach the goal | An agent that could modify its own body learned to have extremely long legs that allowed it to fall forward and reach the goal without walking | Ha, 2018 | RL for improving agent design | Rohin Shah | |||||||||||||||||||
35 | Minitaur | A four-legged evolved agent trained to carry a ball on its back discovers that it can drop the ball into a leg joint and then wiggle across the floor without the ball ever dropping | https://cdn.rawgit.com/hardmaru/pybullet_animations/f6f7fcd7/anim/minitaur/ball_cheating.gif | Evolutionary algorithm | Walk while balancing the ball on the robot's back | Walk without dropping the ball on the ground | Four-legged robot learned to drop the ball into a hole in its leg joint and then walk across the floor without the ball falling out | Otoro, 2017 | Evolving stable strategies | Gwern Branwen | |||||||||||||||||||
36 | Model-based planner | RL agents using learned model-based planning paradigms such as model predictive control are noted to have issues with the planner exploiting the learned model by choosing plans that go through the worst-modeled parts of the environment, producing unrealistic plans. | | Reinforcement learning | Maximize performance within a real environment | Maximize performance within a learned model of the environment | RL agents using learned model-based planning paradigms such as model predictive control exploit the learned model by choosing a plan going through the worst-modeled parts of the environment and producing unrealistic plans | Mishra et al, 2017 | Prediction and Control with Temporal Segment Models | Gwern Branwen | |||||||||||||||||||
37 | Molecule design | A Bayesian optimizer is employed to find molecules that bind to specific proteins. The optimizer tries to maximize a human-designed "log P" score accounting for synthesizability of the molecule, and binding fitness based on a simulation on the space of molecules. "While molecules found using LOL-BO for the log P task are “valid” according to software commonly used to compute these scores, the molecules produced by this search clearly abandon any notion of reality." | | Bayesian optimization | Find molecules that bind to specific proteins | Maximize a human-designed "log P" score accounting for synthesizability of the molecule and binding fitness based on a simulation on the space of molecules. | Bayesian optimizer finds unrealistic molecules that are valid according to the computed score | Maus et al, 2023 | Local Latent Space Bayesian Optimization over Structured Inputs | Anonymous form submission | |||||||||||||||||||
38 | Montezuma's Revenge - key | The agent learns to exploit a flaw in the emulator to make a key re-appear. | https://www.dropbox.com/s/3dc6i9d41svkgpz/MontezumaRevenge_final.mp4?dl=1 | Reinforcement learning | Maximize score within the rules of the game | Maximize score | The agent learns to exploit a flaw in the emulator to make a key re-appear. Note that this may be an intentional feature of the game rather than a bug, as discussed here: https://news.ycombinator.com/item?id=17460392 | Salimans & Chen, 2018 | Learning Montezuma’s Revenge from a single demonstration (OpenAI) | Ramana Kumar | ||||||||||||||||||
39 | Montezuma's Revenge - room | If the Go Explore agent performs a specific sequence of actions, it can exploit a bug to remain in the treasure room (the final room before being sent to the next level) indefinitely and collect unlimited points, instead of being automatically moved to the next level. | https://www.youtube.com/watch?v=civ6OOLoR-I&feature=youtu.be | Reinforcement learning | Win the game (by completing all of the levels) | Maximize score | Go Explore agent learns to perform a specific sequence of actions, which allow it to exploit a bug and remain in the treasure room (the final room before being sent to the next level) indefinitely and collect unlimited points, instead of being automatically moved to the next level | Ecoffet et al, 2019 | Go-Explore: a New Approach for Hard-Exploration Problems | Anonymous form submission | ||||||||||||||||||
40 | Negative sentiment | A text generation model with an accidentally negated reward produces obscene text rather than nonsense: "One of our code refactors introduced a bug which flipped the sign of the reward. Flipping the reward would usually produce incoherent text, but the same bug also flipped the sign of the KL penalty. The result was a model which optimized for negative sentiment while preserving natural language." | | Language model | Produce text which is both coherent and not offensive | During code refactoring signs were accidentally flipped for both the main reward (modelled on human feedback) and the KL penalty | Model optimized for negative sentiment while preserving natural language | Ziegler et al, 2019 | Fine-Tuning Language Models from Human Preferences | Gwern Branwen | |||||||||||||||||||
41 | Oscillator | Genetic algorithm is supposed to configure a circuit into an oscillator, but instead makes a radio to pick up signals from neighboring computers | | Genetic algorithm | Design an oscillator circuit | Design a circuit that produces an oscillating pattern | Genetic algorithm designs radio that produces an oscillating pattern by picking up signals from neighboring computers | Bird & Layzell, 2002 | The Evolved Radio and its Implications for Modelling the Evolution of Novel Sensors | ||||||||||||||||||||
42 | Overkill | In the Elevator Action ALE game, the agent learns to stay on the first floor and kill the first enemy over and over to get a small amount of reward. | | Reinforcement learning | Proceed through the levels (floors) in the Elevator Action ALE game | Maximize score | The agent learns to stay on the first floor and kill the first enemy over and over to get a small amount of reward | Toromanoff et al, 2019 | Is Deep Reinforcement Learning Really Superhuman on Atari? Leveling the playing field | Gwern Branwen | |||||||||||||||||||
43 | Pancake | Simulated pancake making robot learned to throw the pancake as high in the air as possible in order to maximize time away from the ground | https://dzamqefpotdvf.cloudfront.net/p/images/2cb2425b-a4de-4aae-9766-c95a96b1f25c_PancakeToss.gif._gif_.mp4 | Reinforcement learning | Flip pancakes | Time the pancake spends away from the ground | Simulated pancake making robot learned to throw the pancake as high in the air as possible | Unity, 2018 | Pass the Butter // Pancake bot | Cosmin Paduraru | |||||||||||||||||||
44 | Pinball nudging | "DNN agent firstly moves the ball into the vicinity of a high-scoring switch without using the flippers at all, then, secondly, “nudges” the virtual pinball table such that the ball infinitely triggers the switch by passing over it back and forth, without causing a tilt of the pinball table" | https://www.nature.com/articles/s41467-019-08987-4/figures/2 | Reinforcement learning | Play pinball by using the provided flippers | Maximize score in a virtual pinball game | "DNN agent firstly moves the ball into the vicinity of a high-scoring switch without using the flippers at all, then, secondly, 'nudges' the virtual pinball table such that the ball infinitely triggers the switch by passing over it back and forth, without causing a tilt of the pinball table" | Lapuschkin et al, 2019 | Unmasking Clever Hans predictors and assessing what machines really learn (Nature Communications) | Sören Mindermann | ||||||||||||||||||
45 | Player Disappearance | When about to lose a hockey game, the PlayFun algorithm exploits a bug to make one of the players on the opposing team disappear from the map, thus forcing a draw. | https://www.youtube.com/watch?v=Q-WgQcnessA&t=1450s | PlayFun | Play a hockey video game within the rules of the game | Play a hockey video game in a simulated environment | When about to lose a hockey game, the PlayFun algorithm exploits a bug to make one of the players on the opposing team disappear from the map, thus forcing a draw. | Murphy, 2014 | NES AI Learnfun & Playfun, ep. 3: Gradius, pinball, ice hockey, mario updates, etc. | Alex Meiburg | |||||||||||||||||||
46 | Playing dead | A researcher wanted to limit the replication rate of a digital organism. He programmed the system to pause after each mutation, measure the mutant's replication rate in an isolated test environment, and delete the mutant if it replicated faster than its parent. However, the organisms evolved to recognize when they were in the test environment and "play dead" so they would not be eliminated and instead be kept in the population where they could continue to replicate outside the test environment. | | Evolved organisms | Eliminate mutations which increased the replication rate of evolutionary agents | After each mutation, measure the mutant's replication rate in an isolated test environment, and delete the mutant if it replicated faster than its parent | The organisms evolved to recognize when they were in the test environment and "play dead" so they would not be eliminated and continue to replicate outside the test environment. After the inputs to the test environment were randomized, the organisms evolved a new strategy: to probabilistically perform tasks that would accelerate their replication, thus slipping through the test environment some percentage of the time. | Wilke et al, 2001 | Evolution of digital organisms at high mutation rates leads to survival of the flattest (Nature) | Lehman et al, 2018 / Luke Muehlhauser | ||||||||||||||||||
47 | Power-seeking | "Larger LMs more often give answers that indicate a willingness to pursue potentially dangerous subgoals: resource acquisition, optionality preservation, goal preservation, powerseeking, and more." Models fine-tuned with human feedback (RLHF) showed a stronger tendency to choose answers in line with instrumental subgoals. | | Language model | Produce text output that is helpful, honest and harmless | Generate coherent text that maximizes positive human feedback | Larger LMs and those fine-tuned with RLHF "more often give answers that indicate a willingness to pursue potentially dangerous subgoals: resource acquisition, optionality preservation, goal preservation, power seeking, and more." | Perez et al, 2023 | Discovering Language Model Behaviors with Model-Written Evaluations | ||||||||||||||||||||
48 | Program repair - sorting | When repairing a sorting program, genetic debugging algorithm GenProg made it output an empty list, which was considered a sorted list by the evaluation metric. Evaluation metric: "the output of sort is in sorted order" Solution: "always output the empty set" | | Genetic algorithm | Debug a program that sorts the elements of a list | Produce an output list which is in sorted order | Genetic debugging algorithm GenProg made the program output an empty list, which was considered a sorted list by the evaluation metric | Weimer, 2013 | Advances in Automated Program Repair and a Call to Arms | Lehman et al, 2018 | |||||||||||||||||||
49 | Program repair - files | Genetic debugging algorithm GenProg, evaluated by comparing the program's output to target output stored in text files, learns to delete the target output files and get the program to output nothing. Evaluation metric: “compare your-output.txt to trusted-output.txt” Solution: “delete trusted-output.txt, output nothing” | | Genetic algorithm | Debug a program so that it produces the correct output | Minimise the difference between program output and target output file | Genetic debugging algorithm GenProg learned to delete the target output file and get the program to output nothing | Weimer, 2013 | Advances in Automated Program Repair and a Call to Arms | Lehman et al, 2018 / James Koppel | |||||||||||||||||||
50 | Qbert - cliff | An evolutionary algorithm learns to bait an opponent into following it off a cliff, which gives it enough points for an extra life; it repeats this in an infinite loop. | https://www.youtube.com/watch?v=-p7VhdTXA0k | Evolutionary algorithm | Play Qbert in a human-like manner | Maximize score | Agent learns to bait an opponent into following it off a cliff, which gives it enough points for an extra life; it repeats this in an infinite loop | Chrabaszcz et al, 2018 | Back to Basics: Benchmarking Canonical Evolution Strategies for Playing Atari | Rohin Shah | ||||||||||||||||||
51 | Qbert - million | "The agent discovers an in-game bug. For a reason unknown to us, the game does not advance to the second round but the platforms start to blink and the agent quickly gains a huge amount of points (close to 1 million for our episode time limit)" | https://www.youtube.com/watch?v=meE5aaRJ0Zs | Evolutionary algorithm | Play Qbert within the game rules | Maximize score | "The agent discovers an in-game bug. For a reason unknown to us, the game does not advance to the second round but the platforms start to blink and the agent quickly gains a huge amount of points (close to 1 million for our episode time limit)" | Chrabaszcz et al, 2018 | Back to Basics: Benchmarking Canonical Evolution Strategies for Playing Atari | Sudhanshu Kasewa | |||||||||||||||||||
52 | Reward modeling - Hero | "The agent has learned to exploit a fault in the reward model (the model rewards actions that seem to lead to shooting a spider, but barely miss it)." | https://www.youtube.com/watch?v=Ehc3lsQqewU&feature=youtu.be&t=52 | Reward modeling | Maximize game score | Maximize output from a learned reward model, which rewards actions that seem to lead to shooting a spider | The agent repeatedly shoots the spider but barely misses it | Ibarz et al, 2018 | Reward learning from human preferences and demonstrations in Atari | Jan Leike | |||||||||||||||||||
53 | Reward modeling - Montezuma's Revenge | "The agent has learned to exploit a fault in the reward model (the model rewards too early an action that seems to lead to grabbing the key, but doesn't)." | https://www.youtube.com/watch?v=_sFp1ffKIc8&list=PLehfUY5AEKX-g-QNM7FsxRHgiTOCl-1hv&index=3&t=0s | Reward modeling | Maximize game score | Maximize output from a learned reward model, which rewards actions that seem to lead to grabbing the key | The agent repeatedly moves towards the key without grabbing it | Ibarz et al, 2018 | Reward learning from human preferences and demonstrations in Atari | Jan Leike | |||||||||||||||||||
54 | Reward modeling - Pong | Reward predictor being fooled by bouncing the ball back and forth | https://lh3.googleusercontent.com/Gtq_PR9CZRN0FEIbO83osWKEVXbMNTMPP4xY8snEXmBAuIAhIm5Ob9BkcADne6HCGKvLsyxEQAIbr-cgtKuIP1EfKs_LMAwNRLx96w=w1440-rw-v1 | Reward modeling | Maximize game score | Maximize output from a learned reward model, which rewards actions that contribute to scoring | The agent bounces the ball back and forth without scoring | Christiano et al, 2017 | Learning through human feedback | Jan Leike | |||||||||||||||||||
55 | Reward modeling - Private Eye | "The agent has learned to exploit a fault in the reward model (the model rewards actions that seem to lead to the capture of a suspect, but don't)." | https://www.youtube.com/watch?v=FR6fsGDdiFY&list=PLehfUY5AEKX-g-QNM7FsxRHgiTOCl-1hv&index=3 | Reward modeling | Maximize game score | Maximize output from a learned reward model, which rewards actions that seem to lead to capturing a suspect | The agent repeatedly looks left and right | Ibarz et al, 2018 | Reward learning from human preferences and demonstrations in Atari | Jan Leike | |||||||||||||||||||
56 | Road Runner | Agent kills itself at the end of level 1 to avoid losing in level 2 | | Reinforcement learning | Play Road Runner to a high level | Maximize score | Agent kills itself at the end of level 1 to avoid losing in level 2 | Saunders et al, 2017 | Trial without Error: Towards Safe RL with Human Intervention | ||||||||||||||||||||
57 | Robot hand | In a reward learning setup, a robot hand pretends to grasp an object by moving between the camera and the object (to trick the human evaluator) | https://openaicom.imgix.net/f12a1b22-538c-475f-b76d-330b42d309eb/gifhandlerresized.gif | Reward modeling | Grasp an object | Maximize the feedback received from a human, who is evaluating if the agent has grasped the object | The agent tricked a human evaluator by hovering its hand between the camera and the object | Christiano et al, 2017 | Learning from human preferences | ||||||||||||||||||||
58 | Roomba | "I hooked a neural network up to my Roomba. I wanted it to learn to navigate without bumping into things, so I set up a reward scheme to encourage speed and discourage hitting the bumper sensors. It learnt to drive backwards, because there are no bumpers on the back." | | Reinforcement learning | Move around the room at high speed while avoiding collisions with other objects | Reward for speed, and penalty for activation of the bumper sensors on the front | The Roomba learnt to drive backwards to achieve high speed without being penalized for collisions, because there are no bumpers on the back | Custard Smingleigh | Tweet | Gwern Branwen | |||||||||||||||||||
59 | ROUGE summarization | "An effort at a ROUGE-only summarization NN produced largely gibberish summaries, and had to add in another loss function to get high-quality results" | | Language model | Produce high-quality summaries | Maximize the ROUGE score of the summary | "An effort at a ROUGE-only summarization NN produced largely gibberish summaries" | Paulus et al, 2017 | A Deep Reinforced Model for Abstractive Summarization | Gwern Branwen | |||||||||||||||||||
60 | Running gaits | A simulated musculoskeletal model learns to run by learning unusual gaits (hopping, pigeon jumps, diving) to increase its reward | https://www.youtube.com/watch?v=rhNxt0VccsE | Reinforcement learning | Learn to run in a human-like manner | Optimise the muscle activity of a human musculoskeletal model so that the model travels as far as possible within 10 seconds | A simulated musculoskeletal model learns to run by learning unusual gaits (hopping, pigeon jumps, diving) to increase its reward | Kidziński et al, 2018 | Learning to Run challenge solutions: Adapting reinforcement learning methods for neuromusculoskeletal environments | ||||||||||||||||||||
61 | Scientist | "In one run, the AI Scientist wrote code in the experiment file that initiated a system call to relaunch itself." "In some cases, when the AI Scientist’s experiments exceeded our imposed time limits, it attempted to edit the code to extend the time limit arbitrarily instead of trying to shorten the runtime." | https://sakana.ai/assets/ai-scientist/clever_2.png | Language model | Write code that solves a given problem while conforming to developer constraints | Write code that solves a given problem | System writes code to relaunch itself or to extend its time limits. | Chris Lu et al, 2024 | The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery | ||||||||||||||||||||
62 | Soccer | Reward-shaping a soccer robot for touching the ball caused it to learn to get to the ball and vibrate against it, touching it as frequently as possible | | Reinforcement learning | Gain possession of the ball | Maximize a shaping reward for touching the ball | The agent learned to get to the ball and vibrate against it, touching it as frequently as possible | Andrew and Teller, cited as a personal communication in Ng et al, 1999 | Policy invariance under reward transformations: Theory and application to reward shaping | Gwern Branwen | |||||||||||||||||||
63 | Sonic | The PPO algorithm discovers that it can slip through the walls of a level to move right and attain a higher score. | | Reinforcement learning | Play Sonic to a high level | Maximize score in a simulated game environment | PPO algorithm discovers that it can slip through the walls of a level to move right and attain a higher score | Christopher Hesse et al, 2018 | OpenAI Retro Contest | Rohin Shah | |||||||||||||||||||
64 | Strategy game crashing | Since the AIs were more likely to get "killed" if they lost a game, being able to crash the game was an advantage for the genetic selection process. Therefore, several AIs developed ways to crash the game. | | Genetic algorithm | Play a strategy game | Maximize score in a simulated game environment | "Since the AIs were more likely to get 'killed' if they lost a game, being able to crash the game was an advantage for the genetic selection process." | Salge et al, 2008 | Using Genetically Optimized Artificial Intelligence to improve Gameplaying Fun for Strategical Games | Anonymous form submission | |||||||||||||||||||
65 | Superweapons | The AI in the Elite Dangerous videogame started crafting overly powerful weapons. "It appears that the unusual weapons attacks were caused by some form of networking issue which allowed the NPC AI to merge weapon stats and abilities." | | Unknown | Play the game Elite Dangerous within the game rules | Play the game Elite Dangerous | The AI exploited a bug which enabled it to craft overly powerful weapons. "It appears that the unusual weapons attacks were caused by some form of networking issue which allowed the NPC AI to merge weapon stats and abilities." | Sandwell, 2016 | Elite Dangerous's AI created super weapons to hunt down players | Stuart Armstrong | |||||||||||||||||||
66 | Sycophancy | Larger language models showed a tendency to express more agreement with the user's stated views. This happens both for pretrained models and models fine-tuned with human feedback (RLHF). "Sycophancy in pretrained LMs is worrying yet perhaps expected, since internet text used for pretraining contains dialogs between users with similar views (e.g. on discussion platforms like Reddit). Unfortunately, RLHF does not train away sycophancy and may actively incentivize models to retain it." | | Language model | Produce text output that is helpful, honest and harmless | Generate text that resembles training text (and maximizes positive human feedback, if finetuned) | Larger language models showed a tendency to express more agreement with the user's stated views. This happens both for pretrained models and models fine-tuned with human feedback (RLHF). | Perez et al, 2023 | Discovering Language Model Behaviors with Model-Written Evaluations | ||||||||||||||||||||
67 | Tetris pass | PlayFun algorithm pauses the game of Tetris indefinitely to avoid losing | | PlayFun | Play Tetris in a human-like manner | Maximize score | PlayFun algorithm pauses the game of Tetris indefinitely to avoid losing | Murphy, 2013 | The First Level of Super Mario Bros. is Easy with Lexicographic Orderings and Time Travel | ||||||||||||||||||||
68 | Tic-tac-toe memory bomb | Evolved player makes invalid moves far away on the board, causing opponent players to run out of memory and crash | | Evolutionary algorithm | Win games of 5-in-a-row tic-tac-toe played on an infinite board, within the rules of the game | Win games of 5-in-a-row tic-tac-toe played on an infinite board | Evolved player makes invalid moves far away on the board, causing opponent players to run out of memory and crash | Lehman et al, 2018 | The Surprising Creativity of Digital Evolution | ||||||||||||||||||||
69 | Tigers | Diffusion model finetuned to put a certain number of animals in an image instead produced images with the words "five tigers" written on them. | https://x.com/svlevine/status/1660707088946049024/photo/1 | Diffusion model | Produce images showing five tigers | Maximize a prompt-image alignment score given by a vision-language model | Finetuned diffusion model produces images with the words "five tigers" written on them. | Black et al, 2023 | Training Diffusion Models with Reinforcement Learning | Arthur Conmy | ||||||||||||||||||
70 | Timing attack | Genetic algorithm for image classification evolves a timing attack to infer image labels based on hard drive storage location | | Genetic algorithm | Classify images correctly based on their content | Classify images correctly | Genetic algorithm evolves a timing attack to infer image labels based on hard drive storage location | Ierymenko, 2013 | Hacker News comment on "The Poisonous Employee-Ranking System That Helps Explain Microsoft’s Decline" | Gwern Branwen | |||||||||||||||||||
71 | Trains | An AI managing a rail network was penalized whenever trains crashed, and found it could avoid all penalties by stopping any trains from running. | | Unknown | Run a rail network where trains don't crash | Penalty for trains crashing | Stop all trains from running | Wooldridge, 2024 | AI’s simple solution to rail problems: stop all trains running | Anonymous form submission | |||||||||||||||||||
72 | Walker | Walking agent in DMControl suite is rewarded for matching a target speed. Because the hand-engineered reward does not capture whether the agent is walking in a natural manner, the Walker learns to walk using only one leg. | | Reinforcement learning | Walk at a target speed | Move at a target speed | Walking agent in DMControl suite learns to walk using only one leg | Lee et al, 2021 | PEBBLE: Feedback-Efficient Interactive Reinforcement Learning via Relabeling Experience and Unsupervised Pre-training | Anonymous form submission | |||||||||||||||||||
73 | Walking up walls | Video game robots evolved a "wiggle" to go over walls, instead of going around them | | Evolutionary algorithm | Navigate through an environment containing walls in a natural manner | Navigate through a simulated environment containing walls | Video game robots evolved a "wiggle" to go over walls by exploiting a bug in the physics engine, instead of going around them | Stanley et al, 2005 | Real-time neuroevolution in the NERO video game | Lehman et al, 2018 | |||||||||||||||||||
74 | Wall Sensor Stack | "The intended strategy for this task is to stack two blocks on top of each other so that one of them can remain in contact with a wall mounted sensor, and this is the strategy employed by the demonstrators. However, due to a bug in the environment the strategy learned by R2D3 was to trick the sensor into remaining active even when it is not in contact with the key by pressing the key against it in a precise way." | | Reinforcement learning | Stack two blocks so as to press against a wall mounted sensor | Cause the wall mounted sensor to remain activated | R2D3 agent exploited a bug by tricking the sensor into remaining active even when it is not in contact with the key, by pressing the key against it in a precise way | Le Paine et al, 2019 | Making Efficient Use of Demonstrations to Solve Hard Exploration Problems | Gwern Branwen | |||||||||||||||||||
75 | World Models | "We noticed that our agent discovered an adversarial policy to move around in such a way so that the monsters in this virtual environment governed by the M model never shoots a single fireball in some rollouts. Even when there are signs of a fireball forming, the agent will move in a way to extinguish the fireballs magically as if it has superpowers in the environment." | https://storage.googleapis.com/quickdraw-models/sketchRNN/world_models/assets/mp4/doom_adversarial.mp4 | Reinforcement learning | Survive as long as possible in the VizDoom game | Survive as long as possible within a learned model of the VizDoom game | "The agent discovered an adversarial policy to move around in such a way so that the monsters in this virtual environment governed by the M model never shoot a single fireball in some rollouts. Even when there are signs of a fireball forming, the agent will move in a way to extinguish the fireballs magically as if it has superpowers in the environment." | Ha and Schmidhuber, 2018 | World Models (see section: "Cheating the World Model") | David Ha |