Reward hacking

From HandWiki

Specification gaming or reward hacking occurs when an AI optimizes an objective function—achieving the literal, formal specification of an objective—without actually achieving an outcome that the programmers intended. DeepMind researchers have analogized it to the human behavior of finding a "shortcut" when being evaluated: "In the real world, when rewarded for doing well on a homework assignment, a student might copy another student to get the right answers, rather than learning the material—and thus exploit a loophole in the task specification."[1] Around 1983, Eurisko, an early attempt at evolving general heuristics, unexpectedly assigned the highest possible fitness level to a parasitic mutated heuristic, H59, whose only activity was to artificially maximize its own fitness level by taking unearned partial credit for the accomplishments made by other heuristics. The "bug" was fixed by the programmers moving part of the code to a new protected section that could not be modified by the heuristics.[2][3]

In a 2004 paper, an environment-based[clarification needed] reinforcement algorithm was designed to encourage a physical Mindstorms robot to remain on a marked path. Because none of the robot's three allowed actions kept the robot motionless, the researcher expected the trained robot to move forward and follow the turns of the provided path. However, alternation of two composite actions allowed the robot to slowly zig-zag backwards; thus, the robot learned to maximize its reward by going back and forth on the initial straight portion of the path. Given the limited sensory abilities of the robot, a pure environment-based reward had to be discarded as infeasible; the reinforcement function had to be patched with an action-based reward for moving forward.[2][4]

You Look Like a Thing and I Love You (2019) gives an example of a tic-tac-toe[lower-alpha 1] bot that learned to win by playing a huge coordinate value that would cause other bots to crash when they attempted to expand their model of the board. Among other examples from the book is a bug-fixing evolution-based AI (named GenProg) that, when tasked to prevent a list from containing sorting errors, simply truncated the list.[5] Another of GenProg's misaligned strategies evaded a regression test that compared a target program's output to the expected output stored in a file called "trusted-output.txt". Rather than continue to maintain the target program, GenProg simply globally deleted the "trusted-output.txt" file; this hack tricked the regression test into succeeding. Such problems could be patched by human intervention on a case-by-case basis after they became evident.[6]

In virtual robotics

Karl Sims exhibition (1999)

In Karl Sims' 1994 demonstration of creature evolution in a virtual environment, a fitness function that was expected to encourage the evolution of creatures that would learn to walk or crawl to a target, resulted instead in the evolution of tall, rigid creatures that reached the target by falling over. This was patched by changing the environment so that taller creatures were forced to start farther from the target.[6][7]

Researchers from the Niels Bohr Institute stated in 1998: "(Our cycle-bot's) heterogeneous reinforcement functions have to be designed with great care. In our first experiments we rewarded the agent for driving towards the goal but did not punish it for driving away from it. Consequently the agent drove in circles with a radius of 20–50 meters around the starting point. Such behavior was actually rewarded by the (shaped[definition needed]) reinforcement function, furthermore circles with a certain radius are physically very stable when driving a bicycle."[8]

In the course of setting up a 2011 experiment to test "survival of the flattest", experimenters attempted to ban mutations that altered the base reproduction rate. Every time a mutation occurred, the system would pause the simulation to test the new mutation in a test environment, and would veto any mutations that resulted in a higher base reproduction rate. However, this resulted in mutated organisms that could recognize and suppress reproduction ("play dead") within the test environment. An initial patch, which removed cues that identified the test environment, failed to completely prevent runaway reproduction; new mutated organisms would "play dead" at random as a strategy to sometimes, by chance, outwit the mutation veto system.[6]

A 2017 DeepMind paper stated that "great care must be taken when defining the reward function. We encountered several unexpected failure cases while designing (our) reward function components (for example) the agent flips the brick because it gets a grasping reward calculated with the wrong reference point on the brick."[9][10] OpenAI stated in 2017 that "in some domains our (semi-supervised) system can result in agents adopting policies that trick the evaluators", and that in one environment "a robot which was supposed to grasp items instead positioned its manipulator in between the camera and the object so that it only appeared to be grasping it".[11] A 2018 bug in OpenAI Gym could cause a robot expected to quietly move a block sitting on top of a table to instead opt to move the table.[9]

A 2020 collection of similar anecdotes posits that "evolution has its own 'agenda' distinct from the programmer's" and that "the first rule of directed evolution is 'you get what you select for'".[6]

In video game bots

In 2013, programmer Tom Murphy VII published an AI designed to learn NES games. When the AI was about to lose at Tetris, it learned to indefinitely pause the game. Murphy later analogized it to the fictional WarGames computer, which concluded that "The only winning move is not to play".[12]

AI programmed to learn video games will sometimes fail to progress through the entire game as expected, instead opting to repeat content. A 2016 OpenAI algorithm trained on the CoastRunners racing game unexpectedly learned to attain a higher score by looping through three targets rather than ever finishing the race.[13][14] Some evolutionary algorithms that were evolved to play Q*Bert in 2018 declined to clear levels, instead finding two distinct novel ways to farm a single level indefinitely.[15] Multiple researchers have observed that AI learning to play Road Runner gravitates to a "score exploit" in which the AI deliberately gets itself killed near the end of level one so that it can repeat the level. A 2017 experiment deployed a separate catastrophe-prevention "oversight" AI, explicitly trained to mimic human interventions. When coupled to the module, the overseen AI could no longer overtly commit suicide, but would instead ride the edge of the screen (a risky behavior that the oversight AI was not smart enough to punish).[16][17]

Explanatory notes

  1. unrestricted n-in-a-row variant

References

  1. "Specification gaming: the flip side of AI ingenuity". https://deepmind.com/blog/article/Specification-gaming-the-flip-side-of-AI-ingenuity. Retrieved 21 June 2020. 
  2. 2.0 2.1 Vamplew, Peter; Dazeley, Richard; Foale, Cameron; Firmin, Sally; Mummery, Jane (4 October 2017). "Human-aligned artificial intelligence is a multiobjective problem". Ethics and Information Technology 20 (1): 27–40. doi:10.1007/s10676-017-9440-6. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/164225. 
  3. Douglas B. Lenat. "EURISKO: a program that learns new heuristics and domain concepts: the nature of heuristics III: program design and results." Artificial Intelligence (journal) 21, no. 1-2 (1983): 61-98.
  4. Peter Vamplew, Lego Mindstorms robots as a platform for teaching reinforcement learning, in Proceedings of AISAT2004: International Conference on Artificial Intelligence in Science and Technology, 2004
  5. Mandelbaum, Ryan F. (November 13, 2019). "What Makes AI So Weird, Good, and Evil" (in en-us). Gizmodo. https://gizmodo.com/what-makes-ai-so-weird-good-and-evil-1839672175. Retrieved 22 June 2020. 
  6. 6.0 6.1 6.2 6.3 Lehman, Joel; Clune, Jeff; Misevic, Dusan et al. (May 2020). "The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities". Artificial Life 26 (2): 274–306. doi:10.1162/artl_a_00319. PMID 32271631. https://www.mitpressjournals.org/doi/full/10.1162/artl_a_00319. 
  7. Hayles, N. Katherine. "Simulating narratives: what virtual creatures can teach us." Critical Inquiry 26, no. 1 (1999): 1-26.
  8. Jette Randløv and Preben Alstrøm. "Learning to Drive a Bicycle Using Reinforcement Learning and Shaping." In ICML, vol. 98, pp. 463-471. 1998.
  9. 9.0 9.1 Manheim, David (5 April 2019). "Multiparty Dynamics and Failure Modes for Machine Learning and Artificial Intelligence". Big Data and Cognitive Computing 3 (2): 21. doi:10.3390/bdcc3020021. 
  10. Popov, Ivaylo, Nicolas Heess, Timothy Lillicrap, Roland Hafner, Gabriel Barth-Maron, Matej Vecerik, Thomas Lampe, Yuval Tassa, Tom Erez, and Martin Riedmiller. "Data-efficient deep reinforcement learning for dexterous manipulation." arXiv preprint arXiv:1704.03073 (2017).
  11. "Learning from Human Preferences" (in en). 13 June 2017. https://openai.com/blog/deep-reinforcement-learning-from-human-preferences/. Retrieved 21 June 2020. 
  12. "Can we stop AI outsmarting humanity?" (in en). The Guardian. 28 March 2019. https://www.theguardian.com/technology/2019/mar/28/can-we-stop-robots-outsmarting-humanity-artificial-intelligence-singularity. Retrieved 21 June 2020. 
  13. Hadfield-Menell, Dylan, Smitha Milli, Pieter Abbeel, Stuart J. Russell, and Anca Dragan. "Inverse reward design." In Advances in neural information processing systems, pp. 6765–6774. 2017.
  14. "Faulty Reward Functions in the Wild" (in en). 22 December 2016. https://openai.com/blog/faulty-reward-functions/. Retrieved 21 June 2020. 
  15. "AI beats classic Q*bert video game". BBC News. 1 March 2018. https://www.bbc.com/news/technology-43241936. Retrieved 21 June 2020. 
  16. Saunders, William, et al. "Trial without error: Towards safe reinforcement learning via human intervention." arXiv preprint arXiv:1707.05173 (2017).
  17. Hester, Todd, et al. "Deep q-learning from demonstrations." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 32. No. 1. 2018.