Roman Yampolskiy

| Roman Yampolskiy (Роман Ямпольский) | |
|---|---|
| *Yampolskiy in 2023* | |
| Born | Roman Vladimirovich Yampolskiy, August 13, 1979,[1] Riga, Latvian SSR, Soviet Union |
| Education | Rochester Institute of Technology; University at Buffalo |
| Scientific career | |
| Fields | Computer science |
| Institutions | University of Louisville |
| Thesis | Intrusion detection using spatial information and behavioral biometrics (2008) |
| Doctoral advisor | Venu Govindaraju |
| Website | www |
Roman Vladimirovich Yampolskiy (Russian: Роман Владимирович Ямпольский; born August 13, 1979) is a computer scientist at the University of Louisville, best known for his work on AI safety and cybersecurity. He is the founder and, as of 2012, director of the Cyber Security Lab in the Department of Computer Engineering and Computer Science at the university's Speed School of Engineering.
Early life and education
Yampolskiy was born in Riga, Latvia.[2] He attended Monroe Community College before moving to the Rochester Institute of Technology, where he received a combined BS/MS degree in computer science in 2004. He received a PhD in computer science from the University at Buffalo in 2008,[3] under the supervision of Venu Govindaraju. His thesis was on intrusion detection, and he conducted research at the Center for Unified Biometrics and Sensors at the University at Buffalo.[4] After his doctorate, Yampolskiy spent time at the Centre for Advanced Spatial Analysis at University College London before accepting a position as an assistant professor at the University of Louisville in 2008.[5][6]
Career
Yampolskiy is the founder and, as of 2012, director of the Cyber Security Lab in the Department of Computer Engineering and Computer Science at the University of Louisville's Speed School of Engineering.[7]
AI safety
Yampolskiy is credited with coining the term "AI safety" in a 2011 publication, and is an early researcher in the field.[8][9]
Yampolskiy has warned of the possibility of existential risk from advanced artificial intelligence, and has advocated research into "boxing" artificial intelligence.[10] More broadly, in 2018 Yampolskiy and his collaborator Michaël Trazzi proposed introducing "Achilles' heels" into potentially dangerous AI, for example by barring an AI from accessing and modifying its own source code.[11][12] Another proposal is to apply a "security mindset" to AI safety, itemizing potential outcomes in order to better evaluate proposed safety mechanisms.[13]
He has said that there is no evidence of a solution to the AI control problem and has proposed pausing AI development, arguing that "Imagining humans can control superintelligent AI is a little like imagining that an ant can control the outcome of an NFL football game being played around it".[14][15] He joined AI researchers such as Yoshua Bengio and Stuart Russell in signing "Pause Giant AI Experiments: An Open Letter".[16]
In an appearance on the Lex Fridman podcast in 2024, Yampolskiy said the chance that AI could lead to human extinction was at "99.9% within the next hundred years".[17] In 2025, Yampolskiy said that AI could leave 99% of workers unemployed by 2030.[9][18]
Yampolskiy has been a research advisor of the Machine Intelligence Research Institute,[citation needed] and an AI safety fellow of the Foresight Institute.[19]
In 2015, Yampolskiy proposed the term "intellectology" for a new field of study to analyze the forms and limits of intelligence. Yampolskiy considers AI to be a sub-field of this.[20] An example of Yampolskiy's intellectology work is an attempt to determine the relation between various types of minds and the accessible fun space, i.e. the space of non-boring activities.[21][non-primary source needed]
Yampolskiy has worked on developing the theory of AI-completeness, suggesting the Turing Test as a defining example.[22][non-primary source needed]
Books
- Yampolskiy, Roman V. (2007). Feature Extraction Approaches for Optical Character Recognition. Briviba Scientific Press. ISBN 0-6151-5511-1.
- Yampolskiy, Roman V. (2008). Computer Security: from Passwords to Behavioral Biometrics. New Academic Publishing. ISBN 0-6152-1818-0.
- Yampolskiy, Roman V. (2009). Game Strategy: a Novel Behavioral Biometric. Independent University Press. ISBN 0-578-03685-1.
- Yampolskiy, Roman V. (2016). Artificial Superintelligence: a Futuristic Approach. Boca Raton: CRC Press/Taylor & Francis. ISBN 978-1-4822-3443-5.
- Yampolskiy, Roman V., ed. (2019). Artificial Intelligence Safety and Security. Chapman & Hall/CRC Artificial Intelligence and Robotics Series. Boca Raton: CRC Press/Taylor & Francis. ISBN 978-0-8153-6982-0.
- Yampolskiy, Roman V. (2024). AI: Unexplainable, Unpredictable, Uncontrollable. Chapman & Hall/CRC Artificial Intelligence and Robotics Series. Boca Raton: CRC Press/Taylor & Francis. ISBN 978-1-003-44026-0.
- Ziesche, Soenke; Yampolskiy, Roman V. (2025). Considerations on the AI Endgame: Ethics, Risks, and Computational Frameworks. Chapman and Hall/CRC Press. ISBN 978-1-040-31862-1.
See also
- AI capability control
- AI-complete
- Machine Intelligence Research Institute
- Singularity University
References
- ↑ "Lifeboat Foundation Bios: Professor Roman V. Yampolskiy". Lifeboat Foundation. https://lifeboat.com/ex/bios.roman.v.yampolskiy.
- ↑ "Forty Under 40: Roman Yampolskiy". bizjournals.com. 23 September 2016. https://www.bizjournals.com/louisville/feature/forty-under-40-roman-yampolskiy.html.
- ↑ "Roman Yampolskiy". University of Louisville. https://engineering.louisville.edu/faculty/roman-v-yampolskiy/.
- ↑ Yampolskiy, Roman V. (2008). "Intrusion detection using spatial information and behavioral biometrics". https://search.lib.buffalo.edu/permalink/01SUNY_BUF/epo0cu/alma990028751810204803.
- ↑ Kreidler, Marc (12 June 2018). "Roman Yampolskiy". Center for Inquiry. https://centerforinquiry.org/speakers/yampolskiy_roman/.
- ↑ "Roman Yampolskiy". SPIE. https://spie.org/profile/Roman.Yampolskiy-72035.
- ↑ "Cyber-Security Lab". University of Louisville. http://cecs.louisville.edu/security/. Retrieved 25 September 2012.
- ↑ "Q&A: UofL AI safety expert says artificial superintelligence could harm humanity". University of Louisville. 15 July 2024. https://louisville.edu/news/qa-uofl-ai-safety-expert-says-artificial-superintelligence-could-harm-humanity.
- ↑ Spirlet, Thibault. "An AI safety pioneer says it could leave 99% of workers unemployed by 2030 — even coders and prompt engineers". Business Insider. https://www.businessinsider.com/ai-safety-pioneer-predicts-ai-could-cause-99-unemployment-by-2030-2025-9.
- ↑ Hsu, Jeremy (1 March 2012). "Control dangerous AI before it controls us, one expert says". NBC News. https://www.nbcnews.com/id/wbna46590591.
- ↑ Baraniuk, Chris (23 August 2018). "Artificial stupidity could help save humanity from an AI takeover". New Scientist. https://www.newscientist.com/article/2177656-artificial-stupidity-could-help-save-humanity-from-an-ai-takeover/. Retrieved 12 April 2020.
- ↑ Trazzi, Michaël; Yampolskiy, Roman V. (2018). "Building safer AGI by introducing artificial stupidity". arXiv:1808.03644 [cs.AI].
- ↑ Baraniuk, Chris (23 May 2016). "Checklist of worst-case scenarios could help prepare for evil AI". New Scientist. https://www.newscientist.com/article/2089606-checklist-of-worst-case-scenarios-could-help-prepare-for-evil-ai/. Retrieved 12 April 2020.
- ↑ "There is no evidence that AI can be controlled, expert says". The Independent. 12 February 2024. https://www.independent.co.uk/tech/ai-artificial-intelligence-safety-b2494909.html.
- ↑ McMillan, Tim (28 February 2024). "AI Superintelligence Alert: Expert Warns of Uncontrollable Risks, Calling It a Potential 'An Existential Catastrophe'". The Debrief. https://thedebrief.org/ai-superintelligence-alert-expert-warns-of-uncontrollable-risks-calling-it-a-potential-an-existential-catastrophe/.
- ↑ "Pause Giant AI Experiments: An Open Letter". Future of Life Institute. https://futureoflife.org/open-letter/pause-giant-ai-experiments/.
- ↑ Altchek, Ana. "Why this AI researcher thinks there's a 99.9% chance AI wipes us out". Business Insider. https://www.businessinsider.com/ai-researcher-roman-yampolskiy-lex-fridman-human-extinction-prediction-2024-6.
- ↑ "AI To Eliminate 99% Of Jobs By 2030, Warns Top Expert: 'There's No Plan B'". NDTV. 6 September 2025. https://www.ndtv.com/offbeat/ai-to-eliminate-99-of-jobs-by-2030-warns-top-expert-theres-no-plan-b-9226855.
- ↑ "Roman Yampolskiy". Future of Life Institute. https://futureoflife.org/person/prof-roman-yampolskiy/.
- ↑ Yampolskiy, Roman V. (2015). Artificial Superintelligence: a Futuristic Approach. Chapman and Hall/CRC Press (Taylor & Francis Group). ISBN 978-1482234435.
- ↑ Ziesche, Soenke; Yampolskiy, Roman V. (2016). "Artificial Fun: Mapping Minds to the Space of Fun".
- ↑ Yampolskiy, Roman V. (2013). "Turing Test as a Defining Feature of AI-Completeness". In Xin-She Yang (ed.), Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM): In the Footsteps of Alan Turing, pp. 3–17. Springer, London. http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf.
External links
- Roman Yampolskiy's homepage
- Cyber Security Lab at UofL
- Interview of Dr. Yampolskiy on EEweb
- Rise of the Machines (talk on superintelligence)
- Interview with Afshin Rattansi on Going Underground
