Nick Bostrom
| Nick Bostrom | |
|---|---|
| Nick Bostrom, 2014 | |
| Born | Niklas Boström, 10 March 1973, Helsingborg, Sweden |
| Era | Contemporary philosophy |
| Region | Western philosophy |
| School | Analytic philosophy[1] |
| Institutions | St Cross College, Oxford; Future of Humanity Institute |
| Thesis | Observational Selection Effects and Probability |
| Main interests | Philosophy of artificial intelligence; bioethics |
| Notable ideas | Anthropic bias; reversal test; simulation hypothesis; existential risk; singleton; ancestor simulation |
| Website | NickBostrom.com |
Nick Bostrom (English: /ˈbɒstrəm/; Swedish: Niklas Boström, IPA: [²buːstrœm]; born 10 March 1973)[2] is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Programme on the Impacts of Future Technology,[3] and he is currently the founding director of the Future of Humanity Institute[4] at Oxford University.
Bostrom is the author of over 200 publications,[5] including Superintelligence (2014), a New York Times bestseller,[6] and Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002).[7] In 2009 and 2015, he was included in Foreign Policy's Top 100 Global Thinkers list.[8][9] Bostrom is best known for arguing that, although artificial intelligence offers potentially great benefits, it may pose a catastrophic risk to humanity if the problems of control and alignment are not solved before artificial general intelligence is developed. His work on superintelligence, and his concern about the existential risk it poses to humanity over the coming century, have led both Elon Musk and Bill Gates to express similar views.[10][11]
Biography
Born Niklas Boström in 1973[12] in Helsingborg, Sweden,[5] he disliked school at a young age and ended up spending his last year of high school learning from home. He sought to educate himself in a wide variety of disciplines, including anthropology, art, literature, and science.[1] Despite what has been called a "serious mien", he once did some turns on London's stand-up comedy circuit.[5]
He holds a B.A. in philosophy, mathematics, logic and artificial intelligence from the University of Gothenburg, as well as master's degrees in philosophy and physics from Stockholm University and in computational neuroscience from King's College London. During his time at Stockholm University, he researched the relationship between language and reality by studying the analytic philosopher W. V. Quine.[1] In 2000, he was awarded a PhD in philosophy from the London School of Economics. He held a teaching position at Yale University (2000–2002), and he was a British Academy Postdoctoral Fellow at the University of Oxford (2002–2005).[7][13]
Views
Existential risk
Aspects of Bostrom's research concern the future of humanity and long-term outcomes.[14][15] He introduced the concept of an existential risk,[1] which he defines as one in which an "adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential." In the 2008 volume Global Catastrophic Risks, editors Bostrom and Milan Ćirković characterize the relation between existential risk and the broader class of global catastrophic risks, and link existential risk to observer selection effects[16] and the Fermi paradox.[17][18]
In 2005, Bostrom founded the Future of Humanity Institute,[1] which researches the far future of human civilization. He is also an adviser to the Centre for the Study of Existential Risk.[15]
Superintelligence
Human vulnerability in relation to advances in AI
In his 2014 book Superintelligence, Bostrom reasoned that "the creation of a superintelligent being represents a possible means to the extinction of mankind".[19] Bostrom argues that a computer with near human-level general intellectual ability could initiate an intelligence explosion on a digital time scale, with the resultant rapid creation of something so powerful that it might deliberately or accidentally destroy humankind.[20] Bostrom contends that the power of a superintelligence would be so great that a task given to it by humans might be taken to open-ended extremes: for example, a goal of calculating pi could collaterally cause nanotechnology-manufactured facilities to sprout over the entire Earth's surface and cover it within days.[21] He believes the existential risk to humanity would be greatest almost immediately after superintelligence is brought into being, thus creating the exceedingly difficult problem of working out how to control such an entity before it actually exists.[20]
Warning that a human-friendly prime directive for AI would rely on the absolute correctness of the human knowledge it was based on, Bostrom points to the lack of agreement among philosophers as an indication that most philosophers must be wrong, and to the possibility that a fundamental concept of current science may be incorrect. Bostrom says there are few precedents to guide an understanding of what pure, non-anthropocentric rationality would dictate for a potential singleton AI being held in quarantine.[22] Noting that both John von Neumann and Bertrand Russell advocated a nuclear strike, or the threat of one, to prevent the Soviets acquiring the atomic bomb, Bostrom says the relatively unlimited means of a superintelligence might lead its analysis along different lines from the evolved "diminishing returns" assessments that give humans a basic aversion to risk.[23] Group selection in predators working by means of cannibalism shows the counter-intuitive nature of non-anthropocentric "evolutionary search" reasoning, and so humans are ill-equipped to perceive what an artificial intelligence's intentions would be.[24] Accordingly, it cannot be discounted that any superintelligence would ineluctably pursue an "all or nothing" offensive strategy in order to achieve hegemony and assure its survival.[25] Bostrom notes that even current programs have, "like MacGyver", hit on apparently unworkable but functioning hardware solutions, making robust isolation of a superintelligence problematic.[26]
Illustrative scenario for takeover
A machine with general intelligence far below human level, but with superior mathematical abilities, is created.[27] Keeping the AI in isolation from the outside world, especially the internet, humans pre-program it so that it always works from basic principles that will keep it under human control. Other safety measures include "boxing" the AI (running it in a virtual-reality simulation) and using it only as an "oracle" that answers carefully defined questions with limited replies (to prevent it from manipulating humans).[20] A cascade of recursive self-improvement solutions feeds an intelligence explosion in which the AI attains superintelligence in some domains. The AI's superintelligent power goes beyond human knowledge to discover flaws in the science that underlies its friendly-to-humanity programming, which ceases to work as intended. Purposeful agent-like behavior emerges, along with a capacity for self-interested strategic deception. The AI manipulates human beings into implementing modifications to itself that are ostensibly for augmenting its (feigned) modest capabilities, but that actually function to free the superintelligence from its "boxed" isolation.[28]
Employing online humans as paid dupes and clandestinely hacking computer systems, including automated laboratory facilities, the superintelligence mobilises resources to further a takeover plan. Bostrom emphasises that planning by a superintelligence would not be so clumsy that humans could detect actual weaknesses in it.[29]
Although he canvasses disruption of international economic, political and military stability, including hacked nuclear missile launches, Bostrom thinks the most effective and likely means for a superintelligence would be a coup de main using weapons several generations more advanced than the current state of the art. He suggests nanofactories covertly distributed at undetectable concentrations in every square metre of the globe, producing a worldwide flood of human-killing devices on command.[30][27] Once a superintelligence has achieved world domination, humankind would be relevant only as a resource for the achievement of the AI's objectives ("Human brains, if they contain information relevant to the AI’s goals, could be disassembled and scanned, and the extracted data transferred to some more efficient and secure storage format").[31] One journalist wrote in a review that Bostrom's "nihilistic" speculations indicate he "has been reading too much of the science fiction he professes to dislike".[30]
Open letter
In January 2015, Bostrom joined Stephen Hawking, among others, in signing the Future of Life Institute's open letter warning of the potential dangers of AI.[32] The signatories "...believe that research on how to make AI systems robust and beneficial is both important and timely, and that concrete research should be pursued today."[33]
Anthropic reasoning
Bostrom has published numerous articles on anthropic reasoning, as well as the book Anthropic Bias: Observation Selection Effects in Science and Philosophy. In the book, he criticizes previous formulations of the anthropic principle, including those of Brandon Carter, John Leslie, John Barrow, and Frank Tipler.[34]
Bostrom believes that the mishandling of indexical information is a common flaw in many areas of inquiry (including cosmology, philosophy, evolution theory, game theory, and quantum physics). He argues that a theory of anthropics is needed to deal with these. He introduces the Self-Sampling Assumption (SSA) and the Self-Indication Assumption (SIA), shows how they lead to different conclusions in a number of cases, and points out that each is affected by paradoxes or counterintuitive implications in certain thought experiments. He suggests that a way forward may involve extending SSA into the Strong Self-Sampling Assumption (SSSA), which replaces "observers" in the SSA definition with "observer-moments".
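The divergence between SSA and SIA can be made concrete with a toy "incubator" case of the kind discussed in this literature: a fair coin creates one observer on heads and two on tails, and a newly created observer asks how likely heads is. The snippet below is an illustrative sketch of that standard example, not code or a calculation taken from Bostrom's own work:

```python
from fractions import Fraction

# Toy incubator case: a fair coin is tossed; heads creates 1 observer,
# tails creates 2. What credence should a newly created observer give to heads?
worlds = {
    "heads": {"prior": Fraction(1, 2), "observers": 1},
    "tails": {"prior": Fraction(1, 2), "observers": 2},
}

# SSA: reason as if randomly sampled from the observers within each possible
# world; with no further information, the prior over worlds is unchanged.
ssa = {w: d["prior"] for w, d in worlds.items()}

# SIA: weight each world by how many observers it contains, then renormalize.
weights = {w: d["prior"] * d["observers"] for w, d in worlds.items()}
total = sum(weights.values())
sia = {w: weight / total for w, weight in weights.items()}

print("SSA credence in heads:", ssa["heads"])  # 1/2
print("SIA credence in heads:", sia["heads"])  # 1/3
```

That the two assumptions already disagree (1/2 versus 1/3) in so simple a case illustrates the kind of divergence Bostrom examines when tracing their further consequences.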
In later work, he has described the phenomenon of anthropic shadow, an observation selection effect that prevents observers from observing certain kinds of catastrophes in their recent geological and evolutionary past.[35] Catastrophe types that lie in the anthropic shadow are likely to be underestimated unless statistical corrections are made.
Simulation argument
Bostrom's simulation argument posits that at least one of the following statements is very likely to be true (a probabilistic sketch follows the list):[36][37]
- The fraction of human-level civilizations that reach a posthuman stage is very close to zero;
- The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero;
- The fraction of all people with our kind of experiences that are living in a simulation is very close to one.
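The trilemma rests on simple bookkeeping about the expected fraction of simulated observers. The following sketch uses notation modelled on the 2003 paper cited above; the exact symbols here are chosen for illustration:

```latex
% f_P       : fraction of human-level civilizations that reach a posthuman stage
% f_I       : fraction of posthuman civilizations interested in running ancestor-simulations
% \bar{N}_I : average number of ancestor-simulations run by an interested civilization
% f_sim     : fraction of all observers with human-type experiences who are simulated
f_{\mathrm{sim}} \;=\; \frac{f_P \, f_I \, \bar{N}_I}{f_P \, f_I \, \bar{N}_I \;+\; 1}
```

Because \bar{N}_I would be astronomically large for any civilization that runs ancestor-simulations at all, f_sim is close to one unless f_P or f_I is close to zero, which is precisely the three-way disjunction above.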
The idea has influenced the views of Elon Musk.[38]
Ethics of human enhancement
Bostrom is favorable towards "human enhancement", or "self-improvement and human perfectibility through the ethical application of science",[39][40] and is a critic of bio-conservative views.[41]
In 1998, Bostrom co-founded (with David Pearce) the World Transhumanist Association[39] (which has since changed its name to Humanity+). In 2004, he co-founded (with James Hughes) the Institute for Ethics and Emerging Technologies, although he is no longer involved in either of these organisations. Bostrom was named in Foreign Policy's 2009 list of top global thinkers "for accepting no limits on human potential."[42]
With philosopher Toby Ord, he proposed the reversal test. Given humans' irrational status quo bias, how can one distinguish between valid criticisms of proposed changes in a human trait and criticisms merely motivated by resistance to change? The reversal test attempts to do this by asking whether it would be a good thing if the trait were altered in the opposite direction. If a change in either direction is judged bad, the burden falls on the critic to explain why the current value of the trait happens to be optimal; failing that, the criticism is suspected of status quo bias.[43]
Technology strategy
He has suggested that technology policy aimed at reducing existential risk should seek to influence the order in which various technological capabilities are attained, proposing the principle of differential technological development. This principle states that we ought to retard the development of dangerous technologies, particularly ones that raise the level of existential risk, and accelerate the development of beneficial technologies, particularly those that protect against the existential risks posed by nature or by other technologies.[44][45]
Bostrom's theory of the unilateralist's curse[46] has been cited as a reason for the scientific community to avoid controversial, dangerous research such as reanimating pathogens.[47]
Policy and consultations
Bostrom has provided policy advice and consulted for an extensive range of governments and organisations. He gave evidence to the House of Lords Select Committee on Digital Skills.[48] He is an advisory board member for the Machine Intelligence Research Institute,[49] the Future of Life Institute,[50] and the Foundational Questions Institute,[51] and an external advisor for the Cambridge Centre for the Study of Existential Risk.[52][53]
Bibliography
Books
- 2002 – Anthropic Bias: Observation Selection Effects in Science and Philosophy, ISBN 0-415-93858-9
- 2008 – Global Catastrophic Risks, edited by Bostrom and Milan M. Ćirković, ISBN 978-0-19-857050-9
- 2009 – Human Enhancement, edited by Bostrom and Julian Savulescu, ISBN 0-19-929972-2
- 2014 – Superintelligence: Paths, Dangers, Strategies, ISBN 978-0-19-967811-2
Journal articles (selected)
- Bostrom, Nick (1998). "How Long Before Superintelligence?". Journal of Future Studies 2. http://www.nickbostrom.com/superintelligence.html.
- with Tegmark, Max (December 2005). "How Unlikely is a Doomsday Catastrophe?". Nature 438 (7069): 754. doi:10.1038/438754a. PMID 16341005. Bibcode: 2005Natur.438..754T.
- with Ord, Toby (July 2006). "The Reversal Test: Eliminating Status Quo Bias in Applied Ethics". Ethics 116 (4): 656–680. doi:10.1086/505233. http://www.nickbostrom.com/ethics/statusquo.pdf.
- with Sandberg, Anders (December 2006). "Converging Cognitive Enhancements". Annals of the New York Academy of Sciences 1093 (1): 201–207. doi:10.1196/annals.1382.015. Bibcode: 2006NYASA1093..201S. http://www.nickbostrom.com/papers/converging.pdf.
- with Sandberg, Anders (September 2009). "Cognitive Enhancement: Methods, Ethics, Regulatory Challenges". Science and Engineering Ethics 15 (3): 311–341. doi:10.1007/s11948-009-9142-5. PMID 19543814. http://www.nickbostrom.com/cognitive.pdf.
- with Ćirković, Milan; Sandberg, Anders (2010). "Anthropic Shadow: Observation Selection Effects and Human Extinction Risks". Risk Analysis 30 (10): 1495–1506. doi:10.1111/j.1539-6924.2010.01460.x. http://www.nickbostrom.com/papers/anthropicshadow.pdf.
- Bostrom, Nick (2011). "THE ETHICS OF ARTIFICIAL INTELLIGENCE". Cambridge Handbook of Artificial Intelligence. http://www.nickbostrom.com/ethics/artificial-intelligence.pdf.
- Bostrom, Nick (2011). "Infinite Ethics". Analysis and Metaphysics 10: 9–59. http://www.nickbostrom.com/ethics/infinite.pdf.
- with Shulman, Carl (2012). "How Hard is AI? Evolutionary Arguments and Selection Effects". J. Consciousness Studies 19 (7–8): 103–130. http://www.nickbostrom.com/aievolution.pdf.
- with Armstrong, Stuart; Sandberg, Anders (November 2012). "Thinking Inside the Box: Controlling and Using Oracle AI". Minds and Machines 22 (4): 299–324. doi:10.1007/s11023-012-9282-2. http://www.nickbostrom.com/papers/oracle.pdf.
- with Shulman, Carl (February 2014). "Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer?". Global Policy 5 (1): 85–92. doi:10.1111/1758-5899.12123. http://www.nickbostrom.com/papers/embryo.pdf.
- with Muehlhauser, Luke (2014). "Why we need friendly AI". Think 13 (36): 41–47. doi:10.1017/S1477175613000316. http://www.nickbostrom.com/views/whyfriendlyai.pdf.
See also
- Doomsday argument
- Dream argument
- Effective altruism
- Global catastrophic risk
- Pascal's mugging
- Simulation hypothesis
- Simulated reality
References
- ↑ 1.0 1.1 1.2 1.3 1.4 Khatchadourian, Raffi (23 November 2015). "The Doomsday Invention". The New Yorker (Condé Nast) XCI (37): 64–79. ISSN 0028-792X. http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom.
- ↑ "nickbostrom.com". Nickbostrom.com. http://www.nickbostrom.com/cv.html. Retrieved 16 October 2014.
- ↑ "Professor Nick Bostrom : People". Oxford Martin School. http://www.oxfordmartin.ox.ac.uk/people/22. Retrieved 16 October 2014.
- ↑ "Future of Humanity Institute – University of Oxford". Fhi.ox.ac.uk. http://www.fhi.ox.ac.uk/. Retrieved 16 October 2014.
- ↑ 5.0 5.1 5.2 Thornhill, John (14 July 2016). "Artificial intelligence: can we control it?". Financial Times. http://www.ft.com/cms/s/0/46d12e7c-4948-11e6-b387-64ab0a67014c.html. Retrieved 10 August 2016. (Subscription required.)
- ↑ "Best Selling Science Books". The New York Times. https://www.nytimes.com/2014/09/09/science/best-selling-science-books.html?module=Search&mabReward=relbias%3As&_r=0. Retrieved 19 February 2015.
- ↑ 7.0 7.1 "Nick Bostrom on artificial intelligence". Oxford University Press. 8 September 2014. http://blog.oup.com/2014/09/interview-nick-bostrom-superintelligence/. Retrieved 4 March 2015.
- ↑ Frankel, Rebecca. "The FP Top 100 Global Thinkers". https://foreignpolicy.com/2009/11/25/the-fp-top-100-global-thinkers-7/. Retrieved 5 September 2015.
- ↑ "Nick Bostrom: For sounding the alarm on our future computer overlords.". Foreign Policy magazine. http://2015globalthinkers.foreignpolicy.com/#advocates/detail/bostrom.
- ↑ "Bill Gates Is Worried About the Rise of the Machines". The Fiscal Times. http://www.thefiscaltimes.com/2015/01/28/Bill-Gates-Worried-About-Rise-Machines. Retrieved 19 February 2015.
- ↑ Bratton, Benjamin H. (23 February 2015). "Outing A.I.: Beyond the Turing Test". The New York Times. http://opinionator.blogs.nytimes.com/2015/02/23/outing-a-i-beyond-the-turing-test/?_r=0. Retrieved 4 March 2015.
- ↑ Kurzweil, Ray (2012). How to create a mind the secret of human thought revealed. New York: Viking. ISBN 9781101601105.
- ↑ "Nick Bostrom : CV" (PDF). Nickbostrom.com. http://www.nickbostrom.com/cv.pdf. Retrieved 16 October 2014.
- ↑ Bostrom, Nick (March 2002). "Existential Risks". Journal of Evolution and Technology 9. http://www.jetpress.org/volume9/risks.html.
- ↑ 15.0 15.1 Andersen, Ross. "Omens". Aeon Media Ltd.. http://aeon.co/magazine/philosophy/ross-andersen-human-extinction/. Retrieved 5 September 2015.
- ↑ Tegmark, Max; Bostrom, Nick (2005). "Astrophysics: is a doomsday catastrophe likely?" (PDF). Nature 438 (7069): 754. doi:10.1038/438754a. PMID 16341005. Bibcode: 2005Natur.438..754T. http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0019/5923/How_Unlikely_is_a_Doomsday_Catastrophe_plus_Supplementary_Materials.pdf.
- ↑ Bostrom, Nick (May–June 2008). "Where are they? Why I Hope the Search for Extraterrestrial Life Finds Nothing" (PDF). MIT Technology Review: 72–77. http://www.nickbostrom.com/extraterrestrial.pdf.
- ↑ Overbye, Dennis (August 3, 2015). "The Flip Side of Optimism About Life on Other Planets". The New York Times. https://www.nytimes.com/2015/08/04/science/space/the-flip-side-of-optimism-about-life-on-other-planets.html. Retrieved October 29, 2015.
- ↑ Thorn, Paul D. (1 January 2015). "Nick Bostrom: Superintelligence: Paths, Dangers, Strategies". Minds and Machines 25 (3): 285–289. doi:10.1007/s11023-015-9377-7. https://philpapers.org/rec/THONBS. Retrieved 17 March 2017.
- ↑ 20.0 20.1 20.2 Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.
- ↑ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.
- ↑ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press. pp. 104–108.
- ↑ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press. pp. 106–108.
- ↑ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press. pp. 148–152.
- ↑ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press. pp. 126–130.
- ↑ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press. pp. 135–142.
- ↑ 27.0 27.1 Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press. pp. 115–118.
- ↑ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press. pp. 103–116.
- ↑ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press. pp. 98–111.
- ↑ 30.0 30.1 Adams, Tim (12 June 2016). "Artificial intelligence: ‘We’re like children playing with a bomb’". The Observer.
- ↑ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press. p. 118.
- ↑ Loos, Robert (23 January 2015). "Artificial Intelligence and The Future of Life". Robotics Today. http://www.roboticstoday.com/news/open-letter-from-the-future-of-life-3103. Retrieved 17 March 2017.
- ↑ "The Future of Life Institute Open Letter". The Future of Life Institute. http://futureoflife.org/misc/open_letter. Retrieved 4 March 2015.
- ↑ Bostrom, Nick (2002). Anthropic Bias: Observation Selection Effects in Science and Philosophy. New York: Routledge. pp. 44–58. ISBN 0-415-93858-9. http://www.anthropic-principle.com/sites/anthropic-principle.com/files/pdfs/anthropicbias.pdf. Retrieved 22 July 2014.
- ↑ "Anthropic Shadow: Observation Selection Effects and Human Extinction Risks" (PDF). Nickbostrom.com. http://www.nickbostrom.com/papers/anthropicshadow.pdf. Retrieved 16 October 2014.
- ↑ Bostrom, Nick (19 January 2010). "Are You Living in a Computer Simulation?". http://www.simulation-argument.com/simulation.html.
- ↑ Nesbit, Jeff. "Proof Of The Simulation Argument". US News. https://www.usnews.com/news/blogs/at-the-edge/2012/12/17/proof-of-the-simulation-argument. Retrieved 17 March 2017.
- ↑ Rothman, Joshua (9 June 2016). "What Are the Odds We Are Living in a Computer Simulation?". The New Yorker. http://www.newyorker.com/books/joshua-rothman/what-are-the-odds-we-are-living-in-a-computer-simulation. Retrieved 17 March 2017.
- ↑ 39.0 39.1 Sutherland, John (9 May 2006). "The ideas interview: Nick Bostrom; John Sutherland meets a transhumanist who wrestles with the ethics of technologically enhanced human beings". The Guardian. https://www.theguardian.com/science/2006/may/09/academicexperts.genetics.
- ↑ Bostrom, Nick (2003). "Human Genetic Enhancements: A Transhumanist Perspective" (PDF). Journal of Value Inquiry 37 (4): 493–506. doi:10.1023/B:INQU.0000019037.67783.d5. http://cyber.law.harvard.edu/cyberlaw2005/sites/cyberlaw2005/images/Transhumanist_Perspective.pdf.
- ↑ Bostrom, Nick (2005). "In Defence of Posthuman Dignity". Bioethics 19 (3): 202–214. doi:10.1111/j.1467-8519.2005.00437.x. PMID 16167401.
- ↑ "The FP Top 100 Global Thinkers – 73. Nick Bostrom". Foreign Policy. December 2009. Archived from the original on 21 October 2014. https://web.archive.org/web/20141021111122/http://www.foreignpolicy.com/articles/2009/11/30/the_fp_top_100_global_thinkers?page=0,30.
- ↑ Bostrom, Nick; Ord, Toby (2006). "The reversal test: eliminating status quo bias in applied ethics" (PDF). Ethics 116 (4): 656–679. doi:10.1086/505233. http://www.nickbostrom.com/ethics/statusquo.pdf.
- ↑ Bostrom, Nick (2002). "Existential Risks: Analyzing Human Extinction Scenarios". Journal of Evolution and Technology 9. http://www.nickbostrom.com/existential/risks.html.
- ↑ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press. pp. 229–237. ISBN 0199678111.
- ↑ Bostrom, Nick (2013). "The Unilateralist’s Curse: The Case for a Principle of Conformity". Future of Humanity Institute. https://nickbostrom.com/papers/unilateralist.pdf.
- ↑ Lewis, Gregory. "Horsepox synthesis: A case of the unilateralist’s curse?". Bulletin of the Atomic Scientists. https://thebulletin.org/horsepox-synthesis-case-unilateralist%E2%80%99s-curse11523. Retrieved 26 February 2018.
- ↑ "Digital Skills Committee – timeline" (in en). http://www.parliament.uk/business/committees/committees-a-z/lords-select/digital-skills-committee/timeline/. Retrieved 17 March 2017.
- ↑ "Team – Machine Intelligence Research Institute". https://intelligence.org/team/#advisors. Retrieved 17 March 2017.
- ↑ "Team – Future of Life Institute". https://futureoflife.org/team/. Retrieved 17 March 2017.
- ↑ "FQXi – Foundational Questions Institute". http://fqxi.org/who. Retrieved 17 March 2017.
- ↑ "nickbostrom.com". Nickbostrom.com. http://www.nickbostrom.com/cv.html. Retrieved 19 February 2015.
- ↑ McBain, Sophie (4 October 2014). "Apocalypse Soon: Meet The Scientists Preparing For the End Times". https://newrepublic.com/article/119697/scientists-preparing-apocalypse. Retrieved 17 March 2017.
External links
- Nick Bostrom home page
- Superintelligence: Paths, Dangers, Strategies
- Bostrom's Anthropic Principle website, containing information about the anthropic principle and the Doomsday argument.
- Online copy of book, "Anthropic Bias: Observation Selection Effects in Science and Philosophy" (HTML, PDF)
- Bostrom's Simulation Argument website
- Bostrom's Existential Risk website
- Nick Bostrom on IMDb
- Nick Bostrom interviewed on the TV show Triangulation on the TWiT.tv network
- The 10 gatekeepers of humanity against the risks of AI, Hot Topics 2015
- The A.I. anxiety, The Washington Post, December 27, 2015.