David A. McAllester
| David A. McAllester | |
| --- | --- |
| Born | May 30, 1956, United States |
| Alma mater | Massachusetts Institute of Technology |
| Known for | Artificial intelligence |
| Awards | AAAI Classic Paper Award (2010)[1]; International Conference on Logic Programming Test of Time Award (2014)[2] |
| Scientific career | |
| Fields | Computer science, artificial intelligence, machine learning |
| Institutions | Massachusetts Institute of Technology; Toyota Technological Institute at Chicago |
| Doctoral advisor | Gerald Sussman |
David A. McAllester (born May 30, 1956) is an American computer scientist who is a professor and former chief academic officer at the Toyota Technological Institute at Chicago. He received his B.S., M.S., and Ph.D. degrees from the Massachusetts Institute of Technology in 1978, 1979, and 1987, respectively. His Ph.D. was supervised by Gerald Sussman. He was on the faculty of Cornell University for the academic year 1987–1988 and on the faculty of MIT from 1988 to 1995. He was a member of technical staff at AT&T Labs-Research from 1995 to 2002. He has been a fellow of the American Association for Artificial Intelligence since 1997.[3] He has written over 100 refereed publications.
McAllester's research areas include machine learning theory, the theory of programming languages, automated reasoning, AI planning, computer game playing (computer chess) and computational linguistics. A 1991 paper on AI planning[4] proved to be one of the most influential papers of the decade in that area.[5] A 1993 paper on computer game algorithms[6] influenced the design of the algorithms used in the Deep Blue chess system that defeated Garry Kasparov.[7] A 1998 paper on machine learning theory[8] introduced PAC-Bayesian theorems which combine Bayesian and non-Bayesian methods.
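For context, a commonly cited form of a PAC-Bayesian generalization bound in the tradition McAllester initiated reads as follows; the exact constants vary across formulations, and this sketch is not necessarily the precise statement of the 1998 paper. With probability at least $1-\delta$ over an i.i.d. sample of size $m$, for every distribution $Q$ over hypotheses (the "posterior") and any data-independent distribution $P$ (the "prior"),

$$
\mathbb{E}_{h \sim Q}\big[L(h)\big] \;\le\; \mathbb{E}_{h \sim Q}\big[\hat{L}(h)\big] \;+\; \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{m}{\delta}}{2(m-1)}},
$$

where $L$ and $\hat{L}$ denote the true and empirical error and $\mathrm{KL}$ is the Kullback–Leibler divergence. The Bayesian prior and posterior enter only through the KL term, while the guarantee itself holds uniformly and does not assume the prior is correct, which is the sense in which Bayesian and non-Bayesian methods are combined.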
Opinions on artificial intelligence
McAllester has voiced concerns about the potential dangers of artificial intelligence, writing in an article for the Pittsburgh Tribune-Review that it is inevitable that fully automated intelligent machines will be able to design and build smarter, better versions of themselves, an event known as the singularity. The singularity would enable machines to become infinitely intelligent, and would pose an "incredibly dangerous scenario". McAllester estimates a 10 percent probability of the singularity occurring within 25 years, and a 90 percent probability of it occurring within 75 years.[9] He served on the AAAI Presidential Panel on Long-Term AI Futures in 2009,[10] and considers the dangers of superintelligent AI worth taking seriously:
I am uncomfortable saying that we are ninety-nine per cent certain that we are safe for fifty years... That feels like hubris to me.[11]
He was later described as discussing the singularity at the panel in terms of two major milestones in artificial intelligence:
1. Operational Sentience: We can easily converse with computers.
2. The AI Chain Reaction: A computer that bootstraps itself to a better self. Repeat.[12]
McAllester has also written about friendly artificial intelligence on his blog. He argues that before machines become capable of programming themselves (potentially leading to the singularity), there should be a period in which they are only moderately intelligent, during which it should be possible to test giving them a purpose or mission that would render them safe to humans:
I personally believe that it is likely that within a decade agents will be capable of compelling conversation about the everyday events that are the topics of non-technical dinner conversations. I think this will happen long before machines can program themselves leading to an intelligence explosion. The early stages of artificial general intelligence (AGI) will be safe. However, the early stages of AGI will provide an excellent test bed for the servant mission or other approaches to friendly AI ... If there is a coming era of safe (not too intelligent) AGI then we will have time to think further about later more dangerous eras.[13]
References
- ↑ "AAAI Classic Paper Award". AAAI. 2016. http://www.aaai.org/Awards/classic.php.
- ↑ "Pascal's paper stands the test of time". Australian National University. 23 April 2014. https://cs.anu.edu.au/news/pascals-paper-stands-test-time.
- ↑ "David McAllester biography". Toyota Technological Institute at Chicago. http://ttic.uchicago.edu/~dmcallester/bio.html.
- ↑ McAllester, David; Rosenblitt, David (December 1991). "Systematic Nonlinear Planning". Proceedings AAAI-91 (AAAI): 634–639. http://www.aaai.org/Papers/AAAI/1991/AAAI91-099.pdf. Retrieved 19 August 2016.
- ↑ "Google Scholar Citations". 2016. https://scholar.google.com/citations?view_op=view_citation&hl=en&user=nbpafUkAAAAJ&citation_for_view=nbpafUkAAAAJ:u5HHmVD_uO8C.
- ↑ McAllester, David; Yuret, Deniz (20 October 1993). Alpha-Beta-Conspiracy Search. Draft.
- ↑ Campbell, Murray S.; Joseph Hoane, Jr., A.; Hsu, Feng-hsiung (1999). "Search Control Methods in Deep Blue". AAAI Technical Report SS-99-07 (AAAI): 19–23. http://aaaipress.org/Papers/Symposia/Spring/1999/SS-99-07/SS99-07-004.pdf. Retrieved 16 August 2016. "To the best of our knowledge, the idea of separating the white and black depth computation was first suggested by David McAllester. A later paper (McAllester and Yuret 1993) derived an algorithm, ABC, from conspiracy theory (McAllester 1988).".
- ↑ McAllester, David (1998). "Some PAC-Bayesian theorems". Proceedings of the eleventh annual conference on Computational learning theory - COLT' 98. Association for Computing Machinery. pp. 230–234. doi:10.1145/279943.279989. ISBN 978-1581130577. http://dl.acm.org/citation.cfm?id=279989. Retrieved 19 August 2016.
- ↑ Cronin, Mike (2 November 2009). "Futurists' report reviews dangers of smart robots". Pittsburgh Tribune-Review. http://triblive.com/x/pittsburghtrib/news/s_651056.html.
- ↑ "Asilomar Meeting on Long-Term AI Futures". 2009. http://research.microsoft.com/en-us/um/people/horvitz/AAAI_Presidential_Panel_2008-2009.htm.
- ↑ Khatchadourian, Raffi (23 November 2015). "The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?". The New Yorker. http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom. Retrieved 23 August 2016.
- ↑ Fortnow, Lance (31 July 2009). "The Singularity". http://blog.computationalcomplexity.org/2009/07/singularity.html.
- ↑ McAllester, David (10 August 2014). "Friendly AI and the Servant Mission". WordPress. https://machinethoughts.wordpress.com/2014/08/10/friendly-ai-and-the-servant-mission/.
External links
- David McAllester's academic page at TTIC.
- Machine Thoughts, David McAllester's personal blog.
- David Allen McAllester at the Mathematics Genealogy Project.
Original source: https://en.wikipedia.org/wiki/David_A._McAllester.