Biography:Dan Hendrycks

Dan Hendrycks
Education: University of Chicago (B.S., 2018); University of California, Berkeley (Ph.D., 2022)
Fields: Machine learning
Institutions: UC Berkeley; Center for AI Safety

Dan Hendrycks is an American machine learning researcher. He serves as the director of the Center for AI Safety.

Education

Hendrycks received a B.S. from the University of Chicago in 2018 and a Ph.D. in computer science from the University of California, Berkeley, in 2022.[1]

Career and research

Hendrycks' research focuses on machine learning safety, machine ethics, and robustness.

In February 2022, Hendrycks co-authored recommendations for the US National Institute of Standards and Technology (NIST) to inform the management of risks from artificial intelligence.[2][3]

In September 2022, Hendrycks wrote a paper providing a framework for analyzing the impact of AI research on societal risks.[4][5] He later published a paper in March 2023 examining how natural selection and competitive pressures could shape the goals of artificial agents.[6][7][8] This was followed by "An Overview of Catastrophic AI Risks", which discusses four categories of risks: malicious use, AI race dynamics, organizational risks, and rogue AI agents.[9][10]

Selected publications

  • Hendrycks, Dan; Gimpel, Kevin (2020-07-08). "Gaussian Error Linear Units (GELUs)". arXiv:1606.08415 [cs.LG].
  • Hendrycks, Dan; Gimpel, Kevin (2018-10-03). "A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks". International Conference on Learning Representations 2017. 
  • Hendrycks, Dan; Mazeika, Mantas; Dietterich, Thomas (2019-01-28). "Deep Anomaly Detection with Outlier Exposure". International Conference on Learning Representations 2019. 
  • Hendrycks, Dan; Mazeika, Mantas; Zou, Andy (2021-10-25). "What Would Jiminy Cricket Do? Towards Agents That Behave Morally". Conference on Neural Information Processing Systems 2021. 
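The first publication above introduced the GELU activation function, which weights an input by the standard Gaussian cumulative distribution function, GELU(x) = x·Φ(x). A minimal sketch of the two forms given in that paper (the exact form via the error function, and the faster tanh approximation the paper also proposes):

```python
import math

def gelu(x: float) -> float:
    """Exact Gaussian Error Linear Unit: x * Phi(x),
    where Phi is the standard normal CDF, written via erf."""
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x: float) -> float:
    """The paper's tanh-based approximation of GELU."""
    return 0.5 * x * (1.0 + math.tanh(
        math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))
```

Unlike ReLU, which gates inputs by their sign, GELU gates them by how large they are relative to other inputs under a Gaussian, giving a smooth, non-monotonic curve for small negative values.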

References



  1. "Dan Hendrycks". https://people.eecs.berkeley.edu/~hendrycks/. 
  2. "Nvidia moves into A.I. services and ChatGPT can now use your credit card". https://fortune.com/2023/03/28/nvidia-moves-into-a-i-services-and-chatgpt-can-now-use-your-credit-card/. 
  3. "Request for Information to the Update of the National Artificial Intelligence Research and Development Strategic Plan: Responses". March 2022. https://www.ai.gov/rfi/2022/87-FR-5876/NAIRDSP-RFI-2022-Newman-UC-Berkley.pdf. 
  4. Hendrycks, Dan; Mazeika, Mantas (2022-06-13). "X-Risk Analysis for AI Research". arXiv:2206.05862v7 [cs.CY].
  5. Gendron, Will. "An AI safety expert outlined a range of speculative doomsday scenarios, from weaponization to power-seeking behavior". https://www.businessinsider.com/ai-safety-expert-research-speculates-dangers-doomsday-scenarios-weaponization-deception-2023-4. 
  6. Hendrycks, Dan (2023-03-28). "Natural Selection Favors AIs over Humans". arXiv:2303.16200 [cs.CY].
  7. Colton, Emma (2023-04-03). "AI could go 'Terminator,' gain upper hand over humans in Darwinian rules of evolution, report warns". https://www.foxnews.com/tech/ai-could-go-terminator-gain-upper-hand-over-humans-in-darwinian-rules-of-evolution-expert-warns. 
  8. Klein, Ezra (2023-04-07). "Why A.I. Might Not Take Your Job or Supercharge the Economy". The New York Times. https://www.nytimes.com/2023/04/07/opinion/ezra-klein-podcast-ama-april2023.html. 
  9. Hendrycks, Dan; Mazeika, Mantas; Woodside, Thomas (2023). "An Overview of Catastrophic AI Risks". arXiv:2306.12001 [cs.CY].
  10. Scharfenberg, David (July 6, 2023). "Dan Hendrycks wants to save us from an AI catastrophe. He's not sure he'll succeed". The Boston Globe. https://www.bostonglobe.com/2023/07/06/opinion/ai-safety-human-extinction-dan-hendrycks-cais/.