Dan Hendrycks

Dan Hendrycks (born 1994/1995[1]) is an American machine learning researcher. He serves as the director of the Center for AI Safety, a nonprofit organization based in San Francisco, California.

Early life and education

Hendrycks was raised in a Christian evangelical household in Marshfield, Missouri.[2][3] He received a B.S. from the University of Chicago in 2018 and a Ph.D. in computer science from the University of California, Berkeley, in 2022.[4]

Career and research

Hendrycks' research focuses on topics that include machine learning safety, machine ethics, and robustness.

He credits the 80,000 Hours program, which is linked to the effective altruism (EA) movement, with steering his career toward AI safety, though he denies being an advocate for EA.[2]

Hendrycks is the lead author of the paper that introduced the activation function GELU in 2016,[5] and of the paper that introduced the language model benchmark MMLU (Massive Multitask Language Understanding) in 2020.[6][7]
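The GELU (Gaussian Error Linear Unit) weights each input by the standard normal cumulative distribution function, GELU(x) = x·Φ(x); the paper also gives a tanh-based approximation. A minimal sketch of both forms, using only the Python standard library:

```python
import math

def gelu(x: float) -> float:
    """Exact GELU: x * Phi(x), where Phi is the standard normal CDF."""
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x: float) -> float:
    """Tanh approximation given in the GELU paper."""
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))
```

For small |x| the two forms agree to within roughly 1e-3; deep learning frameworks commonly expose both variants.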

In February 2022, Hendrycks co-authored recommendations for the US National Institute of Standards and Technology (NIST) to inform the management of risks from artificial intelligence.[8][9]

In September 2022, Hendrycks wrote a paper providing a framework for analyzing the impact of AI research on societal risks.[10][11] He later published a paper in March 2023 examining how natural selection and competitive pressures could shape the goals of artificial agents.[12][13][14] This was followed by "An Overview of Catastrophic AI Risks", which discusses four categories of risks: malicious use, AI race dynamics, organizational risks, and rogue AI agents.[15][16]

Hendrycks is the safety adviser of xAI, an AI startup founded by Elon Musk in 2023. To avoid potential conflicts of interest, he receives a symbolic one-dollar salary and holds no company equity.[1][17] In November 2024, he also joined Scale AI as an advisor, likewise for a one-dollar salary.[18] Hendrycks is the creator of Humanity's Last Exam, a benchmark for evaluating the capabilities of large language models, which he developed in collaboration with Scale AI.[19][20]

In 2024, Hendrycks published a 568-page textbook, Introduction to AI Safety, Ethics, and Society, based on courseware he had previously developed.[21]

Selected publications

  • Hendrycks, Dan; Gimpel, Kevin (2020-07-08). "Gaussian Error Linear Units (GELUs)". arXiv:1606.08415 [cs.LG].
  • Hendrycks, Dan; Gimpel, Kevin (2018-10-03). "A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks". International Conference on Learning Representations 2017. 
  • Hendrycks, Dan; Mazeika, Mantas; Dietterich, Thomas (2019-01-28). "Deep Anomaly Detection with Outlier Exposure". International Conference on Learning Representations 2019. 
  • Hendrycks, Dan; Mazeika, Mantas; Zou, Andy (2021-10-25). "What Would Jiminy Cricket Do? Towards Agents That Behave Morally". Conference on Neural Information Processing Systems 2021. 

References

  1. Henshall, Will (September 7, 2023). "Time 100 AI: Dan Hendrycks". Time. https://time.com/collection/time100-ai/6309050/dan-hendrycks/. 
  2. Scharfenberg, David (July 6, 2023). "Dan Hendrycks wants to save us from an AI catastrophe. He's not sure he'll succeed.". The Boston Globe. https://www.bostonglobe.com/2023/07/06/opinion/ai-safety-human-extinction-dan-hendrycks-cais/. 
  3. Castaldo, Joe (June 23, 2023). "'I hope I'm wrong': Why some experts see doom in AI". The Globe and Mail. https://www.theglobeandmail.com/business/article-i-hope-im-wrong-why-some-experts-see-doom-in-ai/. 
  4. "Dan Hendrycks". https://people.eecs.berkeley.edu/~hendrycks/. 
  5. Hendrycks, Dan; Gimpel, Kevin (2023-06-06), Gaussian Error Linear Units (GELUs) 
  6. Hendrycks, Dan; Burns, Collin; Basart, Steven; Zou, Andy; Mazeika, Mantas; Song, Dawn; Steinhardt, Jacob (2021-01-12), Measuring Massive Multitask Language Understanding 
  7. Roose, Kevin (2024-04-15). "A.I. Has a Measurement Problem" (in en-US). The New York Times. ISSN 0362-4331. https://www.nytimes.com/2024/04/15/technology/ai-models-measurement.html. 
  8. "Nvidia moves into A.I. services and ChatGPT can now use your credit card" (in en). https://fortune.com/2023/03/28/nvidia-moves-into-a-i-services-and-chatgpt-can-now-use-your-credit-card/. 
  9. "Request for Information to the Update of the National Artificial Intelligence Research and Development Strategic Plan: Responses". March 2022. https://www.ai.gov/rfi/2022/87-FR-5876/NAIRDSP-RFI-2022-Newman-UC-Berkley.pdf. 
  10. Hendrycks, Dan; Mazeika, Mantas (2022-06-13). "X-Risk Analysis for AI Research". arXiv:2206.05862v7 [cs.CY].
  11. Gendron, Will. "An AI safety expert outlined a range of speculative doomsday scenarios, from weaponization to power-seeking behavior" (in en-US). https://www.businessinsider.com/ai-safety-expert-research-speculates-dangers-doomsday-scenarios-weaponization-deception-2023-4. 
  12. Hendrycks, Dan (2023-03-28). "Natural Selection Favors AIs over Humans". arXiv:2303.16200 [cs.CY].
  13. Colton, Emma (2023-04-03). "AI could go 'Terminator,' gain upper hand over humans in Darwinian rules of evolution, report warns" (in en-US). https://www.foxnews.com/tech/ai-could-go-terminator-gain-upper-hand-over-humans-in-darwinian-rules-of-evolution-expert-warns. 
  14. Klein, Ezra (2023-04-07). "Why A.I. Might Not Take Your Job or Supercharge the Economy" (in en-US). The New York Times. https://www.nytimes.com/2023/04/07/opinion/ezra-klein-podcast-ama-april2023.html. 
  15. Hendrycks, Dan; Mazeika, Mantas; Woodside, Thomas (2023). "An Overview of Catastrophic AI Risks". arXiv:2306.12001 [cs.CY].
  16. Scharfenberg, David (July 6, 2023). "Dan Hendrycks wants to save us from an AI catastrophe. He's not sure he'll succeed.". The Boston Globe. https://www.bostonglobe.com/2023/07/06/opinion/ai-safety-human-extinction-dan-hendrycks-cais/. 
  17. Lovely, Garrison (January 22, 2024). "Can Humanity Survive AI?". Jacobin. https://jacobin.com/2024/01/can-humanity-survive-ai. 
  18. Goldman, Sharon (2024-11-14). "Elon Musk's xAI safety whisperer just became an advisor to Scale AI". Fortune. https://fortune.com/2024/11/13/scale-ai-dan-hendrycks-elon-musk-xai-safety-trump-ties/. 
  19. Roose, Kevin (2025-01-23). "When A.I. Passes This Test, Look Out" (in en-US). The New York Times. ISSN 0362-4331. https://www.nytimes.com/2025/01/23/technology/ai-test-humanitys-last-exam.html. 
  20. Dastin, Jeffrey; Paul, Katie (2024-09-16). "AI experts ready 'Humanity's Last Exam' to stump powerful tech". Reuters. https://www.reuters.com/technology/artificial-intelligence/ai-experts-ready-humanitys-last-exam-stump-powerful-tech-2024-09-16/. 
  21. "AI Safety, Ethics, and Society Textbook". https://www.aisafetybook.com/.