Biography:Himabindu Lakkaraju

Short description: Indian-American computer scientist
Himabindu Lakkaraju
Alma mater: Indian Institute of Science; Stanford University
Scientific career
Institutions: University of Chicago; IBM Research; Microsoft Research; Harvard University
Thesis: Human-centric machine learning: enabling machine learning for high-stakes decision-making (2018)
Doctoral advisor: Jure Leskovec

Himabindu "Hima" Lakkaraju is an Indian-American computer scientist who works on machine learning, artificial intelligence, algorithmic bias, and AI accountability. She is currently an Assistant Professor at Harvard Business School and is also affiliated with the Department of Computer Science at Harvard University. Lakkaraju is known for her work on explainable machine learning. More broadly, her research focuses on developing machine learning models and algorithms that are interpretable, transparent, fair, and reliable. She also investigates the practical and ethical implications of deploying machine learning models in domains involving high-stakes decisions such as healthcare, criminal justice, business, and education. Lakkaraju was named one of the world's top Innovators Under 35 by both Vanity Fair and the MIT Technology Review.

She is also known for her efforts to make the field of machine learning more accessible to the general public. Lakkaraju co-founded the Trustworthy ML Initiative (TrustML) to lower entry barriers and promote research on interpretability, fairness, privacy, and robustness of machine learning models.[1] She has also developed several tutorials[2][3][4][5] and a full-fledged course on the topic of explainable machine learning.[6]

Early life and education

Lakkaraju obtained a master's degree in computer science from the Indian Institute of Science in Bangalore. As part of her master's thesis, she worked on probabilistic graphical models and developed semi-supervised topic models that can automatically extract sentiment and concepts from customer reviews.[7][8] This work was published at the SIAM International Conference on Data Mining, where it won the Best Research Paper Award.[9]

She then spent two years as a research engineer at IBM Research India, in Bangalore, before moving to Stanford University to pursue her PhD in computer science. Her doctoral thesis was advised by Jure Leskovec, and she also collaborated with Jon Kleinberg, Cynthia Rudin, and Sendhil Mullainathan during her PhD. Her doctoral research focused on developing interpretable and fair machine learning models that can complement human decision making in domains such as healthcare, criminal justice, and education.[10] This work was awarded the Microsoft Research Dissertation Grant[11] and the INFORMS Best Data Mining Paper prize.[12]

During her PhD, Lakkaraju spent a summer as a research fellow in the Data Science for Social Good program at the University of Chicago. As part of this program, she collaborated with Rayid Ghani to develop machine learning models that identify at-risk students and prescribe appropriate interventions. This research was adopted by schools in Montgomery County, Maryland.[13] Lakkaraju also worked as a research intern and visiting researcher at Microsoft Research, Redmond, during her PhD, collaborating with Eric Horvitz to develop human-in-the-loop algorithms for identifying blind spots of machine learning models.[14]

Research and career

Lakkaraju's doctoral research focused on developing and evaluating interpretable, transparent, and fair predictive models which can assist human decision makers (e.g., doctors, judges) in domains such as healthcare, criminal justice, and education.[10] As part of her doctoral thesis, she developed algorithms for automatically constructing interpretable rules for classification[15] and other complex decisions which involve trade-offs.[16] Lakkaraju and her co-authors also highlighted the challenges associated with evaluating predictive models in settings with missing counterfactuals and unmeasured confounders, and developed new computational frameworks for addressing these challenges.[17][18] She co-authored a study which demonstrated that when machine learning models are used to assist in making bail decisions, they can help reduce crime rates by up to 24.8% without exacerbating racial disparities.[18][19]

Lakkaraju joined Harvard University as a postdoctoral researcher in 2018, and then became an assistant professor at Harvard Business School and the Department of Computer Science at Harvard University in 2020.[20][21] Since then, she has done pioneering work in explainable machine learning. She initiated the study of adaptive and interactive post hoc explanations,[22][23] which explain the behavior of complex machine learning models in a manner tailored to user preferences.[24][25] She and her collaborators also made one of the first attempts at identifying and formalizing the vulnerabilities of popular post hoc explanation methods.[26] They demonstrated how adversaries can game popular explanation methods and elicit explanations that hide undesirable biases (e.g., racial or gender biases) of the underlying models. Lakkaraju also co-authored a study which demonstrated that domain experts may not always interpret post hoc explanations correctly, and that adversaries could exploit post hoc explanations to manipulate experts into trusting and deploying biased models.[27]

She also worked on improving the reliability of explanation methods. She and her collaborators developed novel theory[28] and methods[29][30] to analyze and improve the robustness of different classes of post hoc explanation methods by proposing a unified theoretical framework and establishing the first known connections between explainability and adversarial training. Lakkaraju has also made important research contributions to the field of algorithmic recourse. She and her co-authors developed one of the first methods which allows decision makers to vet predictive models thoroughly to ensure that the recourse provided is meaningful and non-discriminatory.[25] Her research has also highlighted critical flaws in several popular approaches in the literature of algorithmic recourse.[31]

Trustworthy ML Initiative (TrustML)

In 2020, Lakkaraju co-founded the Trustworthy ML Initiative (TrustML) to democratize and promote research in the field of trustworthy machine learning, which broadly encompasses interpretability, fairness, privacy, and robustness of machine learning models.[1] The initiative aims to give newcomers easy access to fundamental resources in the field, provide a platform for early-career researchers to showcase their work, and, more broadly, develop a community of researchers and practitioners working on topics related to trustworthy ML.

Lakkaraju has developed several tutorials[2][3][4][5] and a full-fledged course on explainable machine learning[6] as part of this initiative.

Awards and honors

Rising Stars in EECS, 2016[35]

MIT Technology Review 35 Innovators Under 35, 2019[33]

Vanity Fair Future Innovators Index, 2019[34]

Amazon Research Award, 2020[32]

External links

A course on "Interpretability and Explainability in Machine Learning", 2019

NeurIPS conference tutorial on "Explaining Machine Learning Predictions: State-of-the-art, Challenges, and Opportunities", 2020

AAAI conference tutorial on "Explaining Machine Learning Predictions: State-of-the-art, Challenges, and Opportunities", 2021

CHIL conference tutorial on "Explainable ML: Understanding the Limits and Pushing the Boundaries", 2021

Selected publications

References

  1. "Trustworthy ML" (in en-US). https://www.trustworthyml.org/home. 
  2. "NeurIPS 2020 Tutorial on Explainable ML". https://explainml-tutorial.github.io/neurips20. 
  3. "AAAI 2021 Tutorial on Explainable ML". https://explainml-tutorial.github.io/aaai21. 
  4. "FAccT 2021 Tutorial on Explainable ML in the Wild". https://facctconference.org/2021/acceptedtuts.html#Explainable_ML. 
  5. "CHIL Conference 2021 Tutorial on Limits of Explainable ML". https://www.chilconference.org/tutorial_T04.html. 
  6. "A Course on Interpretability and Explainability in ML". https://interpretable-ml-class.github.io/. 
  7. Lakkaraju, Himabindu; Bhattacharyya, Chiranjib; Bhattacharya, Indrajit; Merugu, Srujana (2011). "Proceedings of the 2011 SIAM International Conference on Data Mining". pp. 498–509. doi:10.1137/1.9781611972818.43. ISBN 978-0-89871-992-5. http://eprints.iisc.ac.in/46014/1/siam_int_con_dat_min_498_2011.pdf. 
  8. "Indian Institute of Science" (in en). https://iisc.ac.in/events/alumna-himabindu-lakkaraju-has-been-featured-in-the-mit-technology-reviews-35-innovators-under-35-for-her-research-on-using-ai-for-social-good/. 
  9. "SIAM SDM Best Paper Award". https://mllab.csa.iisc.ac.in/awards/. 
  10. "Human-Centric Machine Learning: Enabling Machine Learning for High-Stakes Decision-Making | Computer Science". https://cse.ucsd.edu/about/human-centric-machine-learning-enabling-machine-learning-high-stakes-decision-making. 
  11. "Microsoft Research Dissertation Grant Winners". June 27, 2017. https://www.microsoft.com/en-us/research/blog/dissertation-grant-program-winners/. 
  12. "Curriculum Vitae, Lakkaraju". https://himalakkaraju.github.io/HimaCV.pdf. 
  13. "Himabindu Lakkaraju" (in en). https://www.technologyreview.com/innovator/himabindu-lakkaraju/. 
  14. Lakkaraju, Himabindu; Kamar, Ece; Caruana, Rich; Horvitz, Eric (2016). "Identifying unknown unknowns in the open world: representations and policies for guided exploration". AAAI Conference on Artificial Intelligence: 2124–2132. https://dl.acm.org/doi/10.5555/3298483.3298546. 
  15. Lakkaraju, Himabindu; Bach, Stephen H.; Leskovec, Jure (August 1, 2016). "Interpretable Decision Sets: A Joint Framework for Description and Prediction" (in English). Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2016. pp. 1675–1684. doi:10.1145/2939672.2939874. ISBN 9781450342322. https://www.wikidata.org/wiki/Q41880399. 
  16. "Learning Cost-Effective and Interpretable Treatment Regimes". International Conference on Artificial Intelligence and Statistics (AISTATS). http://proceedings.mlr.press/v54/lakkaraju17a/lakkaraju17a.pdf. 
  17. Lakkaraju, H.; Kleinberg, J.; Leskovec, J.; Ludwig, J.; Mullainathan, S. (2017). "The Selective Labels Problem: Evaluating Algorithmic Predictions in the Presence of Unobservables". Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2017. pp. 275–284. doi:10.1145/3097983.3098066. ISBN 9781450348874. 
  18. Kleinberg, Jon Michael; Lakkaraju, Himabindu; Leskovec, Jure; Ludwig, Jens; Mullainathan, Sendhil (February 1, 2017). "Human Decisions and Machine Predictions" (in English). The Quarterly Journal of Economics 133 (1): 237–293. doi:10.3386/W23180. PMID 29755141. PMC 5947971. https://www.wikidata.org/wiki/Q105835173. 
  19. "Research Statement – Lakkaraju". https://himalakkaraju.github.io/one-pager.pdf. 
  20. "Himabindu Lakkaraju; ASSISTANT PROFESSOR OF BUSINESS ADMINISTRATION". https://www.hbs.edu/faculty/Pages/profile.aspx?facId=1057381. 
  21. "SEAS Harvard – Lakkaraju". https://www.seas.harvard.edu/person/hima-lakkaraju-0. 
  22. "National Science Foundation, in collaboration with Amazon, awards 11 Fairness in AI grant projects" (in en). February 10, 2021. https://www.amazon.science/academic-engagements/national-science-foundation-in-collaboration-with-amazon-awards-11-fairness-in-ai-grant-projects. 
  23. "NSF Award Search: Award # 2040989 – FAI: Towards Adaptive and Interactive Post Hoc Explanations". https://www.nsf.gov/awardsearch/showAward?AWD_ID=2040989&HistoricalAwards=false. 
  24. Lakkaraju, Himabindu; Kamar, Ece; Caruana, Rich; Leskovec, Jure (2019). "Faithful and Customizable Explanations of Black Box Models". Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. 2020. pp. 131–138. doi:10.1145/3306618.3314229. ISBN 9781450363242. 
  25. Rawal, Kaivalya; Lakkaraju, Himabindu (2020). "Beyond Individualized Recourse: Interpretable and Interactive Summaries of Actionable Recourses". Advances in Neural Information Processing Systems 2020. https://papers.nips.cc/paper/2020/file/8ee7730e97c67473a424ccfeff49ab20-Paper.pdf. 
  26. Slack, Dylan; Hilgard, Sophie; Jia, Emily; Singh, Sameer; Lakkaraju, Himabindu (February 7, 2020). "Fooling LIME and SHAP". Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. 2019. pp. 180–186. doi:10.1145/3375627.3375830. ISBN 9781450371100. 
  27. Lakkaraju, Himabindu; Bastani, Osbert (February 7, 2020). ""How do I fool you?"". Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. pp. 79–85. doi:10.1145/3375627.3375833. ISBN 9781450371100. 
  28. Agarwal, Sushant; Jabbari, Shahin; Agarwal, Chirag; Upadhyay, Sohini; Zhiwei Steven Wu; Lakkaraju, Himabindu (2021). "Towards the Unification and Robustness of Perturbation and Gradient Based Explanations". International Conference on Machine Learning 2021. 
  29. Lakkaraju, Himabindu; Arsov, Nino; Bastani, Osbert (2020). "Robust and Stable Black Box Explanations". International Conference on Machine Learning 2020. http://proceedings.mlr.press/v119/lakkaraju20a.html. 
  30. Slack, Dylan; Hilgard, Sophie; Singh, Sameer; Lakkaraju, Himabindu (2020). "How Much Should I Trust You? Modeling Uncertainty of Black Box Explanations". arXiv:2008.05030 [cs.LG].
  31. Rawal, Kaivalya; Kamar, Ece; Lakkaraju, Himabindu (2020). "Understanding the Impact of Distribution Shifts on Algorithmic Recourse". arXiv:2012.11788 [cs.LG].
  32. "2020 Amazon Research Awards Recipients Announced". April 27, 2021. https://www.amazon.science/research-awards/program-updates/2020-amazon-research-awards-recipients-announced. 
  33. "Meet the Innovators Under 35 – MIT Technology Review". http://events.technologyreview.com/video/watch/himabindu-lakkaraju-humanitarians-tr35-2019/. 
  34. Fair, Vanity (October 3, 2019). "The Future Innovators Index 2019" (in en-US). Vanity Fair. https://www.vanityfair.com/news/2019/10/future-innovators-index-2019. Retrieved April 26, 2021. 
  35. "ABOUT | Rising Stars in EECS: 2016" (in en-US). http://risingstars.ece.cmu.edu/about/.