Hanjie Chen
Department of Computer Science
Address: Duncan Hall 2081, 6100 Main St, Houston, TX 77005
Email: hanjie@rice.edu
About Me
I am an Assistant Professor in the Computer Science Department @ Rice University. I am also affiliated with the Ken Kennedy Institute. I am broadly interested in Natural Language Processing, Interpretable Machine Learning, and Trustworthy AI. My research focuses on understanding the properties, mechanisms, and capabilities of neural language models, enabling their alignment, interaction, and collaboration with humans, and enhancing their impact on real-world applications such as medicine, healthcare, sports, and more. By developing explainable AI techniques, I aim to make intelligent systems controllable by system developers, accessible to general users, applicable to various domains, and beneficial to society.
Prior to joining Rice, I was a Postdoctoral Fellow in the Center for Language and Speech Processing @ Johns Hopkins University from 2023 to 2024, hosted by Dr. Mark Dredze. I completed my Ph.D. in Computer Science in May 2023 at the University of Virginia, where my advisor was Dr. Yangfeng Ji.
🎓 Received the Outstanding Doctoral Student Award, UVA, 2023
🎉 Received the John A. Stankovic Graduate Research Award, UVA, 2023
🎉 Received the Carlos and Esther Farrar Fellowship, 2022 - 2023
👩‍🏫 Received the UVA CS Outstanding Graduate Teaching Award and was a University-wide Graduate Teaching Awards nominee (top 5% of graduate instructors) for CS 6501/4501 Interpretable Machine Learning, a course I co-designed and instructed at UVA in Spring 2022
📝 I maintain a Reading List of interesting papers
Updates
- [07/2024] Invited to serve as a Senior Area Chair for ACL 2025
- [06/2024] Tutorial on Explanation in the Era of Large Language Models @ NAACL 2024
- [06/2024] Invited to serve as an Area Chair for EMNLP 2024
- [04/2024] New preprint, Will the Real Linda Please Stand up...to Large Language Models? Examining the Representativeness Heuristic in LLMs, arXiv
- [03/2024] Invited talk at How Sustainable is Artificial Intelligence?
- [02/2024] New preprint, Benchmarking Large Language Models on Answering and Explaining Challenging Medical Questions, arXiv
- [02/2024] New preprint, RORA: Robust Free-Text Rationale Evaluation, arXiv
- [01/2024] Co-teaching the course Trustworthy and Responsible NLP with Sharon Levy, Spring 2024, JHU
- [12/2023] Co-hosted the 22nd MLNLP Seminar
- [11/2023] Guest lecture on Interpretable and Explainable NLP at 601.467/667 Introduction to Human Language Technology @ JHU
- [11/2023] Guest lecture on Interpretable and Explainable NLP at CSCI 699 Ethics in NLP @ USC
- [10/2023] I will co-organize the BlackboxNLP 2024 Workshop at EMNLP 2024
- [10/2023] Our tutorial Explanation in the Era of Large Language Models has been accepted to appear at NAACL 2024
- [09/2023] New preprint, Explainability for Large Language Models: A Survey, arXiv
- [05/2023] REV: Information-Theoretic Evaluation of Free-Text Rationales has been accepted to ACL 2023
- [03/2023] Invited talk on Bridging the Trustworthy Gap between AI and Humans: Interpretation Techniques for Modern NLP at the CLSP Seminar @ Johns Hopkins University
- [12/2022] Presentation on Information-Theoretic Evaluation of Free-Text Rationales with Conditional V-Information at Trustworthy and Socially Responsible Machine Learning (TSRML) Workshop @ NeurIPS
- [11/2022] Presentation on Explaining Predictive Uncertainty by Looking Back at Model Explanations at WiML Workshop 2022 @ NeurIPS
- [10/2022] Talk on REV: Information-Theoretic Evaluation of Free-Text Rationales @ Allen Institute for AI (AI2)
- [05/2022] Paper presentation on Pathologies of Pre-trained Language Models in Few-shot Fine-tuning at Insights Workshop @ ACL 2022
- [02/2022] Paper presentation on Adversarial Training for Improving Model Robustness? Look at Both Prediction and Interpretation @ AAAI 2022
- [12/2021] Invited talk on Improving Model Robustness via Interpretation-based Adversarial Training @ MLNLP
- [12/2021] Presentation on Adversarial Training for Improving Model Robustness? Look at Both Prediction and Interpretation at WiML Workshop 2021 @ NeurIPS
- [12/2021] Presentation on Adversarial Training for Improving Model Robustness? Look at Both Prediction and Interpretation @ 2021 Fall UVA CS Research Symposium
- [06/2021] Paper presentation on Explaining Neural Network Predictions on Sentence Pairs via Learning Word-Group Masks @ NAACL 2021
- [04/2021] Participated in the 2021 CRA-WP Grad Cohort for Women Workshop
- [03/2021] Poster presentation @ ACM Capital Region Celebration of Women in Computing (CAPWIC)
- [11/2020] Paper presentation on Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers @ EMNLP 2020
- [08/2020] Presentation on Learning Variational Masks for Explainable Next Utterance Prediction in Dialog Systems at IBM Research
- [07/2020] Paper presentation on Generating Hierarchical Explanations on Text Classification via Feature Interaction Detection @ ACL 2020
- [12/2019] Poster presentation on Improving the Explainability of Neural Sentiment Classifiers via Data Augmentation @ NeurIPS 2019 Workshop on Robust AI in Financial Services
- [10/2019] Poster presentation on Building Hierarchical Interpretations in Natural Language via Feature Interaction Detection @ UVA CS Research Symposium Fall 2019
- [04/2019] Invited talk on How to Train a More Interpretable Neural Text Classifier, AIML-Seminar @ UVA
Research Experience
- Johns Hopkins University, Baltimore, MD, Jun. 2023 - Jun. 2024
Center for Language and Speech Processing
- Postdoctoral Fellow
- Advisor: Mark Dredze
- University of Virginia, Charlottesville, VA, Aug. 2018 - May 2023
Information and Language Processing Lab
- Research Assistant
- Advisor: Yangfeng Ji
- Allen Institute for AI (AI2), Seattle, WA, May 2022 - Oct. 2022
Mosaic Group
- Research Intern
- Manager: Yejin Choi
- Mentors: Swabha Swayamdipta, Faeze Brahman, Xiang Ren
- Microsoft Research, Redmond, WA, May 2021 - Aug. 2021
Language and Information Technologies Group
- Research Intern
- Manager: Ahmed H. Awadallah
- Mentors: Guoqing Zheng, Srinagesh Sharma
- IBM Research, New York, NY, Jun. 2020 - Aug. 2020
Thomas J. Watson Research Center
- Research Intern
- Manager: Luis Lastras
- Mentors: Chulaka Gunasekara, Song Feng, Hui Wan, Jatin Ganhotra, Sachindra Joshi
Professional Service
- Organizer: BlackboxNLP Workshop @ EMNLP 2024
- Senior Area Chair: ACL 2025
- Area Chair: EMNLP 2024, ACL ARR 2024 - now, WiML Workshop @ NeurIPS 2022
- Program Committee: COLING 2025, ACL 2023, AAAI 2023, EMNLP 2021 - 2023, NAACL 2021, EACL 2023, CoNLL 2021 - 2022, NLPCC 2022, ACL DialDoc Workshop 2022, EMNLP BlackboxNLP Workshop 2021, 2023, NeurIPS Explainable AI Approaches for Debugging and Diagnosis Workshop 2021, Document-grounded Dialogue Workshop 2021, MASC-SLL 2020
- Reviewer: TACL 2023 - 2025, ICLR 2024 - 2025, COLM 2024, NeurIPS 2023, EMNLP 2023, ACL ARR 2021 - now, ACL 2020 - 2021, EMNLP BlackboxNLP Workshop 2022, CoNLL 2019 - 2020, NLPCC 2019 - 2021
- Diversity Representative for UVA Computer Science Graduate Student Group (CSGSG) Council, 2022