Khanh X. Nguyen

kxnguyen AT berkeley.edu

I am a Postdoctoral Research Fellow at the Center for Human-Compatible Artificial Intelligence (CHAI) at the University of California, Berkeley. I am fortunate to be mentored by Prof. Stuart Russell. Previously, I was a postdoc in the Princeton NLP group under the supervision of Prof. Karthik Narasimhan. I completed my PhD at the University of Maryland, College Park, advised by Prof. Hal Daumé III.

My research aims to create AI agents that can reliably serve and collaborate with humans. To accomplish this goal, I focus on enhancing human-AI communication, building agents that are both able and willing to communicate effectively with humans.

I am currently pushing forward three directions:

  • Learning from human feedback: I conducted the first simulated study on using reinforcement learning to train text generators with noisy human feedback (RLHF) [EMNLP’17]. Since then, I have come to believe that teaching agents with rewards alone is not a good idea, because rewards are a terribly limited medium of communication. I am developing frameworks that enable learning from rich, abstract language [ICML’21, ArXiv’23].
  • Learning to ask questions: It is a mistake to think that only humans should ask AI for help and not the reverse. By asking a question, an agent can (i) express its uncertainties (not just its uncertainty), and (ii) obtain information that expands its capabilities. The result is more safety and more utility! I have authored a series of papers to disseminate this message [EMNLP’15, CVPR’19, EMNLP’19, ICML’22].
  • Modeling humans and the world: I show that vanilla language models implement only a very primitive “model of thought” [ToM@ICML’23]. To become more reliable, they need to develop robust models of the world and of the humans in it. I have recently focused on improving this capability for instruction-generation models [ACL’23].

More facts:

  • My real name is Nguyễn Xuân Khánh :loud_sound:. My first name is usually confused with Khan or Kahn :(
  • I was born in Việt Nam :vietnam:, a peaceful country (click here for inspiration to visit us).
  • I am also proud to be a PTNK (Phổ Thông Năng Khiếu) alumnus.

news

Dec 20, 2022 New paper on task-oriented cognitive capabilities. TL;DR: we identified and mitigated a deficiency in the pragmatic capability of instruction-generation models. Received an outstanding paper award at the ToM workshop at ICML 2023.
Aug 17, 2022 I will be organizing the InterNLP workshop at NeurIPS 2022. Please submit your papers if interested!

selected publications

  1. Reinforcement Learning for Bandit Neural Machine Translation with Simulated Human Feedback
    Khanh Nguyen, Hal Daumé III, and Jordan Boyd-Graber
    EMNLP, 2017
    • First simulated study on training text generators with reinforcement learning from noisy human feedback (RLHF)
    • Later refinements of this approach now power large language models
  2. Posterior calibration and exploratory analysis for natural language processing models
    Khanh Nguyen, and Brendan O’Connor
    EMNLP, 2015
    • First paper on calibration for structured prediction
    • Inspires subsequent studies on calibration of neural networks, out-of-distribution detection methods, calibration theories, etc.
  3. Interactive Learning from Activity Description
    Khanh Nguyen, Dipendra Misra, Robert Schapire, Miro Dudík, and Patrick Shafto
    ICML, 2021
    • One of the first frameworks for learning from language feedback with theoretical guarantees
  4. Help, Anna! Visual Navigation with Natural Multimodal Assistance via Retrospective Curiosity-Encouraging Imitation Learning
    Khanh Nguyen, and Hal Daumé III
    EMNLP, 2019
    • First paper that introduces the task of vision-language navigation with human assistance.
    • Evaluates collaborative capability rather than autonomous capability