Hao Peng


3314 SC

201 North Goodwin Avenue

Urbana, IL 61801

I am an Assistant Professor in the Department of Computer Science at the University of Illinois Urbana-Champaign (UIUC).

I received my Ph.D. from the University of Washington, where I was advised by Noah Smith, and my bachelor's degree from Peking University. I spent one year at the Allen Institute for Artificial Intelligence as a Young Investigator, and interned at Microsoft Research, Google, and DeepMind.

My research interests broadly span natural language processing and machine learning. My current focus is on making language AI more efficient and accessible, and on evaluating and improving large language models’ reasoning capabilities, factuality, and trustworthiness, as well as their applications in scientific domains.

Outside of work, I cater to the whims of a trio of furry overlords: Meera, Loki, and Sylvie. When they release me from their service, I cycle in the summer and (backcountry) ski in the winter.

news

Apr 12, 2024 I will give a talk at UChicago and TTIC.
Apr 11, 2024 I will give a talk at the Argonne National Laboratory.
Feb 15, 2024 Pretrained LLMs can be adapted to handle 128K-long contexts with a surprisingly small amount of continual pretraining. Check out our new preprint!
Feb 1, 2024 LLMs become better agents when they take actions by generating Python code: preprint. Feel free to chat with our demo.
Oct 27, 2023 I gave a talk at the Generative AI Workshop at NCSA.

recent publications

  1. Executable Code Actions Elicit Better LLM Agents
    Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang, Yunzhu Li, Hao Peng, and Heng Ji
    arXiv preprint, 2024
  2. Data Engineering for Scaling Language Models to 128K Context
    Yao Fu, Rameswar Panda, Xinyao Niu, Xiang Yue, Hannaneh Hajishirzi, Yoon Kim, and Hao Peng
    arXiv preprint, 2024
  3. Examining LLMs’ Uncertainty Expression Towards Questions Outside Parametric Knowledge
    Genglin Liu, Xingyao Wang, Lifan Yuan, Yangyi Chen, and Hao Peng
    arXiv preprint, 2024
  4. TRAM: Bridging Trust Regions and Sharpness Aware Minimization (spotlight)
    Tom Sherborne, Naomi Saphra, Pradeep Dasigi, and Hao Peng
    In Proceedings of the International Conference on Learning Representations (ICLR), 2024
  5. MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback
    Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, and Heng Ji
    In Proceedings of the International Conference on Learning Representations (ICLR), 2024
  6. CRAFT: Customizing LLMs by Creating and Retrieving from Specialized Toolsets
    Lifan Yuan, Yangyi Chen, Xingyao Wang, Yi R. Fung, Hao Peng, and Heng Ji
    In Proceedings of the International Conference on Learning Representations (ICLR), 2024
  7. FiLM: Fill-in Language Models for Any-Order Generation
    Tianxiao Shen, Hao Peng, Ruoqi Shen, Yao Fu, Zaid Harchaoui, and Yejin Choi
    arXiv preprint, 2023