Publications
LeTI: Learning to Generate from Textual Interactions. Xingyao Wang, Hao Peng, Reyhaneh Jabbarvand, and Heng Ji. Preprint, 2023.
Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback. Yao Fu, Hao Peng, Tushar Khot, and Mirella Lapata. Preprint, 2023.
Efficiency Pentathlon: A Standardized Arena for Efficiency Evaluation. Hao Peng, Qingqing Cao, Jesse Dodge, Matthew E. Peters, Jared Fernandez, Tom Sherborne, Kyle Lo, Sam Skjonsberg, Emma Strubell, Darrell Plessas, Iz Beltagy, Evan Pete Walsh, Noah A. Smith, and Hannaneh Hajishirzi. Preprint, 2023.
Chain-of-Thought Hub: A Continuous Effort to Measure Large Language Models’ Reasoning Performance. Yao Fu, Litu Ou, Mingyu Chen, Yuhao Wan, Hao Peng, and Tushar Khot. Challenges of Deploying Generative AI Workshop at ICML, 2023. [Benchmark]
Specializing Smaller Language Models towards Multi-Step Reasoning. Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, and Tushar Khot. In Proceedings of the International Conference on Machine Learning (ICML), 2023. Oral.
Complexity-Based Prompting for Multi-Step Reasoning. Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and Tushar Khot. In Proceedings of the International Conference on Learning Representations (ICLR), 2023.
Transparency Helps Reveal When Language Models Learn Meaning. Zhaofeng Wu, William Merrill, Hao Peng, Iz Beltagy, and Noah A. Smith. Transactions of the Association for Computational Linguistics (TACL), 2022.
How Much Does Attention Actually Attend? Questioning the Importance of Attention in Pretrained Transformers. Michael Hassid, Hao Peng, Daniel Rotem, Jungo Kasai, Ivan Montero, Noah A. Smith, and Roy Schwartz. In Findings of the Conference on Empirical Methods in Natural Language Processing (EMNLP Findings), 2022.
Modeling Context With Linear Attention for Scalable Document-Level Translation. Zhaofeng Wu, Hao Peng, Nikolaos Pappas, and Noah A. Smith. In Findings of the Conference on Empirical Methods in Natural Language Processing (EMNLP Findings), 2022.
Twist Decoding: Diverse Generators Guide Each Other. Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Hao Peng, Ximing Lu, Dragomir Radev, Yejin Choi, and Noah A. Smith. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
ABC: Attention with Bounded-memory Control. Hao Peng, Jungo Kasai, Nikolaos Pappas, Dani Yogatama, Zhaofeng Wu, Lingpeng Kong, Roy Schwartz, and Noah A. Smith. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), 2022.
Tailor: Generating and Perturbing Text with Semantic Controls. Tongshuang Wu, Alexis Ross, Hao Peng, Matthew E. Peters, and Matt Gardner. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), 2022.
Finetuning Pretrained Transformers into RNNs. Jungo Kasai, Hao Peng, Yizhe Zhang, Dani Yogatama, Gabriel Ilharco, Nikolaos Pappas, Yi Mao, Weizhu Chen, and Noah A. Smith. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021.
Random Feature Attention. Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah A. Smith, and Lingpeng Kong. In Proceedings of the International Conference on Learning Representations (ICLR), 2021. Spotlight.
Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation. Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, and Noah A. Smith. In Proceedings of the International Conference on Learning Representations (ICLR), 2021.
Contextualized Perturbation for Textual Adversarial Attack. Dianqi Li, Yizhe Zhang, Hao Peng, Liqun Chen, Chris Brockett, Ming-Ting Sun, and Bill Dolan. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2021.
Infusing Finetuning with Semantic Dependencies. Zhaofeng Wu, Hao Peng, and Noah A. Smith. Transactions of the Association for Computational Linguistics (TACL), 2020.
A Mixture of h − 1 Heads is Better than h Heads. Hao Peng, Roy Schwartz, Dianqi Li, and Noah A. Smith. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), 2020.
PaLM: A Hybrid Parser and Language Model. Hao Peng, Roy Schwartz, and Noah A. Smith. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019.
RNN Architecture Learning with Sparse Regularization. Jesse Dodge, Roy Schwartz, Hao Peng, and Noah A. Smith. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019.
Text Generation with Exemplar-based Adaptive Decoding. Hao Peng, Ankur P. Parikh, Manaal Faruqui, Bhuwan Dhingra, and Dipanjan Das. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2019.
Rational Recurrences. Hao Peng, Roy Schwartz, Sam Thomson, and Noah A. Smith. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2018.
Backpropagating through Structured Argmax using a SPIGOT. Hao Peng, Sam Thomson, and Noah A. Smith. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), 2018. Best Paper Honorable Mention.
Learning Joint Semantic Parsers from Disjoint Data. Hao Peng, Sam Thomson, Swabha Swayamdipta, and Noah A. Smith. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2018.
“You are no Jack Kennedy”: On Media Selection of Highlights from Presidential Debates. Chenhao Tan, Hao Peng, and Noah A. Smith. In Proceedings of The Web Conference (WWW), 2018.
Deep Multitask Learning for Semantic Dependency Parsing. Hao Peng, Sam Thomson, and Noah A. Smith. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), 2017.
News Citation Recommendation with Implicit and Explicit Semantics. Hao Peng, Jing Liu, and Chin-Yew Lin. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), 2016.
A Convolutional Attention Network for Extreme Summarization of Source Code. Miltiadis Allamanis, Hao Peng, and Charles Sutton. In Proceedings of the International Conference on Machine Learning (ICML), 2016.
Discriminative Neural Sentence Modeling by Tree-based Convolution. Lili Mou*, Hao Peng*, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin (*: Equal contribution). In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2015.
Classifying Relations via Long Short Term Memory Networks along Shortest Dependency Paths. Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2015.