Jiayi Geng

 


jiayig@princeton.edu

I am an incoming Computer Science PhD student at the Language Technologies Institute at Carnegie Mellon University, advised by Prof. Graham Neubig.

My research goal is to explore and enable machine intelligence beyond the human level. More specifically, I am interested in (1) drawing inspiration from cognitive psychology to understand the deep interconnections between machines and humans, particularly the interplay between memory and reasoning, and (2) designing algorithms and methods to push the limits of LLM deliberation.

Before CMU, I received my Master’s degree from Princeton University, advised by Prof. Danqi Chen and Prof. Thomas L. Griffiths, and my Bachelor’s degree from McGill University, advised by Prof. Xue (Steve) Liu and Prof. Eric D. Kolaczyk.

News

2025-05 Graduated from Princeton University and started my PhD at the LTI, CMU!
2025-05 Our paper Mind Your Step (by Step): Chain-of-Thought can Reduce Performance on Tasks where Thinking Makes Humans Worse has been accepted at ICML 2025!
2025-01 Our paper Large Language Models Assume People are More Rational than We Really are has been accepted at ICLR 2025!
2024-05 Our paper Language Models as Science Tutors has been accepted at ICML 2024!
2023-09 Started my Master’s study at Princeton University! :tiger:

Selected publications

(* indicates equal contribution)

  1. Are Large Language Models Reliable AI Scientists? Assessing Reverse-Engineering of Black-Box Systems
    Jiayi Geng*, Howard Chen*, Dilip Arumugam, and Thomas L. Griffiths
    arXiv preprint arXiv:2505.17968, 2025
  2. Using the tools of cognitive science to understand large language models at different levels of analysis
    Alexander Ku, Declan Campbell, Xuechunzi Bai, Jiayi Geng, 8 authors, and Thomas L. Griffiths
    arXiv preprint arXiv:2503.13401, 2025
  3. Continual Memorization of Factoids in Large Language Models
    Howard Chen*, Jiayi Geng*, Adithya Bhaskar, Dan Friedman, and Danqi Chen
    arXiv preprint arXiv:2411.07175, 2024
  4. Mind Your Step (by Step): Chain-of-Thought can Reduce Performance on Tasks where Thinking Makes Humans Worse
    Ryan Liu*, Jiayi Geng*, Addison J. Wu, Ilia Sucholutsky, Tania Lombrozo, and Thomas L. Griffiths
    ICML, 2025
  5. TreeBoN: Enhancing Inference-Time Alignment with Speculative Tree-Search and Best-of-N Sampling
    Jiahao Qiu, Yifu Lu, Yifan Zeng, Jiacheng Guo, Jiayi Geng, Huazheng Wang, Kaixuan Huang, Yue Wu, and Mengdi Wang
    arXiv preprint arXiv:2410.16033, 2024
  6. Large Language Models Assume People are More Rational than We Really are
    Ryan Liu*, Jiayi Geng*, Joshua C. Peterson, Ilia Sucholutsky, and Thomas L. Griffiths
    ICLR, 2025
  7. Language Models as Science Tutors
    Alexis Chevalier, Jiayi Geng, Alexander Wettig, Howard Chen, 16 authors, Sanjeev Arora, and Danqi Chen
    ICML, 2024