I am an incoming PhD student at Princeton Language and Intelligence (PLI) at Princeton University, advised by Danqi Chen. My research interests broadly lie at the intersection of natural language processing and machine learning. I am currently interested in language models and agents; in particular, I aim to study the downstream effects of pretraining data and to develop methods that improve the capabilities and efficiency of reasoning models. Below are a few questions I am interested in:

Data:

  • How does pretraining data influence language models as sources of knowledge? (Dated Data)
  • Can we attribute content generated by models back to their pretraining corpus?
  • How do we best correct misalignments arising from knowledge conflicts in models' pretraining data?

Reasoning:

  • Can we make reasoning models more efficient by shifting away from a discrete token space and performing reasoning in a continuous latent space? (Compressed Chain of Thought)
  • How much better would reasoning models be if trained with process rewards rather than just outcome rewards?
  • How can we construct environments with verifiable rewards and/or induce structure into reasoning chains to make models more capable and efficient?

Previously, I received my Master’s at Johns Hopkins University, advised by Benjamin Van Durme. Before that, my research interests were in mathematics and fluid dynamics, which I pursued during my undergraduate studies at Duke University, advised by Tarek Elgindi.

I would love to discuss my research and any opportunities! Feel free to email me at jeffrey.{lastname}{11*12}@gmail.com. Outside of research, I am an avid climber and chess player.

News

May 2025: Final Answer accepted to ACL 2025!
Feb 2025: New preprint, Is That Your Final Answer?, released! [paper] [tweets]
Dec 2024: New preprint, Compressed Chain of Thought, released! [paper] [tweets]
Oct 2024: Attended CoLM 2024 and presented Dated Data (Outstanding Paper Award, 0.4%)
Jul 2024: Dated Data accepted to CoLM 2024!
Mar 2024: New preprint, Dated Data, is out! [paper] [tweets]