FAQs

If you previously had my WeChat, please add my new one by replacing the '-' symbol in my WeChat ID with '_'.

Education & Work


Zeyuan with his Master of Science degree, earned under the supervision of Prof. Silvio Micali @ MIT, 2012
MIT

Tsinghua University

Some Awards

A family photo of my gold and silver medals

In algorithm competitions, I was fortunate to win a few awards in my past life, including two IOI gold medals, a USACO world championship, an ACM/ICPC World Finals gold medal (2nd place), a Google Code Jam world runner-up title, and a USA MCM Outstanding Prize.

In research, I was supported by a Microsoft Young Fellow Award, a Simons Student Award, and a Microsoft Azure Research Award.

For a full list, click here.


Personal Information

Research Interests

My current research focuses on Physics of Language Models—a scientific framework to uncover the universal laws governing how large AI models learn, reason, and generalize. I design controlled experiments and probe neurons to reveal the mechanisms behind their strengths and failures, aiming to provide both theoretical insight and practical guidance—on data preparation, pre-/post-training, and architecture—for building better and safer AGI beyond today's AI systems. This line of work was featured in my ICML 2024 tutorial (see below).

Before that, I worked on the mathematics of deep learning, developing proofs on the learnability of neural networks to explain phenomena observed in practice. Our work on ensemble and knowledge distillation received an ICLR 2023 Best Paper Runner-Up award, and our COLT 2023 paper provided the first formal proof of why and how deep networks perform deep learning (e.g., outperform layer-wise training). This theoretical line inspired our LoRA fine-tuning method, now widely adopted across the AI community, and continues to shape the Physics of Language Models.
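For readers unfamiliar with LoRA: it freezes a pretrained weight matrix W and trains only a low-rank correction ΔW = (α/r)·BA, which drastically cuts the number of trainable parameters. The following is a minimal, hypothetical PyTorch sketch of that idea only; the class name LoRALinear and the hyperparameters here are illustrative, and the officially released implementation is the separate loralib package.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # Hypothetical sketch: a frozen linear layer plus a trainable
    # low-rank update, y = W x + (alpha/r) * B A x.
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)   # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # A starts small and random, B starts at zero, so the
        # low-rank correction is exactly zero before fine-tuning.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank path; only A and B get gradients.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Example: wrap a projection layer and fine-tune only A and B.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16.0)
out = layer(torch.randn(2, 768))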

Earlier in my career, I also worked on optimization theory and theoretical computer science.

Email