Tu Vu

I am an Assistant Professor at Virginia Tech (VT) and a Faculty Researcher at Google. Prior to joining VT, I was a Research Scientist at Google DeepMind for a year, after receiving my PhD in Computer Science from the University of Massachusetts Amherst, where I was advised by Mohit Iyyer.
🔍 My research aims to develop effective and efficient methods for advancing and democratizing artificial intelligence in the era of large language models (LLMs). Specific areas of focus include:
- Advancing LLMs: improving LLMs' critical capabilities, including reasoning and instruction following, and their emergent use in evaluation (e.g., LLM-as-a-Judge or LLM-as-a-Critic)
- Transfer learning: reusing learned knowledge or components effectively across settings (e.g., tasks, languages, modalities, or models)
- LLM updating: keeping LLMs current by efficiently incorporating factual and up-to-date information with minimal retraining
- Parameter-efficient adaptation: adjusting LLMs to new distributions (e.g., unseen tasks, domains, or languages) efficiently, especially in low-resource settings
⭐ For prospective PhD students
I plan to recruit one new PhD student every year. If you are interested in joining my group, please apply to the VT Graduate School and list me as a potential advisor. Please also check out the application deadlines and information for prospective students. Due to the high volume of emails I receive, I may not be able to respond to each one individually; please don't be discouraged — I may still review your application.
⭐ For undergraduate and Master's students at VT
I am happy to collaborate on research with current VT students who have at least one full academic year until graduation. If you are interested, feel free to email me. I will follow up if there is a good fit.
Recent news
Nov. 2024: ✈️ Attended EMNLP 2024 in Miami, Florida 🌴
Teaching
Advisees
Group:
- Weiyuan Chen (incoming PhD student @ VT // advanced reasoning)
- Jing Chen (incoming PhD student @ VT // TBD)
- Yu-Min Tseng (incoming PhD student @ VT // TBD)
- Noah Provenzano (1st-year MS student @ VT // advanced reasoning)
- Rituraj Sharma (senior & incoming MS student @ VT // advanced reasoning)
- Nguyen Nguyen (sophomore @ VT // search-augmented LLMs)
- Thinh Pham (1st-year PhD student @ VT // search-augmented LLMs)
- Rishab Balasubramanian (1st-year PhD student @ VT // cross-model knowledge transfer)
- Quyet Do (1st-year PhD student @ VT // instruction following)
- Pin-Jie Lin (1st-year PhD student @ VT // efficient model development)

Past mentees and interns:
- Zhenting Qi (Student Researcher @ Google, Summer 2025)
- Prateek Yadav (Research Intern @ Google DeepMind, Summer 2024 – Spring 2025)
- Simeng Han (Student Researcher @ Google DeepMind, Summer 2024 – Spring 2025)
- Salaheddin Alzubi (Master's student @ UMass Amherst, Fall 2022 – Spring 2023)
- Dheeraj Mekala (PhD student @ UCSD, Spring – Summer 2022)
Preprints
Selected publications
For an up-to-date list of my research papers, please see my Google Scholar profile. * denotes equal contribution.

- EMNLP: Foundational Autoraters: Taming Large Language Models for Better Automatic Evaluation. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024
- Technical report: Gemini: A Family of Highly Capable Multimodal Models. arXiv preprint arXiv:2312.11805, 2023
- ACL: FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation. In Findings of the Association for Computational Linguistics: ACL 2024, 2024