Tu Vu


I am an Assistant Professor at Virginia Tech (VT) and a Faculty Researcher at Google. Prior to joining VT, I was a Research Scientist at Google DeepMind for a year, after receiving my PhD in Computer Science from the University of Massachusetts Amherst, where I was advised by Mohit Iyyer.

🔍 My research aims to develop effective and efficient methods for advancing and democratizing artificial intelligence in the era of large language models (LLMs). Specific areas of focus include:

  • Advancing LLMs: improving LLMs’ critical capabilities, including reasoning and instruction following, and their emergent use in evaluation (e.g., LLM-as-a-Judge or LLM-as-a-Critic)
  • Transfer learning: reusing learned knowledge or components effectively across settings (e.g., tasks, languages, modalities, or models)
  • LLM updating: keeping LLMs current by efficiently incorporating factual and up-to-date information with minimal retraining
  • Parameter-efficient adaptation: efficiently adjusting LLMs to new distributions (e.g., unseen tasks, domains, or languages), especially in low-resource settings
⭐ For prospective PhD students

I plan to recruit one new PhD student every year. If you are interested in joining my group, please apply to the VT Graduate School and list me as a potential advisor. Please also check the application deadlines and the information for prospective students. Due to the high volume of emails I receive, I may not be able to respond to each one individually. Please don't be discouraged; I may still review your application.

⭐ For undergraduate and master's students at VT

I am happy to collaborate on research with current VT students who have at least one full academic year until graduation. If you are interested, feel free to email me. I will follow up if there is a good fit.


Recent news

Jun. 2025 :page_facing_up: One paper on model merging at scale to appear in TMLR! :tada:
Jun. 2025 :speaking_head: Invited guest lecture at The New Turing Institute
Jun. 2025 :page_facing_up: New preprint on a challenge benchmark for LLM reasoning over conflicting evidence
Apr. 2025 :moneybag: I received the New Faculty Mentoring Grant from VT :pray:
Mar. 2025 :page_facing_up: New preprint on fine-tuning transfer for efficient model development
Nov. 2024 :moneybag: Our lab received a research gift from Adobe :pray:
Nov. 2024 ✈️ Attended EMNLP 2024 in Miami, Florida 🌴
Nov. 2024 :speaking_head: Invited talk at Qualcomm Seminar Series
Oct. 2024 :speaking_head: Invited talk at Mila / McGill NLP seminar
Oct. 2024 :page_facing_up: New preprint on model merging at scale
Sep. 2024 :page_facing_up: One paper on foundational autoraters (FLAMe) to appear at EMNLP 2024! :tada:
Aug. 2024 :briefcase: I started my professorship at Virginia Tech
Jul. 2024 :page_facing_up: New preprint on Foundational Autoraters (FLAMe)
May 2024 :page_facing_up: FreshLLMs got accepted to ACL 2024 Findings! :tada:
Feb. 2024 :briefcase: I am now serving as an Area Chair for ACL Rolling Review (ARR)
Jan. 2024 :page_facing_up: Flan-MoE got accepted to ICLR 2024! :tada:
Nov. 2023 :speaking_head: Invited talk at Graph Neural Networks Reading Group, Google
Oct. 2023 :page_facing_up: New preprint on LLM factuality (FreshLLMs)
Aug. 2023 :briefcase: I joined Google DeepMind in Mountain View, CA as a Research Scientist
Jul. 2023 :mortar_board: I successfully defended my PhD thesis! :tada: :champagne:

Advisees

Group:
Weiyuan Chen (Incoming PhD student @ VT // advanced reasoning)
Jing Chen (Incoming PhD student @ VT // TBD)
Yu-Min Tseng (Incoming PhD student @ VT // TBD)
Noah Provenzano (1st-year MS student @ VT // advanced reasoning)
Rituraj Sharma (Senior & incoming MS student @ VT // advanced reasoning)
Nguyen Nguyen (Sophomore @ VT // search-augmented LLMs)
Thinh Pham (1st-year PhD student @ VT // search-augmented LLMs)
Rishab Balasubramanian (1st-year PhD student @ VT // cross-model knowledge transfer)
Quyet Do (1st-year PhD student @ VT // instruction following)
Pin-Jie Lin (1st-year PhD student @ VT // efficient model development)
Others:
Zhenting Qi (Student Researcher @ Google, Summer 2025)
Prateek Yadav (Research Intern @ Google DeepMind, Summer 2024 – Spring 2025)
Simeng Han (Student Researcher @ Google DeepMind, Summer 2024 – Spring 2025)
Salaheddin Alzubi (Master's student @ UMass Amherst, Fall 2022 – Spring 2023)
Dheeraj Mekala (PhD student @ UCSD, Spring – Summer 2022)

Preprints

  1. Preprint
    SealQA: Raising the Bar for Reasoning in Search-Augmented Language Models
    Thinh Pham, Nguyen Nguyen, Pratibha Zunjare, Weiyuan Chen, Yu-Min Tseng, and Tu Vu
    arXiv preprint arXiv:2506.01062, 2025
  2. Preprint
    Efficient Model Development through Fine-tuning Transfer
    Pin-Jie Lin, Rishab Balasubramanian, Fengyuan Liu, Nikhil Kandpal, and Tu Vu
    arXiv preprint arXiv:2503.20110, 2025

Selected publications

For an up-to-date list of my research papers, please see my Google Scholar profile. * denotes equal contribution.
  1. EMNLP
    Foundational Autoraters: Taming Large Language Models for Better Automatic Evaluation
    Tu Vu*, Kalpesh Krishna*, Salaheddin Alzubi, Chris Tar, Manaal Faruqui, and Yun-Hsuan Sung
    In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024
    // The top-performing generative model on RewardBench as of July 15, 2024, trained only on publicly available data
  2. TMLR
    What Matters for Model Merging at Scale?
    Prateek Yadav, Tu Vu, Jonathan Lai, Alexandra Chronopoulou, Manaal Faruqui, Mohit Bansal, and Tsendsuren Munkhdalai
    In Transactions on Machine Learning Research, 2025
  3. Technical report
    Gemini: A Family of Highly Capable Multimodal Models
    Google Gemini Team: Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew Dai, Anja Hauth, and others, including Tu Vu
    arXiv preprint arXiv:2312.11805, 2023
    // Google AI Blog
  4. ACL
    FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation
    Tu Vu, Mohit Iyyer, Xuezhi Wang, Noah Constant, Jerry Wei, Jason Wei, Chris Tar, Yun-Hsuan Sung, Denny Zhou, Quoc Le, and Thang Luong
    In Findings of the Association for Computational Linguistics: ACL 2024, 2024
    // Our dataset and method have inspired or been used in the development of Google’s Gemini, Perplexity.AI’s Online LLMs, You.com, and Contextual AI’s RAG 2.0
  5. ICML
    The Flan Collection: Designing Data and Methods for Effective Instruction Tuning
    Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V. Le, Barret Zoph, Jason Wei, and Adam Roberts
    In Proceedings of the 40th International Conference on Machine Learning, 2023
    // Google Research Blog
  6. ICLR
    Mixture-of-Experts Meets Instruction Tuning: A Winning Combination for Large Language Models
    Sheng Shen, Le Hou, Yanqi Zhou, Nan Du, Shayne Longpre, Jason Wei, Hyung Won Chung, Barret Zoph, William Fedus, Xinyun Chen, Tu Vu, Yuexin Wu, Wuyang Chen, Albert Webson, Yunxuan Li, Vincent Zhao, Hongkun Yu, Kurt Keutzer, Trevor Darrell, and Denny Zhou
    In Proceedings of the 12th International Conference on Learning Representations, 2024
  7. ACL
    SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer
    Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, and Daniel Cer
    In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022
    // Headlined Google AI’s Natural Language Accelerated Newsletter, Q1 2022
  8. EMNLP
    Overcoming Catastrophic Forgetting in Zero-Shot Cross-Lingual Generation
    Tu Vu, Aditya Barua, Brian Lester, Daniel Cer, Mohit Iyyer, and Noah Constant
    In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022
  9. EMNLP
    STraTA: Self-Training with Task Augmentation for Better Few-shot Learning
    Tu Vu, Thang Luong, Quoc Le, Grady Simon, and Mohit Iyyer
    In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021
  10. EMNLP
    Exploring and Predicting Transferability across NLP Tasks
    Tu Vu, Tong Wang, Tsendsuren Munkhdalai, Alessandro Sordoni, Adam Trischler, Andrew Mattarella-Micke, Subhransu Maji, and Mohit Iyyer
    In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020