About Me
Welcome to my personal website! I’m Haizhou Shi (史海舟), a third-year Ph.D. student in the CS Department at Rutgers University, advised by Prof. Hao Wang. My research focuses on developing reliable and efficient methods for adapting machine learning models. I’m currently exploring two main areas: continual training of Large Language Models (LLMs), including continued pre-training, post-training, and alignment; and uncertainty estimation in LLMs through Bayesian Deep Learning approaches.
Prior to Rutgers, I received my M.S. and B.S. degrees from the CS Department at Zhejiang University in 2022 and 2019, respectively, where I worked with Prof. Siliang Tang and Prof. Yueting Zhuang. My research there focused on learning generalizable representations across various paradigms, including unsupervised, weakly-supervised, federated, and continual learning.
News
- [02/2025] Our paper on Training-Free Bayesianization for LLMs got accepted at the ICLR 2025 “QUESTION” Workshop!
- [01/2025] I will join Salesforce AI Research as a research intern in summer 2025!
- [01/2025] Our paper on Benchmarking Multimodal LLMs got accepted at NAACL 2025!
- [01/2025] I gave a talk on Bayesian Uncertainty Estimation for LLMs at Red Hat.
- [09/2024] Our paper on Bayesian Low-Rank Adaptation for LLMs got accepted at NeurIPS 2024!
- [04/2024] Our survey on Continual Learning of LLMs is out!
- [01/2024] I will join Morgan Stanley ML Research as a research intern in summer 2024!
- [09/2023] Our paper on Domain Incremental Learning got accepted at NeurIPS 2023!
- [09/2022] I was fortunate to join Rutgers as a Ph.D. student working with Prof. Hao Wang!
- [02/2022] Our paper on Federated Representation Learning got accepted at the FL-AAAI 2022 workshop as an oral presentation!
- [12/2021] Our paper on Lightweight Representation Learning got accepted at AAAI 2022 as an oral presentation (top ~1.6%)!
Selected Publications
(selection is based entirely on my own bias; “*” denotes equal contribution)

Multimodal Needle in a Haystack: Benchmarking Long-Context Capability of Multimodal LLMs
Hengyi Wang, Haizhou Shi, Shiwei Tan, Weiyi Qin, Wenyuan Wang, Tunyu Zhang, Akshay Nambi, Tanuja Ganu, Hao Wang
Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics (NAACL), 2025.
[paper] [code]

Towards Communication-Efficient and Privacy-Preserving Federated Representation Learning
Haizhou Shi, Youcai Zhang, Zijin Shen, Siliang Tang, Yaqian Li, Yandong Guo, Yueting Zhuang
International Workshop on Trustable, Verifiable and Auditable Federated Learning in Conjunction with AAAI (FL-AAAI), 2022.
[paper] [code] [talk]

Revisiting Catastrophic Forgetting in Class Incremental Learning
Zixuan Ni*, Haizhou Shi*, Siliang Tang, Longhui Wei, Qi Tian, Yueting Zhuang
arXiv preprint, 2021.
[paper] [code]