Hi! I am an assistant professor in the Natural Language Processing Department at MBZUAI. I received my Ph.D. from KAUST, supervised by Prof. Xiangliang Zhang and Prof. Xin Gao. Prior to KAUST, I received my B.S. in Computer Science from Wuhan University, supervised by Prof. Qian Wang, and my M.S. from Peking University, supervised by Prof. Rui Yan and Prof. Dongyan Zhao.
My research focuses on human-centered natural language processing, aiming to make large language models both trustworthy and aligned with human understanding. I work on model safety, transparency, and reliability by studying jailbreak attacks, biases, and multimodal risks, and by improving response accuracy in high-stakes domains. I also explore how models can better capture user intentions, emotions, and cultural contexts, enabling communication that reflects human values and empathy.
I am actively looking for highly motivated PhD students, Master's students, and interns. All stipends are tax-free. If you are interested in working with me on generative language models, LLM-based agents, and related interdisciplinary topics, please feel free to contact me at xiuying.chen@mbzuai.ac.ae. Here are some suggestions for preparing the email:
- Please use the subject line: "RA/Master/PhD Prospective Student $YourName"
- Please highlight your publications (if applicable), GPA/ranking, TOEFL or IELTS scores, and anything else important
- Attach your CV
🔥 News
- 2025.6: 🎉 One paper accepted by ACM MM
- 2025.5: 🎉 Five papers accepted by ACL and five papers accepted by ACL Findings
- 2025.4: 🎉 One paper accepted by Nature Computational Science (IF 12) and one by TOIS
- 2025.4: 🎉 Two papers accepted by SIGIR 2025
- 2025.1: 🎉 One paper accepted by NAACL 2025
- 2025.1: 🎉 One paper accepted by Communications Chemistry (IF 6.581)
📝 Recent Publications
🧑‍🎨 Trustworthiness
- Nature Computational Science 2025: Evaluating and mitigating bias in AI-based medical text generation. Xiuying Chen, Tairan Wang, Juexiao Zhou, Zirui Song, Xin Gao, Xiangliang Zhang
- ACL 2025: Shaping the Safety Boundaries: Understanding and Defending Against Jailbreaks in Large Language Models. Lang Gao, Jiahui Geng, Xiangliang Zhang, Preslav Nakov, Xiuying Chen
- ACL 2025: Flipping Knowledge Distillation: Leveraging Small Models' Expertise to Enhance LLMs in Text Matching. Mingzhe Li, Jing Xiang, Qishen Zhang, Kaiyang Wan, Xiuying Chen
- NAACL 2025: Hazards in Daily Life? Enabling Robots to Proactively Detect and Resolve Anomalies. Zirui Song, Guangxian Ouyang, …, Xiuying Chen
- COLING 2025: Decoding Echo Chambers: LLM-Powered Simulations Revealing Polarization in Social Networks. Chenxi Wang, Zongfang Liu, Dequan Yang, Xiuying Chen
🎙 Personalization
- ACM MM 2025: From Individuals to Crowds: Dual-Level Public Response Prediction in Social Media. Jinghui Zhang, Kaiyang Wan, Longwei Xu, Ao Li, Zongfang Liu, Xiuying Chen
- ACL Findings 2025: Beyond Profile: From Surface-Level Facts to Deep Persona Simulation in LLMs. Zixiao Wang, Duzhen Zhang, Ishita Agrawal, Shen Gao, Le Song, Xiuying Chen
📒 Survey
- IJCAI Survey Track 2024: Large language model based multi-agents: A survey of progress and challenges. Taicheng Guo, Xiuying Chen, Yaqi Wang, Ruidi Chang, Shichao Pei, Nitesh V. Chawla, Olaf Wiest, Xiangliang Zhang
- arXiv 2025: Injecting domain-specific knowledge into large language models: A comprehensive survey. Zirui Song, Bin Yan, Yuhan Liu, Miao Fang, Mingzhe Li, Rui Yan, Xiuying Chen
👥 HALO Lab
HALO Lab (Human-Aligned Language Optimization Lab)
Incoming PhD Students at MBZUAI: Zirui Song, Lang Gao, Jinghui Zhang
PhD Students (as co-supervisor): Zixiao Wang (Co-supervised with Kun Zhang), Chong Tian (Co-supervised with Qirong Ho)
Master's Students: Chenxi Wang, Ishita Agarwal, Besher Hassan (Co-supervised with Fajri Koto), Muhammad Cendekia Airlangga (Co-supervised with Kentaro Inui), Dequan Yang
Visiting students: Zixiang Xu, Yanbo Wang, Akash Ghosh
Past Research Associates: Kaiyang Wan, Yougang Lyu, Xueran Han
Alumni: Guoming Li
Feel free to talk to them if you want to get to know our lab better!
💬 Invited Talks and Tutorials
- 2024.05, Large Language Model–Driven Agents and Role-Playing, Chinese Information Processing Society of China, Urumqi
- 2023.09, AI Generated Text: Unlocking Accuracy, Trust, and Progress, Wuhan University
- 2023.03, Improving Abstractive Summarization Systems by Addressing Informativeness, Faithfulness, and Robustness, Renmin University of China
- 2023.03, Improving Abstractive Summarization Systems by Addressing Informativeness, Faithfulness, and Robustness, Shandong University
⭐ Services
- 2025, Action Editor, ACL, EMNLP; Associate Editor, Health Information Science and Systems
- 2024, Program Committee, ICLR, ICML; Action Editor, ACL, EMNLP
- 2023, Program Committee, AAAI, ACL, EMNLP, IJCAI, SIGIR, NeurIPS
- 2022, Program Committee, AAAI, ACL, EMNLP, IJCAI, SIGIR
- 2021, Program Committee, AAAI, ACL, EMNLP, SIGIR; Senior Program Committee, IJCAI
🎨 Miscellaneous
I enjoy reading novels as much as I love research. "Romance of the Three Kingdoms," "Dream of the Red Chamber," "Fortress Besieged," "Laughing in the Wind," and "Gone with the Wind" are among my favorite books. To me, how to tell your story, or present your research, in a way that draws others in and excites them is an eternal and fascinating question. The goal I pursue in writing is to "simplify like the autumn tree, and innovate like the February flower," a Chinese idiom that means to streamline and refine while also introducing fresh and original ideas.