Home

Furong Huang is an Associate Professor in the Department of Computer Science at the University of Maryland. Specializing in trustworthy machine learning, AI for sequential decision-making, and high-dimensional statistics, Dr. Huang focuses on applying theoretical principles to solve practical challenges in contemporary computing.

Her research centers on creating reliable and interpretable machine learning models that operate effectively in real-world settings. She has also made significant strides in sequential decision-making, aiming to develop algorithms that optimize performance and adhere to ethical and safety standards.

Student Highlights: Seven exceptional students from our group are on the job market this year, ready to bring their expertise to new frontiers! Six of them are pursuing industrial roles, while one will be on the academic job market, seeking postdoctoral and faculty positions.

These talented researchers work in cutting-edge areas, including AI Security, AI Agents, Alignment, Ethics, Fairness, and Responsible AI. They have deep experience with Generative AI, LLMs, VLMs, and VLAs, driving innovations in Weak-to-Strong Generalization and AI safety.

Don’t miss this opportunity to connect with these future leaders in AI. Check out their profiles here!

Graduating Lab Members

Academic Positions

  • 2024 - present Tenured Associate Professor

    University of Maryland
    Department of Computer Science

  • 2017 - 2024 Tenure-Track Assistant Professor

    University of Maryland
    Department of Computer Science

  • 2016 - 2017 Postdoctoral Researcher

    Microsoft Research NYC
    Mentors: John Langford, Robert Schapire

  • 2010 - 2016 Doctoral Researcher

    University of California, Irvine
    Advisor: Anima Anandkumar

Recent News

Jan. 2025

Paper Acceptance

4 papers accepted to ICLR 2025, 5 to NAACL 2025, 1 to ICRA 2025, 1 to TKDD 2025, and 1 to AISTATS 2025.

Dec. 2024

Paper Acceptance

2 papers accepted to AAAI 2025.

Oct. 2024

Student Highlights

We are thrilled to announce that seven exceptional students from our group are on the job market this year: six pursuing industrial roles and one seeking postdoctoral and faculty positions. Their expertise spans AI Security, AI Agents, Alignment, Ethics, Fairness, and Responsible AI, with deep experience in Generative AI, LLMs, VLMs, and VLAs. Check out their profiles!

Graduating Lab Members
Oct. 2024

Presenter

I gave a keynote at the New York Academy of Sciences' 15th Annual Machine Learning Symposium, titled "Towards Generative AI Security: An Interplay of Stress-Testing and Alignment".

Event website
Sep. 2024

Paper Acceptance

5 papers accepted to the NeurIPS 2024 Main Track and 1 paper accepted to the Datasets and Benchmarks Track.

A post introducing these accepted papers.
Sep. - Nov. 2024

Competition Organizer

Are invisible watermarks in AI-generated content truly effective in distinguishing AI-generated images from real ones? We’re hosting a NeurIPS competition "Erasing the Invisible: A Stress-Test Challenge for Image Watermarks" to stress-test these watermarks, and we want you to put them to the test! Here’s how it works: we provide watermarked images, and your task is to remove the watermarks. If your approach outperforms the rest, you’ll win a prize and the chance to present your work at our NeurIPS workshop! Spread the word and join the challenge!

Website: https://erasinginvisible.github.io/

For an in-depth look at what goes on behind the scenes of organizing this competition, check out our blog post.
Sep. 2024

Paper Acceptance

2 papers accepted to EMNLP 2024 Findings.

Jul. 2024

Paper Acceptance

2 papers accepted to the First Conference on Language Modeling (COLM 2024).

Jun. 2024

Presenter

I gave a keynote at the 1st CVPR Workshop on Dataset Distillation, titled "Advancing AI with Data-Centric Strategies: Boosting Efficiency, Generalization, and Trust".

Workshop website
May 2024

Career

I am thrilled to announce my promotion to Associate Professor with tenure in the Department of Computer Science at the University of Maryland, effective July 1, 2024. I am deeply grateful to my mentors, collaborators, and students for their unwavering support and encouragement throughout this journey.

May 2024

Presenter

I gave a talk at the AI Community of Practice Seminar at the U.S. Securities and Exchange Commission (SEC), titled "Integrity in AI: Multi-Modality Approaches to Combat Misinformation for Content Authenticity".

May 2024

Paper Acceptance

2 papers accepted to the main conference of ACL 2024.

May 2024

AskScience AMA Series

I participated in a Reddit AMA (Ask Me Anything) session on May 14 from 2-4 p.m. Eastern Time, where I answered questions about AI and machine learning.

Link Here
May 2024

Paper Acceptance

9 papers accepted to the main conference of ICML 2024.

Mar. 2024

Presenter

I gave a talk at the Qualcomm AI Security Lecture Series on March 8, 2024, titled "Invisible Foes: Crafting and Cracking AI in the Shadows of Language -- Poison Finetuning Data and Jailbreak Prompts for LLMs".

Feb. 2024

Presenter

I gave a seminar talk at the Values-Centered Artificial Intelligence (VCAI) seminar series, College Park, MD, on Feb 1, 2024, titled "Algorithmic Fairness in an Ever-Changing World".

Link to the Talk Post
Jan. 2024

New Benchmark Paper

Our Mementos benchmark, which tests the sequential reasoning capabilities of multimodal large language models on image sequences, is out. Find the arXiv paper, GitHub code and data, visualizations, and leaderboard at the project page: https://mementos-bench.github.io/.

Post on Social Media
Jan. 2024

New Benchmark Paper

Our WAVES benchmark on stress-testing image watermarks is out. Find the arXiv paper, GitHub code, Hugging Face data, visualizations, and leaderboard at the project page: https://wavesbench.github.io/.

Post on Social Media
Jan. 2024

Paper Acceptance

10 papers accepted to the main conference of ICLR 2024: 2 as spotlights and 8 as posters. For more details, click on the Post on Social Media.

Post on Social Media
Jan. 2024

Organizer

Chair and organizer of the NSF-Amazon Fairness in AI Principal Investigator Meeting, Jan 9-10, 2024.

Post on Social Media

Selected Publications

TraceVLA: Visual Trace Prompting Enhances Spatial-Temporal Awareness for Generalist Robotic Policies

The Thirteenth International Conference on Learning Representations (ICLR), 2025.
Zheng, Ruijie, Yongyuan Liang, Shuaiyi Huang, Jianfeng Gao, Hal Daumé III, Andrey Kolobov, Furong Huang, and Jianwei Yang.

GenARM: Reward Guided Generation with Autoregressive Reward Model for Test-Time Alignment

The Thirteenth International Conference on Learning Representations (ICLR), 2025.
Xu, Yuancheng, Udari Madhushani Sehwag, Alec Koppel, Sicheng Zhu, Bang An, Furong Huang, and Sumitra Ganesh.

Collab: Controlled Decoding using Mixture of Agents for LLM Alignment

The Thirteenth International Conference on Learning Representations (ICLR), 2025.
Chakraborty, Souradip, Sujay Bhatt, Udari Madhushani Sehwag, Soumya Suvra Ghosal, Jiahao Qiu, Mengdi Wang, Dinesh Manocha, Furong Huang, Alec Koppel, and Sumitra Ganesh.

Benchmarking Vision Language Model Unlearning via Fictitious Facial Identity Dataset

The Thirteenth International Conference on Learning Representations (ICLR), 2025.
Yingzi Ma, Jiongxiao Wang, Fei Wang, Siyuan Ma, Jiazhao Li, Jinsheng Pan, Xiujun Li, Furong Huang, Lichao Sun, Bo Li, Yejin Choi, Muhao Chen, and Chaowei Xiao.

World Models with Hints of Large Language Models for Goal Achieving

The 2025 Annual Conference of the Nations of the Americas Chapter of the ACL (NAACL), 2025.
Liu, Zeyuan, Maggie Z. Huan, Xiyao Wang, Jiafei Lyu, Jian Tao, Xiu Li, Furong Huang, and Huazhe Xu.

PoisonedParrot: Subtle Data Poisoning Attacks to Elicit Copyright-Infringing Content from Large Language Models

The 2025 Annual Conference of the Nations of the Americas Chapter of the ACL (NAACL), 2025.
Panaitescu-Liess, Michael-Andrei, Pankayaraj Pathmanathan, Yigitcan Kaya, Zora Che, Bang An, Sicheng Zhu, Aakriti Agrawal, Furong Huang.

Large Language Models and Causal Inference in Collaboration: A Comprehensive Survey

The 2025 Annual Conference of the Nations of the Americas Chapter of the ACL (NAACL), 2025.
Liu, Xiaoyu, Paiheng Xu, Junda Wu, Jiaxin Yuan, Yifan Yang, Yuhang Zhou, Fuxiao Liu, Tianrui Guan, Haoliang Wang, Tong Yu, Julian McAuley, Wei Ai, Furong Huang.

Enhancing Visual-Language Modality Alignment in Large Vision Language Models via Self-Improvement

The 2025 Annual Conference of the Nations of the Americas Chapter of the ACL (NAACL), 2025.
Wang, Xiyao, Jiuhai Chen, Zhaoyang Wang, Yuhang Zhou, Yiyang Zhou, Huaxiu Yao, Tianyi Zhou, Tom Goldstein, Parminder Bhatia, Taha Kass-Hout, Furong Huang, Cao Xiao.

MergeME: Model Merging Techniques for Homogeneous and Heterogeneous MoEs

The 2025 Annual Conference of the Nations of the Americas Chapter of the ACL (NAACL), 2025.
Zhou, Yuhang, Giannis Karamanolakis, Victor Soto, Anna Rumshisky, Mayank Kulkarni, Furong Huang, Wei Ai, Jianhua Lu.

Statistical Guarantees for Lifelong Reinforcement Learning using PAC-Bayesian Theory

The 28th International Conference on Artificial Intelligence and Statistics (AISTATS), 2025.
Zhang, Zhi, Chris Chow, Yasi Zhang, Yanchao Sun, Haochen Zhang, Eric Hanchen Jiang, Han Liu, Furong Huang, Yuchen Cui, and Oscar Hernan Madrid Padilla.

GFairHint: Improving Individual Fairness for Graph Neural Networks via Fairness Hint

ACM Transactions on Knowledge Discovery from Data (TKDD), 2025.
Xu, Paiheng, Yuhang Zhou, Bang An, Wei Ai, and Furong Huang.

Safety Guaranteed Robust Multi-Agent Reinforcement Learning with Hierarchical Control for Connected and Automated Vehicles

IEEE International Conference on Robotics and Automation (ICRA), 2025.
Zhang, Zhili, H M Sabbir Ahmad, Ehsan Sabouni, Yanchao Sun, Furong Huang, Wenchao Li, and Fei Miao.

Is poisoning a real threat to DPO? Maybe more so than you think.

AAAI 2025 AI Alignment Track (AAAI), 2025.
Pathmanathan, Pankayaraj, Souradip Chakraborty, Xiangyu Liu, Yongyuan Liang, and Furong Huang.

Can Watermarking Large Language Models Prevent Copyrighted Text Generation and Hide Training Data?

The 39th Annual AAAI Conference on Artificial Intelligence (AAAI), 2025.
Panaitescu-Liess, Michael-Andrei, Zora Che, Bang An, Yuancheng Xu, Pankayaraj Pathmanathan, Souradip Chakraborty, Sicheng Zhu, Tom Goldstein, and Furong Huang.

Easy2Hard-Bench: Standardized Difficulty Labels for Profiling LLM Performance and Generalization

The Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks Track, 2024.
Ding, Mucong, Chenghao Deng, Jocelyn Choo, Zichu Wu, Aakriti Agrawal, Avi Schwarzschild, Tianyi Zhou, Tom Goldstein, John Langford, Anima Anandkumar, and Furong Huang.
Publisher's website

Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models

The Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS), 2024.
Xu, Yuancheng, Jiarui Yao, Manli Shu, Yanchao Sun, Zichu Wu, Ning Yu, Tom Goldstein, and Furong Huang.
Publisher's website

Transfer Q-star: Principled Decoding for LLM Alignment

The Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS), 2024.
Chakraborty, Souradip, Soumya Suvra Ghosal, Ming Yin, Dinesh Manocha, Mengdi Wang, Amrit Bedi, and Furong Huang.
Publisher's website

ACT or Fiction: Can Truthful Mechanisms Eliminate Federated Free Riding?

The Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS), 2024.
Bornstein, Marco, Amrit Bedi, Abdirisak Mohamed, and Furong Huang.
Publisher's website

Make-An-Agent: A Generalizable Policy Network Generator with Behavior-Prompted Diffusion

The Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS), 2024.
Liang, Yongyuan, Tingqiang Xu, Kaizhe Hu, Guangqi Jiang, Furong Huang, and Huazhe Xu.
Publisher's website

Boosting Sample Efficiency and Generalization in Multi-agent Reinforcement Learning via Equivariance

The Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS), 2024.
McClellan, Joshua, Naveed Haghani, John Winder, Furong Huang, and Pratap Tokekar.
Publisher's website

Selected Awards

MIT TR35

MIT Technology Review Innovators Under 35 Asia Pacific 2022

Visionaries

She makes AI more trustworthy by developing models that can perform tasks safely and efficiently in unseen environments without human oversight.

AI Researcher of the Year

Finalist, AI in Research – AI Researcher of the Year, 2022 Women in AI Awards North America.

Special Jury Recognition – United States, 2022 Women in AI Awards North America.

National Science Foundation Awards

National Artificial Intelligence Research Resource (NAIRR) Pilot Awardee.

NSF Computer and Information Science and Engineering (CISE) Research Initiation Initiative (CRII).

NSF Division of Information & Intelligent Systems (IIS), Directorate for CISE: "FAI: Toward Fair Decision Making and Resource Allocation with Application to AI-Assisted Graduate Admission and Degree Completion."

Industrial Faculty Research Awards

Microsoft Accelerate Foundation Models Research Award 2023.

JP Morgan Faculty Research Award 2022.

JP Morgan Faculty Research Award 2020.

JP Morgan Faculty Research Award 2019.

Adobe Faculty Research Award 2017.

Research Projects

My research focuses on robustness, efficiency, and fairness in AI/ML models, which are vital to fostering an era of Trustworthy AI that society can rely on. It fortifies models against spurious features, adversarial perturbations, and distribution shifts; enhances model, data, and learning efficiency; and ensures long-term fairness under distribution shifts.

With academic and industrial collaborators, my research has been applied to cataloguing brain cell types, learning human disease hierarchies, designing non-addictive painkillers, controlling power grids for resiliency, defending against adversarial entities in financial markets, and efficiently updating and fine-tuning industrial-scale models.

Specific Area of Research

Click Below

Contact Me

furongh at cs.umd.edu
301.405.8010
furong-huang.com

4124 The Brendan Iribe Center
Department of Computer Science
Center for Machine Learning
University of Maryland
College Park, MD 20740