Year of 2024
- Our WAVES benchmark on stress-testing image watermarks is out. Project page: https://wavesbench.github.io/. Also available there: the Twitter (X) thread, arXiv paper, GitHub code, Hugging Face data, leaderboard, and visualization.
- 10 papers accepted to the main conference of ICLR 2024, 2 as spotlights and 8 as posters. For more details, see the Twitter (X) thread.
- Chair and organizer of the NSF-Amazon Fairness in AI Principal Investigators Meeting, Jan. 9-10, 2024.
Year of 2023
- 4 papers accepted to the main conference of NeurIPS 2023 and 11 papers accepted to NeurIPS 2023 workshops (1 as an oral, 2 as spotlights, and 8 as posters), Sep.-Dec. 2023.
- Department Colloquium at the University of Maryland, College Park, “Trustworthy Machine Learning in an Ever-Changing World”, Sep., 2023. Talk here.
- Panelist at Interactive Learning with Implicit Human Feedback workshop, ICML, Jul., 2023.
- Keynote speaker at ROADS to Mega-AI Models Workshop, “Efficient Machine Learning at the Edge”, MLSys, Jun., 2023.
- Invited talk at the 3rd Workshop of Adversarial Machine Learning on Computer Vision: Art of Robustness, “Robust Reinforcement Learning in an Ever-Changing World”, CVPR, Jun., 2023.
- 3 papers accepted to the main conference and 7 accepted to the workshops at ICML 2023.
- 5 papers accepted to the main conference of ICLR 2023 (1 as a spotlight oral presentation); see this thread of Twitter (X) threads for an introduction to these works. In addition, 2 papers accepted to ICLR 2023 workshops.
- Invited talk on “Adaptable Reinforcement Learning in an Ever-Changing World” at the Reincarnating Reinforcement Learning workshop at ICLR 2023. See a recording of the talk here.
- Panelist at the Reincarnating RL workshop, ICLR, May 2023.
- Co-organizer of the NSF-IEEE workshop “Toward Explainable, Reliable, and Sustainable Machine Learning in Signal & Data Science”, “Trustworthy Machine Learning in Complex Environments”, Mar. 2023.
- Invited talk at the 57th Annual Conference on Information Science and Systems, CISS, “Efficient Machine Learning at the Edge in Parallel”, Mar., 2023.
- Invited talk at the 2023 Information Theory and Applications Workshop, ITA, “Trustworthy Machine Learning in Complex Environments”, Feb., 2023.
- Invited talk at UTSA Matrix AI Seminar, “Trustworthy Machine Learning in Complex Environments”, Jan., 2023.
Year of 2022
- Our work “Controllable Attack and Improved Adversarial Training in Multi-Agent Reinforcement Learning” received the Outstanding Paper Award at the Trustworthy and Socially Responsible Machine Learning (TSRML) workshop at NeurIPS 2022, Dec. 9, 2022. Congratulations to my students Xiangyu Liu and Souradip Chakraborty. Project page: link.
- 6 papers accepted to the main conference and 9 papers to workshops at NeurIPS 2022. Please see the post here for details.
- Paper accepted to AAAI-23.
- Paper accepted to Third Workshop on Seeking Low‑Dimensionality in Deep Neural Networks (SLowDNN) 2023.
- Paper selected as spotlight at the Deep Reinforcement Learning workshop at NeurIPS 2022.
- Paper accepted to NeurIPS 2022 ML Safety workshop.
- 2 papers accepted to the workshop on Trustworthy and Socially Responsible Machine Learning (TSRML) at NeurIPS 2022.
- Paper accepted to NeurIPS 2022 Workshop on Score-Based Methods.
- Paper accepted to Foundation Models for Decision Making Workshop at NeurIPS 2022.
- Paper selected as oral presentation (top 9%) at FL-NeurIPS 2022.
- 2 papers accepted to NeurIPS 2022 GLFrontiers Workshop.
- 5 papers accepted at ICML 2022: one to the main conference (paper 1), one as a spotlight presentation (one of only 3) at the Decision Awareness in Reinforcement Learning (DARL) workshop (paper 2), two to the Responsible Decision Making in Dynamic Environments Workshop (paper 3, paper 4), and one to the ICML workshop for Continuous Time Methods for Machine Learning (paper 5).
- 4 papers accepted to ICLR 2022: 3 as posters (paper 1, paper 2, and paper 3) and 1 as a spotlight (paper 4).
Year of 2021
- Our work “Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL” received the Best Paper Award at the SafeRL workshop at NeurIPS 2021, Dec. 13, 2021. Congratulations to my students Yanchao Sun, Ruijie Zheng and Yongyuan Liang. Project page: link.
- Furong gave a talk “Learning Decision Making Systems under (Adversarial) Distribution Shifts” at the UMD CS department seminar, Oct 22 2021. A recording of the talk is available here.
- One paper accepted as a spotlight to the Distribution Shifts: Connecting Methods and Applications (DistShift) workshop at NeurIPS 2021.
- 2 papers accepted as orals to the Safe and Robust Control of Uncertain Systems (SafeRL) workshop at NeurIPS 2021.
- 2 papers accepted to the Deep Reinforcement Learning workshop at NeurIPS 2021.
- 3 papers accepted to NeurIPS 2021.