
I'm a second-year CS PhD student at the University of Maryland, College Park, advised by Professor Furong Huang. I research large model safety, algorithmic fairness, and mechanism design.

I studied math and computer science at Boston University, where I got to work with Professor Bryan Plummer on misinformation detection and Professor Indara Suarez on anomaly detection for the CMS detector at CERN.

I organized for Impact Labs, a nonprofit that helps students access opportunities at the intersection of technology and social good.


[email]    [twitter]

Research

Model Manipulation Attacks Enable More Rigorous Evaluations of LLM Unlearning
Zora Che*, Stephen Casper*, Anirudh Satheesh, Rohit Gandikota, Domenic Rosati, Stewart Slocum, Lev McKinney, Zichu Wu, Zikui Cai, Bilal Chughtai, Furong Huang, Dylan Hadfield-Menell.
Safe Generative AI Workshop at NeurIPS 2024

EnsemW2S: Can an Ensemble of SoTA LLMs be Leveraged to Obtain a Stronger LLM?
Aakriti Agrawal, Mucong Ding, Zora Che, Chenghao Deng, Anirudh Satheesh, John Langford, Furong Huang.
Safe Generative AI Workshop at NeurIPS 2024

PoisonedParrot: Subtle Data Poisoning Attacks to Elicit Copyright-Infringing Content from Large Language Models
Michael-Andrei Panaitescu-Liess, Pankayaraj Pathmanathan, Yigitcan Kaya, Zora Che, Bang An, Sicheng Zhu, Aakriti Agrawal, Furong Huang.
Safe Generative AI Workshop at NeurIPS 2024

Can Watermarking Large Language Models Prevent Copyrighted Text Generation and Hide Training Data?
Michael-Andrei Panaitescu-Liess, Zora Che, Bang An, Yuancheng Xu, Pankayaraj Pathmanathan, Souradip Chakraborty, Sicheng Zhu, Tom Goldstein, Furong Huang.
AAAI 2025
AdvML-Frontiers Workshop at NeurIPS 2024 (Best Paper Award)
Workshop on the Next Generation of AI Safety at ICML 2024

SAIL: Self-improving Efficient Online Alignment of Large Language Models
Mucong Ding, Souradip Chakraborty, Vibhu Agrawal, Zora Che, Alec Koppel, Mengdi Wang, Amrit Bedi, Furong Huang.
Workshop on Theoretical Foundations of Foundation Models at ICML 2024 [paper]

Transferring Fairness under Distribution Shifts via Fair Consistency Regularization
Bang An, Zora Che, Mucong Ding, Furong Huang.
NeurIPS 2022
Socially Responsible Machine Learning Workshop at ICLR 2022 [paper]

Presentation

"AutoDQM and RPC Monitoring" for the Development for Monitoring the Muon System and ML Applications Segment, CMS Week at CERN, 2020

Other

I am a multimedia artist. Lately I've been interested in installations and collaborative pieces. [portfolio]

I'm grateful to have been supported by the Goldwater Scholarship and the New York Times College Scholarship.

{this site was last watered on oct 17 2024}