zora che

I'm a researcher, technologist, and artist.

I am a PhD student at the University of Maryland and a Cooperative AI PhD Fellow. My research on large model safety, technical governance, and fairness asks how best to align technology.

My art has been exhibited at Chashama Open Studios and Gray Area, and my collaborative mural was on view at MoMA PS1. My writing has appeared in Kernel Magazine.

I graduated from Boston University with a B.A. in Mathematics and a B.A. in Computer Science, summa cum laude. I wrote my honors thesis on multi-modal misinformation detection, worked on anomaly detection for the CMS experiment at CERN, and researched algorithmic fairness as an NSF-REU scholar.


✧ Research ✧

Can Watermarking Large Language Models Prevent Copyrighted Text Generation and Hide Training Data?
Panaitescu-Liess, M-A., Che, Z., An, B., Xu, Y., Pathmanathan, P., Chakraborty, S., Zhu, S., Goldstein, T., Huang, F.
AAAI 2025, Best Paper Award at the AdvML-Frontiers Workshop at NeurIPS 2024

Model Tampering Attacks Enable More Rigorous Evaluations of LLM Capabilities
Che, Z.,* Casper, S.,* Kirk, R., Satheesh, A., Slocum, S., McKinney, L. E., … & Hadfield-Menell, D.
Safe Generative AI Workshop at NeurIPS 2024

EnsemW2S: Can an Ensemble of SoTA LLMs be Leveraged to Obtain a Stronger LLM?
Agrawal, A., Ding, M., Che, Z., Deng, C., Satheesh, A., Langford, J., & Huang, F.
Safe Generative AI Workshop at NeurIPS 2024

PoisonedParrot: Subtle Data Poisoning Attacks to Elicit Copyright-Infringing Content from Large Language Models
Panaitescu-Liess, M-A., Pathmanathan, P., Kaya, Y., Che, Z., An, B., Zhu, S., Agrawal, A., & Huang, F.
Safe Generative AI Workshop at NeurIPS 2024

SAIL: Self-improving Efficient Online Alignment of Large Language Models
Ding, M., Chakraborty, S., Agrawal, V., Che, Z., Koppel, A., Wang, M., Bedi, A., & Huang, F.
Workshop on Theoretical Foundations of Foundation Models at ICML 2024 [paper]

Transferring Fairness under Distribution Shifts via Fair Consistency Regularization
An, B., Che, Z., Ding, M., & Huang, F.
NeurIPS 2022, Socially Responsible Machine Learning Workshop at ICLR 2022


✧ Presentation ✧
"AutoDQM and RPC Monitoring" for the Development for Monitoring the Muon System and ML Applications Segment, CMS Week at CERN, 2020


✧ Art ✧
I create with paint, words, code, and hardware. My portfolio. Selected credits:
  ✧ vending for love and memory.giving at Fidget Camp Showcase, San Francisco, CA
  ✧ Collaborative Mural for Rirkrit Tiravanija: A LOT OF PEOPLE at MoMA PS1, New York, NY
  ✧ packing for an unknown future at Gray Area Artist Showcase, Gray Area, San Francisco, CA
  ✧ Entropy at Scholastic Art & Writing Awards, The Metropolitan Museum of Art, New York, NY
  ✧ Oxytocin at Chashama Open Studios, Brooklyn Army Terminal, New York, NY
  ✧ Oracle in the Machine in Kernel Magazine Issue 4: LUCK


✧ Service and Non-profit Organizing ✧
I was previously the Summit Director and Coalition Manager for Impact Labs, a nonprofit helping students gain greater access to opportunities at the intersection of technology and social good.
I led the planning of Impact Summit 2023 in NYC; you can watch the recordings here. As the Coalition Manager, I worked with non-profits and mission-driven organizations in 2023 to match them with technical talent. This was a pilot project supported by Schmidt Futures.


My practice has been made possible by the Goldwater Scholarship, the New York Times College Scholarship, and the Cooperative AI Foundation.