About
Hi! I’m Irena, a second-year CS Ph.D. student at Stanford University, advised by Carlos Guestrin. My research interests center on trustworthy machine learning, especially in the context of large models.
Previously, I did my undergraduate degree at Stanford, where I studied Computer Science (B.S. with honors), Statistics (M.S.), and a bit of art history. Here’s a randomly sampled favorite artwork.
Very kind researchers have had an outsized impact on my career. If I can be helpful in your journey (e.g. by suggesting classes or chatting about grad school), feel free to email me at my first name [at] cs.stanford.edu.
Selected Publications
- Model Equality Testing: Which model is this API serving?
  Irena Gao, Percy Liang, and Carlos Guestrin. 2024.
- OpenFlamingo: An open-source framework for training large autoregressive vision-language models
  Anas Awadalla*, Irena Gao*, Joshua Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, and Ludwig Schmidt. 2023.
- Adaptive Testing of Computer Vision Models
  Irena Gao, Gabriel Ilharco, Scott Lundberg, and Marco Tulio Ribeiro. In International Conference on Computer Vision, 2023. (Oral Presentation)
- Out-of-Domain Robustness via Targeted Augmentations
  Irena Gao*, Shiori Sagawa*, Pang Wei Koh, Tatsunori Hashimoto, and Percy Liang. In International Conference on Machine Learning, 2023.
- Extending the WILDS benchmark for unsupervised adaptation
  Shiori Sagawa*, Pang Wei Koh*, Tony Lee*, Irena Gao*, Sang Michael Xie, Kendrick Shen, Ananya Kumar, Weihua Hu, Michihiro Yasunaga, Henrik Marklund, Sara Beery, Etienne David, Ian Stavness, Wei Guo, Jure Leskovec, Kate Saenko, Tatsunori Hashimoto, Sergey Levine, Chelsea Finn, and Percy Liang. In International Conference on Learning Representations, 2022. (Oral Presentation)
* denotes equal contribution.