Fahim Tajwar
I am an incoming PhD student in the Machine Learning Department at
Carnegie Mellon University.
Previously, I obtained my BS in Mathematics (with distinction) in 2022 and my MS in Computer Science in 2023 from
Stanford University.
There, I was grateful to have my research supervised by Prof. Chelsea Finn.
I am also fortunate to have worked with
Prof. Percy Liang,
Prof. Stefano Ermon,
and Prof. Stephen Luby during my time at Stanford.
Feel free to reach out if you have any questions or want to chat about my work!
Email / CV / Google Scholar / Github / LinkedIn
Conference/Journal Publications
Surgical Fine-Tuning Improves Adaptation to Distribution Shifts
Yoonho Lee*,
Annie S Chen*,
Fahim Tajwar,
Ananya Kumar,
Huaxiu Yao,
Percy Liang, and
Chelsea Finn
International Conference on Learning Representations (ICLR), 2023
[Paper],
[Code]
When to Ask for Help: Proactive Interventions in Autonomous Reinforcement Learning
Annie Xie*,
Fahim Tajwar*,
Archit Sharma*, and
Chelsea Finn
Neural Information Processing Systems (NeurIPS), 2022
[Paper],
[Code],
[Project Website]
Do Deep Networks Transfer Invariances Across Classes?
Allan Zhou*,
Fahim Tajwar*,
Alexander Robey,
Tom Knowles,
George J Pappas,
Hamed Hassani, and
Chelsea Finn
International Conference on Learning Representations (ICLR), 2022
[Paper],
[Code]
Scalable deep learning to identify brick kilns and aid regulatory capacity
Jihyeon Lee*,
Nina R. Brooks*,
Fahim Tajwar,
Marshall Burke,
Stefano Ermon,
David B. Lobell,
Debashish Biswas, and
Stephen Luby
Proceedings of the National Academy of Sciences (PNAS), 2021
[Paper],
[Code]
Preprints/Workshop Publications
Conservative Prediction via Data-Driven Confidence Minimization
Caroline Choi*,
Fahim Tajwar*,
Yoonho Lee*,
Huaxiu Yao,
Ananya Kumar, and
Chelsea Finn
ICLR workshops: TrustML, ME-FoMo, 2023
[Paper],
[Code]
No True State-of-the-Art? OOD Detection Methods are Inconsistent across Datasets
Fahim Tajwar,
Ananya Kumar*,
Sang Michael Xie*, and
Percy Liang
ICML Workshop on Uncertainty & Robustness in Deep Learning (UDL), 2021
[Paper],
[Code]
Website template