Hi!
I am a Senior ML Scientist at Prescient Design within Genentech. What interests me most about machine learning models is not their strong predictive power but rather the why. I often find myself asking questions like, "Why did the model make this prediction?" or "What is the model learning?" So, I do interpretability research :) I try to make neural networks more interpretable, because I think understanding machine learning models better will inevitably lead us to build stronger, more powerful models. I have published over a dozen papers at top ML conferences, including NeurIPS and ICLR. I received my PhD from the University of Maryland, where I was advised by Hector Corrada Bravo and Soheil Feizi.
If you are interested in my work, want to collaborate, or just want to talk about ML and interpretability, please feel free to contact me.