Hey, I'm Winnie

Let's reflect on the latest scientific research, challenge our assumptions when designing implementations, and use technology to empower expansive reform.

My research lies at the intersection of generative models, information retrieval, and natural language understanding. My goal is to build interpretable and scalable generative systems through the incorporation of useful inductive biases, compression, and powerful learning / inference schemes. Towards this, I explore ways to inject classical algorithms with differentiable components, build efficient methods exploiting intermediate representations, and improve reasoning capabilities of large language models via modularity.

I'm fortunate to have learned from many influential mentors at the early stages of my career. Most recently, I was a Student Researcher at Facebook AI Research (Meta AI), and collaborated closely with Stefano Ermon and friends in the Stanford StatsML group. During my undergrad, I collaborated on various projects at Google Brain with Igor Mordatch (Brain Robotics), David Dohan (Generative Models), and Durk Kingma.

I studied Artificial Intelligence, Computer Science, Statistics, and Math at the University of Toronto. My AI research journey began with scaling methods in latent variable modeling and probabilistic inference with David Duvenaud (Vector Institute). I hope to one day have a similar influence with my ideas at the cutting edge.



Winnie Xu 2022.