David Dohan, Winnie Xu, Aitor Lewkowycz, Jacob Austin, David Bieber, Raphael Gontijo Lopes, Yuhuai Wu, Henryk Michalewski, Rif A. Saurous, Jascha Sohl-dickstein, Kevin Murphy, Charles Sutton

Modern language models perform in-context learning and show impressive zero-shot generalization to unseen tasks. We represent compositions of prompted language models as probabilistic programs, termed language model cascades. This general framework for sampling and inference formalizes prompting, reasoning, and tool use as graphical models over random variables with complex string values.
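As a rough illustration (not the paper's actual code), a chain-of-thought cascade can be sketched as an ordinary Python program in which each intermediate string is a random variable; `lm_sample` below is a hypothetical placeholder for a call to a prompted language model, not an API from the released code.

```python
def lm_sample(prompt: str) -> str:
    """Hypothetical stand-in for sampling a string from a prompted language model.
    In practice this would call an LM; here it returns a fixed dummy string."""
    return "<model output>"

def chain_of_thought_cascade(question: str) -> dict:
    """A two-variable cascade: sample a reasoning string T given the question Q,
    then an answer A given Q and T. Each intermediate string is a random variable,
    so the program induces a graphical model with joint p(T | Q) * p(A | Q, T)."""
    thought = lm_sample(f"Q: {question}\nLet's think step by step.\nT:")
    answer = lm_sample(f"Q: {question}\nT: {thought}\nA:")
    return {"question": question, "thought": thought, "answer": answer}

# Example trace through the cascade.
print(chain_of_thought_cascade("What is 17 * 24?"))
```

Inference questions (e.g., conditioning the answer on being correct, or marginalizing over the sampled reasoning string) can then be posed over this program as over any probabilistic program.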

Beyond Bayes: Paths Towards Universal Reasoning Systems @ ICML, 2022 [Spotlight].

Web | Paper | Code