Loss as the Inconsistency of a Probabilistic Dependency Graph: Choose Your Model, Not Your Loss Function (via Zoom)

Abstract: In a world blessed with a great diversity of loss functions, I argue that the choice between them is not a matter of taste or pragmatics, but of model.
Probabilistic dependency graphs (PDGs) are a very expressive class of probabilistic graphical models that comes equipped with a natural measure of inconsistency.
The central finding of this work is that many standard loss functions can be viewed as measuring the inconsistency of the PDG that describes the scenario at hand.
As a byproduct of this approach, we obtain an intuitive visual calculus for deriving inequalities between loss functions. In addition to variants of cross entropy, a large class of statistical divergences can be expressed as inconsistencies, from which we can derive visual proofs of properties such as the information processing inequality. We can also use the approach to justify a well-known connection between regularizers and priors. In variational inference, we find that the ELBO -- a somewhat opaque objective for latent variable models -- and its variants arise for free from uncontroversial modeling assumptions, as do simple graphical proofs of their corresponding bounds. Based on my AISTATS 2022 paper.
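For context, the ELBO mentioned above is the standard variational lower bound on the log-likelihood of a latent variable model $p_\theta$ with approximate posterior $q_\phi$ (written here in the usual variational-inference notation, not in the paper's PDG formulation):

\[
\log p_\theta(x) \;\ge\; \mathrm{ELBO}(\theta, \phi; x) \;=\; \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] \;-\; D_{\mathrm{KL}}\big(q_\phi(z \mid x) \,\|\, p_\theta(z)\big),
\]

with a gap of exactly $D_{\mathrm{KL}}\big(q_\phi(z \mid x) \,\|\, p_\theta(z \mid x)\big)$; the talk shows how this bound and its variants fall out of the inconsistency of the corresponding PDG.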

Bio: Oliver is a PhD student in Computer Science at Cornell, advised by Joseph Halpern. His research focuses on modeling computations with internally conflicted agents, but his interests range broadly from machine learning to applied category theory. Previously, Oliver did a master's in computer science at the University of Cambridge, and undergraduate degrees in mathematics, biology, and computer science at the University of Utah.