
Three invited talks by Sam Bowman

This term, Sam Bowman gave invited talks at SUNY Stony Brook (Linguistics Colloquium), the University of Maryland (CLIP Colloquium), and Columbia University (NLP Speaker Series), all under the title "Sentence Understanding with Neural Networks and Natural Language Inference." An abstract follows, and Sam would be happy to share slides privately.

Artificial neural network models for language understanding problems represent an increasingly large and increasingly successful thread of research within natural language processing. When developing these models in typical settings, though, it can be difficult to identify the degree to which they capture the meanings of natural language sentences, and correspondingly difficult to identify research directions that are likely to yield progress on the underlying language understanding problem.

In this talk, I introduce natural language inference, the task of judging whether one sentence is true or false given that some other sentence is true, and argue that this task is distinctly effective as a means of developing and evaluating sentence understanding models in NLP. In three sections, I’ll first introduce the task and the Stanford NLI corpus (SNLI, EMNLP ‘15), then present the Stack-Augmented Parser-Interpreter Neural Network (SPINN, ACL ‘16), a model developed on that corpus, and finally introduce a new corpus-building effort and shared task competition called MultiNLI.
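
For readers unfamiliar with the task, the short Python sketch below (not drawn from the talk; the sentence pairs are hypothetical) illustrates the SNLI-style data format: each example pairs a premise with a hypothesis and labels the pair as entailment, contradiction, or neutral.

# A minimal illustration of the three-way NLI label set used in SNLI,
# shown on made-up premise/hypothesis pairs.

EXAMPLES = [
    # (premise, hypothesis, label)
    ("A man is playing a guitar on stage.",
     "A person is performing music.",
     "entailment"),      # the hypothesis must be true given the premise
    ("A man is playing a guitar on stage.",
     "The man is asleep in bed.",
     "contradiction"),   # the hypothesis cannot be true given the premise
    ("A man is playing a guitar on stage.",
     "The man is a professional musician.",
     "neutral"),         # the hypothesis may or may not be true
]

for premise, hypothesis, label in EXAMPLES:
    print(f"P: {premise}\nH: {hypothesis}\n=> {label}\n")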