Measuring Systematic Generalization in Neural Proof Generation with Transformers
November 27, 2020
By: Nicolas Gontier, Koustuv Sinha, Siva Reddy, Christopher Pal

Abstract

We are interested in understanding how well Transformer language models (TLMs) can perform reasoning tasks when trained on knowledge encoded in the form of natural language. We investigate systematic generalization abilities on an inductive logical reasoning task in natural language, which involves reasoning over relationships between entities grounded in first-order logical proofs. Specifically, we perform soft theorem-proving by leveraging TLMs to generate logical proofs represented in natural […]