Learning Structured Natural Language Representations for Semantic Parsing

Jianpeng Cheng, Siva Reddy, Vijay Saraswat, Mirella Lapata

ACL

Abstract

We introduce a neural semantic parser that converts natural language utterances to intermediate representations in the form of predicate-argument structures, which are induced with a transition system and subsequently mapped to target domains. The semantic parser is trained end-to-end using annotated logical forms or their denotations. We obtain competitive results on various datasets. The induced predicate-argument structures shed light on the types of representations useful for semantic parsing and how these are different from linguistically motivated ones.
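To make the transition-based idea concrete, the sketch below shows a generic shift-reduce style loop that assembles a predicate-argument structure from an utterance. It is a minimal illustration only: the `Node` class, the SHIFT/REDUCE transition names, and the hand-supplied action sequence are assumptions for exposition, not the transition system or neural model described in the paper (where transitions are predicted by the parser rather than given).

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    """Hypothetical predicate-argument node (illustrative, not the paper's data structure)."""
    head: str                                   # predicate or argument token
    children: List["Node"] = field(default_factory=list)


def parse(tokens: List[str], actions: List[str]) -> Node:
    """Apply a sequence of transitions to build a predicate-argument structure.

    In a trained parser the transitions would be predicted step by step;
    here they are passed in explicitly to keep the example self-contained.
    """
    stack: List[Node] = []
    buffer = list(tokens)
    for action in actions:
        if action == "SHIFT":
            # Move the next utterance token onto the stack as a new node.
            stack.append(Node(buffer.pop(0)))
        elif action == "REDUCE":
            # Attach the top node as an argument of the node below it.
            arg = stack.pop()
            stack[-1].children.append(arg)
        else:
            raise ValueError(f"unknown transition: {action}")
    assert len(stack) == 1 and not buffer, "transitions must consume the utterance"
    return stack[0]


# Toy usage: "population Texas" -> population(Texas)
tree = parse(["population", "Texas"], ["SHIFT", "SHIFT", "REDUCE"])
print(tree)  # Node(head='population', children=[Node(head='Texas', children=[])])
```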