Getting to Production with Few-shot Natural Language Generation Models

Abstract
In this paper, we study the use of pretrained language models to enable few-shot Natural Language Generation (NLG) in task-oriented dialog systems. We introduce a system consisting of iterative self-training and an extensible mini-template framework that textualizes structured input data into semi-natural text, taking full advantage of pretrained language models. We