We introduce algorithms to efficiently identify the top-ranked NLG system with few pairwise human annotations.
We propose a modified LSTM cell with a diversity-driven training objective to improve the transparency and explainability of attention models.
We propose Refine Networks, a framework that mimics the human process of writing text by first producing an initial draft and then refining it.