Source Themes

Active Evaluation: Efficient NLG Evaluation with Few Pairwise Comparisons

We introduce algorithms to efficiently identify the top-ranked NLG system with few pairwise human annotations.
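The paper's actual algorithms are more sample-efficient than this, but as a rough illustration of the setting, here is a toy sequential-elimination sketch of finding a top system from pairwise human preferences. The function name, quality scores, and the Bradley-Terry-style simulated annotator are all invented for the example.

```python
import random

def find_top_system(systems, duel, rounds_per_pair=50):
    """Keep a running champion and pit it against each remaining
    system; whichever side wins the majority of duels advances.

    `duel(a, b)` should return True when an annotator prefers
    system a's output over system b's for a sampled test input.
    """
    champion = systems[0]
    for challenger in systems[1:]:
        wins = sum(duel(champion, challenger) for _ in range(rounds_per_pair))
        if wins < rounds_per_pair / 2:
            champion = challenger
    return champion

# Demo with a simulated noisy annotator (Bradley-Terry preferences);
# the quality scores below are made up for illustration.
quality = {"sys_a": 1.0, "sys_b": 2.0, "sys_c": 6.0}
random.seed(0)
noisy_duel = lambda a, b: random.random() < quality[a] / (quality[a] + quality[b])
best = find_top_system(list(quality), noisy_duel)
```

The point of the actual work is to spend far fewer annotations than a full round-robin over all system pairs; this sketch only conveys the input/output shape of the problem.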

Towards Transparent and Explainable Attention Models

We propose a modified LSTM cell with a diversity-driven training objective to improve the transparency and explainability of attention models.

Let’s Ask Again: Refine Network for Automatic Question Generation

We propose Refine Networks, a framework that mimics the human process of writing: first producing an initial draft and then refining it.