In this paper, we propose a new task for assessing the quality of natural language arguments. The premises of a well-reasoned argument should provide enough evidence for accepting or rejecting its claim. Although this criterion, known as sufficiency, is widely adopted in argumentation theory, there are no empirical studies on its applicability to real arguments. In this work, we show that human annotators substantially agree on the sufficiency criterion and introduce a novel annotated corpus. Furthermore, we experiment with feature-rich SVMs and convolutional neural networks and achieve 84% accuracy for automatically identifying insufficiently supported arguments. The final corpus as well as the annotation guidelines are freely available to encourage future research on argument quality.