JOINT MANY-TASK NEURAL NETWORK MODEL FOR MULTIPLE NATURAL LANGUAGE PROCESSING (NLP) TASKS
Abstract
The technology disclosed provides a so-called "joint many-task neural network model" to solve a variety of increasingly complex natural language processing (NLP) tasks using a growing depth of layers in a single end-to-end model. The model is successively trained by considering linguistic hierarchies, directly connecting word representations to all model layers, explicitly using predictions from lower tasks, and applying a so-called "successive regularization" technique to prevent catastrophic forgetting. Three examples of lower-level model layers are a part-of-speech (POS) tagging layer, a chunking layer, and a dependency parsing layer. Two examples of higher-level model layers are a semantic relatedness layer and a textual entailment layer. The model achieves state-of-the-art results on chunking, dependency parsing, semantic relatedness, and textual entailment.
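To make the layered design more concrete, below is a minimal PyTorch sketch, not the patented implementation, of the two lowest layers described above: word embeddings are shortcut-connected into each layer, the POS layer's soft predictions are embedded and fed into the chunking layer, and a successive-regularization penalty keeps parameters close to their values from the previous training stage. All class names, dimensions, and the coefficient `delta` are illustrative assumptions; the dependency parsing, semantic relatedness, and textual entailment layers would stack on top in the same fashion.

```python
import torch
import torch.nn as nn

class JointManyTaskSketch(nn.Module):
    """Illustrative two-layer slice (POS tagging -> chunking) of the
    joint many-task architecture described in the abstract."""
    def __init__(self, vocab_size=10000, emb_dim=100, hidden=100,
                 n_pos=45, n_chunk=23):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)

        # Lower task: POS tagging over word embeddings.
        self.pos_lstm = nn.LSTM(emb_dim, hidden, bidirectional=True,
                                batch_first=True)
        self.pos_out = nn.Linear(2 * hidden, n_pos)
        # Embed the soft POS predictions so the higher layer can use them
        # explicitly ("explicitly using predictions from lower tasks").
        self.pos_label_emb = nn.Linear(n_pos, emb_dim)

        # Higher task: chunking receives the word embeddings directly
        # (shortcut connection) plus the POS hidden states and label embeddings.
        chunk_in = emb_dim + 2 * hidden + emb_dim
        self.chunk_lstm = nn.LSTM(chunk_in, hidden, bidirectional=True,
                                  batch_first=True)
        self.chunk_out = nn.Linear(2 * hidden, n_chunk)

    def forward(self, tokens):
        x = self.embed(tokens)                      # (batch, seq, emb_dim)

        h_pos, _ = self.pos_lstm(x)
        pos_logits = self.pos_out(h_pos)
        pos_label = self.pos_label_emb(torch.softmax(pos_logits, dim=-1))

        h_chunk, _ = self.chunk_lstm(torch.cat([x, h_pos, pos_label], dim=-1))
        chunk_logits = self.chunk_out(h_chunk)
        return pos_logits, chunk_logits


def successive_regularization(model, prev_params, delta=1e-2):
    """L2 penalty keeping current parameters close to their values after the
    previous task's training stage, to mitigate catastrophic forgetting.
    `delta` is an assumed hyperparameter."""
    penalty = 0.0
    for name, p in model.named_parameters():
        if name in prev_params:
            penalty = penalty + ((p - prev_params[name]) ** 2).sum()
    return delta * penalty
```

In use, one would train the POS objective first, snapshot the parameters into `prev_params`, then train the chunking objective with `successive_regularization(...)` added to its loss, and repeat the same pattern for each higher layer.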