Methods for Explainability of Deep-Learning Models
Abstract
Embodiments are disclosed for health assessment and diagnosis implemented in an artificial intelligence (AI) system. In an embodiment, a method comprises:
1. feeding a first set of input features to the AI model;
2. obtaining a first set of raw output predictions from the model;
3. determining a first set of impact scores for the input features fed into the model;
4. training a neural network with the first set of impact scores as input to the network and pre-determined sentences describing the model's behavior as output;
5. feeding a second set of input features to the AI model;
6. obtaining a second set of raw output predictions from the model;
7. determining a second set of impact scores based on the second set of output predictions;
8. feeding the second set of impact scores to the neural network; and
9. generating a sentence describing the AI model's behavior on the second set of input features.
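The pipeline above can be sketched in miniature. The code below is a hypothetical illustration only, not the patent's actual method: the "AI model" is a stand-in linear scorer, the impact scores are computed by a simple leave-one-out perturbation (the abstract does not specify how scores are derived), and the "neural network" is a minimal softmax classifier that maps impact scores to pre-determined sentences. All function and variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Stand-in "AI model": a fixed linear scorer producing one raw prediction.
    w = np.array([2.0, -1.0, 0.5, 0.0])
    return float(x @ w)

def impact_scores(x):
    # Assumed scoring rule (leave-one-out perturbation): the impact of
    # feature i is the change in the raw output when feature i is zeroed.
    base = model(x)
    return np.array([base - model(np.where(np.arange(len(x)) == i, 0.0, x))
                     for i in range(len(x))])

# Pre-determined sentences describing the model's behavior.
SENTENCES = [
    "Feature 0 pushes the prediction up the most.",
    "Feature 1 pushes the prediction up the most.",
]

# Steps 1-3: first set of inputs -> raw predictions -> impact scores.
X1 = rng.normal(size=(200, 4))
S1 = np.array([impact_scores(x) for x in X1])
# Hypothetical labels pairing each score vector with the sentence that
# applies to it (here: whichever of the first two features has the
# larger positive impact).
y = np.argmax(S1[:, :2], axis=1)

# Step 4: train a tiny softmax "network" on (impact scores -> sentence).
W = np.zeros((4, 2))
for _ in range(500):
    logits = S1 @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1.0          # gradient of cross-entropy
    W -= 0.1 * S1.T @ p / len(y)

def describe(x):
    # Steps 5-9: score a new input, feed its impact scores to the trained
    # network, and emit the sentence describing the model's behavior.
    s = impact_scores(x)
    return SENTENCES[int(np.argmax(s @ W))]
```

Under these assumptions, `describe(np.array([5.0, 0.0, 0.0, 0.0]))` selects the first sentence, since zeroing feature 0 changes the raw output the most.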