KI - Künstliche Intelligenz

Symbol Grounding as Social, Situated Construction of Meaning in Human-Robot Interaction



Abstract

The paper views the issue of “symbol grounding” from the viewpoint of the construction of meaning between humans and robots, in the context of a collaborative activity. This concerns a core aspect of the formation of common ground: the construction of meaning between actors as a conceptual representation which is believed to be mutually understood as referring to a particular aspect of reality. The problem in this construction is that experience is inherently subjective—and more specifically, humans and robots experience and understand reality fundamentally differently. There is an inherent asymmetry between the actors involved. The paper focuses on how this asymmetry can be reflected logically, and particularly in the underlying model theory. The point is to make it possible for a robot to reason explicitly about such asymmetry in understanding, to consider possibilities for alignment to deal with it, and to establish (from its viewpoint) a level of intersubjective or mutual understanding. Key to the approach taken in the paper is to consider conceptual representations to be formulas over propositions which are based in proofs, as reasoned explanations of experience. This shifts the focus from a notion of “truth” to a notion of judgment—judgments which can be subjectively right and still intersubjectively wrong (faultless disagreement), and which can evolve over time (updates, revision). The result is an approach which accommodates both asymmetric agency and social sentience, modelling symbol grounding in human-robot interaction as social, situated construction over time.
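The judgment-based view summarised above can be illustrated with a minimal toy model. This is only an illustrative sketch, not the paper's formalism: the class names, the string-valued "grounds", and the example propositions are all assumptions introduced here for clarity.

```python
from dataclasses import dataclass, field

@dataclass
class Judgment:
    proposition: str
    grounds: str  # the agent's reasoned explanation of its own experience


@dataclass
class Agent:
    name: str
    judgments: dict = field(default_factory=dict)  # proposition -> Judgment

    def judge(self, proposition: str, grounds: str) -> None:
        # A judgment is "subjectively right": warranted by the agent's own grounds.
        self.judgments[proposition] = Judgment(proposition, grounds)

    def agrees_with(self, other: "Agent", proposition: str) -> bool:
        # Intersubjective agreement (from this agent's viewpoint): both agents
        # hold a judgment on the proposition, whatever their separate grounds.
        return proposition in self.judgments and proposition in other.judgments

    def align(self, other: "Agent", proposition: str) -> None:
        # Minimal alignment/revision: adopt the other's judgment, recording
        # that its grounds were accepted from the other agent (update over time).
        if proposition in other.judgments:
            src = other.judgments[proposition]
            self.judge(proposition, f"accepted from {other.name}: {src.grounds}")


# Faultless disagreement: each agent's judgment is subjectively right,
# yet there is no shared judgment about the cup's position.
human = Agent("human")
robot = Agent("robot")
human.judge("cup-left-of-plate", "seen from the human's viewpoint")
robot.judge("cup-right-of-plate", "seen from the robot's opposite viewpoint")
print(human.agrees_with(robot, "cup-left-of-plate"))  # False

# Alignment resolves the asymmetry by an explicit update, not by appeal to "truth".
robot.align(human, "cup-left-of-plate")
print(human.agrees_with(robot, "cup-left-of-plate"))  # True
```

The point of the sketch is the shift the abstract describes: agreement is not checked against a shared world model, but constructed over time through judgments and their revision.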


