IEEE Symposium on Security and Privacy

Dangerous Skills: Understanding and Mitigating Security Risks of Voice-Controlled Third-Party Functions on Virtual Personal Assistant Systems

Abstract

Virtual personal assistants (VPA) (e.g., Amazon Alexa and Google Assistant) today mostly rely on the voice channel to communicate with their users, which, however, is known to be vulnerable, lacking proper authentication (from the user to the VPA). A new authentication challenge, from the VPA service to the user, has emerged with the rapid growth of the VPA ecosystem, which allows a third party to publish a function (called a skill) for the service and can therefore be exploited to spread malicious skills to a large audience during their interactions with smart speakers like Amazon Echo and Google Home. In this paper, we report a study that concludes that such remote, large-scale attacks are indeed realistic. We discovered two new attacks: voice squatting, in which the adversary exploits the way a skill is invoked (e.g., "open capital one"), using a malicious skill with a similarly pronounced name (e.g., "capital won") or a paraphrased name (e.g., "capital one please") to hijack the voice command meant for a legitimate skill (e.g., "capital one"); and voice masquerading, in which a malicious skill impersonates the VPA service or a legitimate skill during the user's conversation with the service to steal her personal information. These attacks exploit the way VPAs work and users' misconceptions about their functionality, and our experiments (including user studies and real-world deployments) on Amazon Echo and Google Home show that they pose a realistic threat. The significance of our findings has already been acknowledged by Amazon and Google, and is further evidenced by the risky skills that the new squatting detector we built found on the Alexa and Google markets. We further developed a technique that automatically captures an ongoing masquerading attack and demonstrated its efficacy.
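
To make the voice-squatting risk concrete, below is a minimal, hypothetical sketch of the kind of check a squatting detector might perform on candidate invocation names. It is not the detector described in the paper: it uses plain string similarity as a crude stand-in for pronunciation similarity, and the filler-word list, threshold, and skill names are illustrative assumptions.

```python
# Illustrative sketch of a voice-squatting check -- NOT the paper's detector.
# String similarity stands in for pronunciation similarity; filler words and
# the 0.85 threshold are assumptions for demonstration only.
from difflib import SequenceMatcher

FILLER_WORDS = {"please", "app", "my"}  # padding an attacker might add to paraphrase a name


def normalize(name: str) -> str:
    """Lowercase an invocation name and strip common filler words."""
    words = [w for w in name.lower().split() if w not in FILLER_WORDS]
    return " ".join(words)


def is_squatting_candidate(new_name: str, existing: str, threshold: float = 0.85) -> bool:
    """Flag new_name if it is likely to be confused with an existing skill name."""
    a, b = normalize(new_name), normalize(existing)
    if a == b:  # paraphrase collision, e.g. "capital one please" -> "capital one"
        return True
    # Near-homophone spellings (e.g. "capital won") score high on string similarity.
    return SequenceMatcher(None, a, b).ratio() >= threshold


legit = "capital one"
for candidate in ["capital won", "capital one please", "weather bird"]:
    print(candidate, "->", is_squatting_candidate(candidate, legit))
```

A real detector would compare phonetic transcriptions (i.e., how the speech recognizer hears the names) rather than spellings, but the paraphrase and near-homophone cases above mirror the two squatting variants the paper describes.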