Conference on counterterrorism, crime fighting, forensics, and surveillance technologies

Flexible image analysis for law enforcement agencies with deep neural networks to determine: where, who and what



Abstract

Due to the increasing need for effective security measures and the integration of cameras into commercial products, vast amounts of visual data are created today. Law enforcement agencies (LEAs) inspect images and videos to find radicalization, propaganda for terrorist organizations, and illegal products on darknet markets. This is time-consuming. Instead of an undirected search, LEAs would like to adapt to new crimes and threats and focus only on data from specific locations, persons, or objects, which requires flexible interpretation of image content. Visual concept detection with deep convolutional neural networks (CNNs) is a crucial component for understanding image content. This paper makes five contributions. The first contribution is image-based geo-localization to estimate the origin of an image: CNNs and geotagged images are used to create a model that determines the location of an image from its pixel values. The second contribution enables the analysis of fine-grained concepts to distinguish sub-categories within a generic concept; the proposed method encompasses data acquisition, data cleaning, and concept hierarchies. The third contribution is the recognition of person attributes (e.g., glasses or moustache) to enable querying for a person by textual description; the person-attribute problem is treated as a specific sub-task of concept classification. The fourth contribution is an intuitive image-annotation tool based on active learning, which allows users to define novel concepts flexibly and train CNNs with minimal annotation effort. The fifth contribution increases the flexibility for LEAs in query definition by using query expansion, which maps user queries to known and detectable concepts, so no prior knowledge of the detectable concepts is required from the users.
The methods are validated on data with varying locations (popular and non-touristic locations), varying person attributes (CelebA dataset), and varying numbers of annotations.
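The active-learning annotation workflow in the abstract can be sketched as pool-based uncertainty sampling: repeatedly ask the user to label the sample the current model is least sure about. Everything below is a toy illustration, not the paper's implementation; a trivial 1-D threshold "classifier" stands in for the CNN, and `oracle` plays the role of the human annotator.

```python
# Toy pool-based active learning with uncertainty sampling.
# The "model" is a single decision threshold between the class means;
# in the paper this role is played by a CNN concept classifier.

def train(labeled):
    """Fit a threshold: the midpoint between the means of the two classes."""
    pos = [x for x, y in labeled if y == 1]
    neg = [x for x, y in labeled if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def uncertainty(x, threshold):
    """Distance to the decision boundary; smaller means more uncertain."""
    return abs(x - threshold)

def active_learning(pool, oracle, seed, rounds=3):
    """Iteratively query the oracle (the user) for the most uncertain sample."""
    labeled = list(seed)
    unlabeled = [x for x in pool if x not in [p for p, _ in labeled]]
    for _ in range(rounds):
        threshold = train(labeled)
        # Select the sample closest to the decision boundary ...
        x = min(unlabeled, key=lambda s: uncertainty(s, threshold))
        unlabeled.remove(x)
        # ... and ask the user to annotate it.
        labeled.append((x, oracle(x)))
    return train(labeled)
```

With only a handful of such targeted annotations, the threshold converges toward the oracle's true boundary, which is the point of minimal-annotation-effort training.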
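Query expansion as described in the abstract can be sketched as nearest-neighbor search in an embedding space: the user's free-text query is matched against the system's detectable concepts, so the user never needs to know the detector vocabulary. The concept names and vectors below are made-up toy values; a real system would use pretrained word embeddings (e.g., word2vec or GloVe) over the actual set of trained detectors.

```python
import math

# Hypothetical embeddings for a few detectable concepts (toy 3-D vectors).
CONCEPT_EMBEDDINGS = {
    "handgun":   [0.9, 0.1, 0.0],
    "rifle":     [0.8, 0.2, 0.1],
    "backpack":  [0.1, 0.9, 0.2],
    "moustache": [0.0, 0.2, 0.9],
}

# Hypothetical embeddings for user queries that are NOT detector names.
QUERY_EMBEDDINGS = {
    "weapon":      [0.85, 0.15, 0.05],
    "facial hair": [0.05, 0.15, 0.85],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def expand_query(query, top_k=2):
    """Map a user query to the top_k most similar detectable concepts."""
    q = QUERY_EMBEDDINGS[query]
    ranked = sorted(CONCEPT_EMBEDDINGS,
                    key=lambda c: cosine(q, CONCEPT_EMBEDDINGS[c]),
                    reverse=True)
    return ranked[:top_k]

print(expand_query("weapon"))  # -> ['handgun', 'rifle']
```

The search for "weapon" is answered by the handgun and rifle detectors even though no "weapon" detector exists, which is exactly the flexibility the abstract attributes to query expansion.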

