Sensors (Basel, Switzerland)

JsrNet: A Joint Sampling–Reconstruction Framework for Distributed Compressive Video Sensing



Abstract

The huge volume of video data has posed great challenges to computing power and storage space, triggering the emergence of distributed compressive video sensing (DCVS). The hardware-friendly characteristics of this technique have consolidated its position as one of the most powerful architectures in resource-limited scenarios, namely wireless video sensor networks (WVSNs). Recently, deep convolutional neural networks (DCNNs) have been successfully applied to DCVS because traditional optimization-based methods are computationally expensive and struggle to meet the requirements of real-time applications. In this paper, we propose a joint sampling–reconstruction framework for DCVS, named “JsrNet”. JsrNet utilizes the whole group of frames as the reference to reconstruct each frame, regardless of whether it is a key frame or a non-key frame, whereas existing frameworks only utilize key frames as the reference to reconstruct non-key frames. Moreover, unlike existing frameworks, which only exploit complementary information between frames in joint reconstruction, JsrNet also applies this concept to joint sampling by adopting learnable convolutions to sample multiple frames jointly and simultaneously in an encoder. JsrNet fully exploits spatial–temporal correlation in both sampling and reconstruction, and achieves competitive performance in both reconstruction quality and computational complexity, making it a promising candidate for resource-limited, real-time scenarios.
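To make the joint-sampling idea concrete: a learnable convolution whose stride equals the block size is equivalent to applying a shared measurement matrix to co-located blocks stacked across all frames in a group. The following is a minimal NumPy sketch under that interpretation, not the authors' implementation; the names `joint_sample` and `phi` are assumptions, and in JsrNet `phi` would correspond to learned convolution weights rather than a fixed random matrix.

```python
import numpy as np

def joint_sample(frames, phi, block=8):
    """Jointly sample a group of frames with a shared measurement matrix.

    frames: (T, H, W) group of T frames, H and W divisible by `block`
    phi:    (M, T*block*block) measurement matrix (learnable in JsrNet)
    returns (num_blocks, M) compressive measurements, one row per
    spatial block, each row mixing all T frames at that location
    """
    T, H, W = frames.shape
    # Split every frame into non-overlapping block x block patches,
    # then stack the T temporally co-located patches into one vector.
    patches = frames.reshape(T, H // block, block, W // block, block)
    patches = patches.transpose(1, 3, 0, 2, 4).reshape(-1, T * block * block)
    # One matrix multiply per block: the same operator samples every
    # spatial location, exactly what a strided convolution computes.
    return patches @ phi.T

# Usage sketch: 4 frames of 16x16 pixels, 32 measurements per block
rng = np.random.default_rng(0)
frames = rng.standard_normal((4, 16, 16))
phi = rng.standard_normal((32, 4 * 8 * 8))
y = joint_sample(frames, phi, block=8)   # y.shape == (4, 32)
```

Because the measurement of each block mixes all T frames, the encoder captures temporal correlation at sampling time, which is the property the abstract contrasts against frameworks that sample each frame independently.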
