IEEE Midwest Industry Conference

Deployment of SE-SqueezeNext on NXP BlueBox 2.0 and NXP i.MX RT1060 MCU


Abstract

Convolutional neural networks (CNNs) are widely used in autonomous driving and advanced driver assistance systems (ADAS), where they have achieved remarkable success. Before CNNs, ADAS relied on conventional machine-learning algorithms. Considerable research has since gone into compact DNN architectures such as MobileNet, SqueezeNext, and SqueezeNet, which refine CNN designs and make them more suitable for real-time embedded systems. Because of their size and complexity, many models cannot be deployed directly on real-time systems; the key requirement is a small model size without a trade-off in accuracy. Squeeze-and-Excitation SqueezeNext, an efficient DNN with a best model accuracy of 92.60% and a smallest model size of 0.595 MB, is therefore chosen for deployment on the NXP BlueBox 2.0 and the NXP i.MX RT1060. The deployment succeeds because of the model's small size and high accuracy. The model is trained and validated on the CIFAR-10 dataset.
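The squeeze-and-excitation (SE) mechanism named in the title recalibrates channel responses: a global average pool "squeezes" each channel to a scalar, two small fully connected layers with a reduction ratio produce per-channel gates, and the feature map is rescaled by those gates. Below is a minimal NumPy sketch of that mechanism only; the weights `w1`/`w2`, the reduction ratio of 4, and the tensor sizes are illustrative assumptions, not values from the paper's SE-SqueezeNext model.

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-Excitation applied to a (C, H, W) feature map.

    w1: (C//r, C) reduction weights; w2: (C, C//r) expansion weights.
    """
    s = x.mean(axis=(1, 2))                    # squeeze: global average pool -> (C,)
    z = np.maximum(w1 @ s, 0.0)                # excitation: FC + ReLU -> (C//r,)
    gates = 1.0 / (1.0 + np.exp(-(w2 @ z)))    # FC + sigmoid -> (C,) gates in (0, 1)
    return x * gates[:, None, None]            # rescale each channel by its gate

# Toy example: C=8 channels, reduction ratio r=4 (hypothetical sizes).
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
w1 = 0.1 * rng.standard_normal((2, 8))         # 8 -> 8 // 4 = 2
w2 = 0.1 * rng.standard_normal((8, 2))         # 2 -> 8
y = se_block(x, w1, w2)
```

Because the sigmoid gates lie in (0, 1), the block can only attenuate channels, never amplify them, which is what makes it a cheap channel-attention add-on for a small backbone like SqueezeNext.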
