Journal: Progress in Nuclear Energy

Scaling deep learning for whole-core reactor simulation



Abstract

A deep learning architecture for predicting the normalized pin powers within 2D pressurized water reactors, called LatticeNet, has been developed and shown to be performant for a variety of relevant conditions within a single 2D reflective assembly. However, many neutronics scenarios of interest involve regions composed of multiple assemblies, up to and including full-core scenarios. It is not immediately obvious that scaling LatticeNet up to these full-core scenarios will achieve the same performance as seen in single-assembly scenarios, due to the problem-tailored nature of neural networks. It is also simple to show that the original implementation of LatticeNet does not easily scale up to multi-assembly regions due to the enormous compute demands of the original proposed architecture. In this work, we address these issues by first proposing several variants of LatticeNet which address the issue of scaling compute needs, and show the theoretical performance benefits gained from these architectures. We then evaluate the actual benefit of the proposed variants on multi-assembly regions containing roughly the same variation outlined in the original paper proposing LatticeNet. We show that the proposed architecture changes do not result in significantly increased error, and that these changes result in much more manageable training times relative to the original LatticeNet architecture.
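The scaling problem the abstract describes can be illustrated with a rough parameter-count comparison. The sketch below is hypothetical and does not reproduce the actual LatticeNet architecture: it simply contrasts a fully connected layer over all pins in a region, whose weight count grows quadratically with the number of assemblies, against a shared convolutional kernel, whose weight count is independent of core size. The hidden width, channel count, and the 193-assembly full core are illustrative assumptions.

```python
# Illustrative parameter-count comparison (hypothetical dimensions, not the
# paper's actual LatticeNet): dense layers over a flattened multi-assembly
# pin map scale quadratically with core size; shared conv kernels do not.

PINS_PER_ASSEMBLY = 17 * 17  # standard 17x17 PWR fuel lattice

def dense_params(n_assemblies: int, hidden: int = 1024) -> int:
    """Weights in a dense hidden layer mapping all pins in and out (biases omitted)."""
    n_pins = n_assemblies * PINS_PER_ASSEMBLY
    return n_pins * hidden + hidden * n_pins  # input and output projections

def conv_params(kernel: int = 3, channels: int = 64) -> int:
    """Weights in one shared conv layer; independent of region size."""
    return kernel * kernel * channels * channels

if __name__ == "__main__":
    # single assembly, a 2x2 multi-assembly region, and a typical full core
    for n in (1, 4, 193):
        print(f"{n:>3} assemblies: dense={dense_params(n):>12,}  conv={conv_params():,}")
```

Running this shows the dense weight count growing from roughly 0.6M at one assembly to over 100M at full core, while the convolutional count stays fixed, which is the intuition behind replacing region-wide dense layers with weight-sharing variants.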
