Annual International Conference of the IEEE Engineering in Medicine and Biology Society

Optimizing microstimulation using a reinforcement learning framework



Abstract

The ability to provide sensory feedback is desired to enhance the functionality of neuroprosthetics. Somatosensory feedback provides closed-loop control to the motor system, which is lacking in feedforward neuroprosthetics. In the case of existing somatosensory function, the natural response can serve as a template for the desired response elicited by electrical microstimulation. In the case of no initial training data, microstimulation parameters that produce responses close to the template must be selected in an online manner. We propose using reinforcement learning as a framework to balance exploration of the parameter space against the continued selection of promising parameters for further stimulation. This approach avoids an explicit model of the neural response to stimulation. We explore a preliminary architecture, treating the task as a k-armed bandit, using offline data recorded for natural touch and thalamic microstimulation, and we examine the method's efficiency in exploring the parameter space while concentrating on promising parameter forms. The best-matching stimulation parameters, out of k = 68 different forms, are consistently selected by the reinforcement learning algorithm after 334 realizations.
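To make the k-armed bandit framing concrete, the sketch below shows one way such an online selection loop could look. The epsilon-greedy rule, the response dimensionality, and the simulated responses standing in for the offline natural-touch and thalamic microstimulation recordings are all illustrative assumptions, not details taken from the paper; the reward for an arm is simply the negative distance between the response it elicits and the natural-response template.

```python
import numpy as np

# Hypothetical sketch of a k-armed bandit selecting microstimulation
# parameter forms. The reward for an arm is the negative distance between
# the response it elicits and the natural-touch template. The epsilon-greedy
# rule and the simulated responses below are illustrative assumptions,
# not the paper's exact setup.

rng = np.random.default_rng(0)

K = 68                # number of candidate stimulation parameter forms
T = 400               # number of stimulation realizations
EPSILON = 0.1         # exploration rate (assumed)
RESPONSE_DIM = 50     # length of a binned response vector (assumed)

# Template of the desired (natural-touch) response; in the paper this comes
# from recorded data, here it is a random stand-in.
template = rng.normal(size=RESPONSE_DIM)

# Stand-in for the offline recordings: each arm has a mean response, and each
# stimulation yields that mean plus noise. One arm is made a near-match.
arm_means = rng.normal(size=(K, RESPONSE_DIM))
arm_means[17] = template + rng.normal(scale=0.1, size=RESPONSE_DIM)

def stimulate(arm):
    """Simulate the neural response elicited by stimulating with `arm`."""
    return arm_means[arm] + rng.normal(scale=0.5, size=RESPONSE_DIM)

# Incremental value estimates and pull counts for each arm.
q_values = np.zeros(K)
counts = np.zeros(K, dtype=int)

for t in range(T):
    # Epsilon-greedy: explore the parameter space with probability EPSILON,
    # otherwise keep selecting the currently most promising parameter form.
    if rng.random() < EPSILON or counts.sum() == 0:
        arm = int(rng.integers(K))
    else:
        arm = int(np.argmax(q_values))

    response = stimulate(arm)
    reward = -np.linalg.norm(response - template)  # closer to template = higher reward

    counts[arm] += 1
    q_values[arm] += (reward - q_values[arm]) / counts[arm]  # incremental mean update

best = int(np.argmax(q_values))
print(f"Best-matching parameter form after {T} realizations: arm {best}")
```

Under this setup, arms whose elicited responses resemble the template accumulate higher value estimates, so the loop concentrates stimulation on promising parameter forms while the exploration term keeps sampling the remaining forms.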
