International Conference on Automated Planning and Scheduling (ICAPS 2006), 2006

Combining Stochastic Task Models with Reinforcement Learning for Dynamic Scheduling



Abstract

We view dynamic scheduling as a sequential decision problem. First, we introduce a generalized planning operator, the stochastic task model (STM), which predicts the effects of executing a particular task on state, time, and reward using a general procedural format (a pure stochastic function). Second, we show that effective planning under uncertainty can be achieved by combining adaptive-horizon stochastic planning with reinforcement learning (RL) in a hybrid system. The benefits of the hybrid approach are evaluated on a repeatable job-shop scheduling task.
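To make the STM idea concrete, here is a minimal sketch in Python of a task model as a pure stochastic function mapping a state (plus a random source) to a successor state, an elapsed duration, and a reward, together with a fixed-horizon Monte-Carlo rollout that a planner could use to score a task. The `drill` task, its state layout, and all distributions below are illustrative assumptions, not details from the paper.

```python
import random
from typing import Callable, Tuple

# Hypothetical state for a toy job-shop: (jobs_remaining, clock).
State = Tuple[int, float]

def drill(state: State, rng: random.Random) -> Tuple[State, float, float]:
    """Illustrative STM: stochastic duration and occasional failed operation."""
    jobs, clock = state
    duration = max(0.5, rng.gauss(3.0, 1.0))       # stochastic processing time
    reward = 10.0 if rng.random() < 0.9 else 0.0   # assumed 10% failure rate
    return (jobs - 1, clock + duration), duration, reward

def rollout_value(stm: Callable[[State, random.Random], Tuple[State, float, float]],
                  state: State, horizon: int, rng: random.Random) -> float:
    """Monte-Carlo estimate of cumulative reward over a fixed planning horizon."""
    total = 0.0
    for _ in range(horizon):
        if state[0] == 0:      # no jobs left to schedule
            break
        state, _, r = stm(state, rng)
        total += r
    return total

# Average many sampled rollouts to estimate the value of repeatedly
# scheduling this task from the initial state (5 jobs, clock at 0).
rng = random.Random(0)
est = sum(rollout_value(drill, (5, 0.0), 5, rng) for _ in range(200)) / 200
```

Because the STM is an opaque procedure rather than a declarative operator, a planner can only query it by sampling, which is why rollout-style evaluation (and, in the paper's hybrid, RL value estimates at the horizon) fits this model format naturally.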


