The Journal of Artificial Intelligence Research

Planning with Durative Actions in Stochastic Domains


Abstract

Probabilistic planning problems are typically modeled as a Markov Decision Process (MDP). MDPs, while an otherwise expressive model, allow only for sequential, non-durative actions. This poses severe restrictions in modeling and solving a real-world planning problem. We extend the MDP model to incorporate: 1) simultaneous action execution, 2) durative actions, and 3) stochastic durations. We develop several algorithms to combat the computational explosion introduced by these features. The key theoretical ideas used in building these algorithms are: modeling a complex problem as an MDP in extended state/action space, pruning of irrelevant actions, sampling of relevant actions, using informed heuristics to guide the search, hybridizing different planners to achieve benefits of both, approximating the problem, and replanning. Our empirical evaluation illuminates the different merits in using various algorithms, viz., optimality, empirical closeness to optimality, theoretical error bounds, and speed.
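The abstract's central device, planning over an extended state/action space in which a state also records which durative actions are still running, can be made concrete with a small illustration. The sketch below is a toy under stated assumptions: the `ExtendedState` class, the `mutex` conflict test, and the rover-style action names are invented for illustration and are not the paper's formulation or API. It shows one way to pair a world state with the set of executing actions and to prune conflicting action combinations before they blow up the joint action space.

```python
from dataclasses import dataclass
from itertools import combinations
from typing import FrozenSet, Tuple

# Toy illustration (not the paper's formulation): an "extended" state pairs the
# ordinary world state with the set of actions still executing, each tagged
# with its remaining duration.
@dataclass(frozen=True)
class ExtendedState:
    world: Tuple[str, ...]                 # ordinary MDP state, e.g. fluents that hold
    executing: FrozenSet[Tuple[str, int]]  # (action name, remaining duration) pairs

def startable_combinations(state, candidates, mutex):
    """Enumerate sets of actions that may be started together in `state`.

    `mutex(a, b)` is a hypothetical pairwise conflict test; pruning combinations
    that contain a conflicting pair (or that conflict with an executing action)
    is one simple way to fight the exponential blow-up of the joint action space.
    """
    busy = {name for name, _ in state.executing}
    free = [a for a in candidates
            if a not in busy and all(not mutex(a, e) for e, _ in state.executing)]
    combos = []
    for r in range(1, len(free) + 1):
        for combo in combinations(free, r):
            if all(not mutex(a, b) for a, b in combinations(combo, 2)):
                combos.append(frozenset(combo))
    return combos

# Toy usage: actions conflict when they share a resource prefix ("camera:...").
def mutex(a, b):
    return a.split(":")[0] == b.split(":")[0]

s0 = ExtendedState(world=("at-base",),
                   executing=frozenset({("camera:warmup", 2)}))
print(startable_combinations(s0, ["drill:sample", "arm:stow", "camera:shoot"], mutex))
# -> the singletons {drill:sample}, {arm:stow} and their pair; camera:shoot is
#    pruned because the camera is already busy warming up.
```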
