Original post: https://deepmind/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii
SL = supervised learning, RL = reinforcement learning
- how AlphaStar is trained
units, properties -> DNN -> instructions
DNN: transformer torso (relational deep RL), deep LSTM core, auto-regressive policy head with pointer network, centralised value baseline
train: SL (imitating human games) -> micro/macro strategies
compete -> hyperparameters updated by RL -> Nash distribution -> final agent
multi-agent RL: agents play against each other: population-based, multi-agent RL -> huge strategic space -> must defeat both the strongest and earlier versions
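A minimal numpy sketch of the pointer-network idea in the policy head (shapes and names here are illustrative assumptions, not the actual AlphaStar architecture): the policy scores each unit embedding against a query vector from the recurrent core and "points" at one unit as the action target, which handles a variable-length unit list naturally.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def pointer_select(query, unit_embeddings):
    """Score every unit against the query and return a
    probability distribution over units (the 'pointer')."""
    scores = unit_embeddings @ query          # (num_units,)
    return softmax(scores)

rng = np.random.default_rng(0)
units = rng.normal(size=(5, 8))               # 5 units, 8-dim embeddings (toy)
query = rng.normal(size=8)                    # e.g. output of the LSTM core
probs = pointer_select(query, units)
target = int(np.argmax(probs))                # index of the pointed-at unit
```

Because the pointer is a distribution over the input units themselves, the same head works for 5 units or 50 without changing any weights.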
explore new build orders, unit compositions, micro-management plans
personal objective: beat specific competitor/beat distribution of competitors/building more of specific unit
NN weights: off-policy actor-critic RL with experience replay, self-imitation learning, policy distillation
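A hedged sketch of the off-policy correction behind actor-critic with experience replay (a plain truncated importance ratio; AlphaStar's actual algorithm differs in detail): replayed actions came from an older behaviour policy, so their gradient contribution is reweighted by the probability ratio between the current and behaviour policies, clipped to bound variance.

```python
def off_policy_pg_weight(pi_target, pi_behaviour, clip=1.0):
    """Truncated importance ratio used to reweight an action
    sampled from an older behaviour policy during replay."""
    rho = pi_target / pi_behaviour
    return min(rho, clip)

# action stored in replay: behaviour policy gave it prob 0.2,
# the current policy gives it prob 0.5 -> ratio 2.5, clipped to 1.0
w = off_policy_pg_weight(0.5, 0.2)
```

Clipping trades a little bias for much lower variance when the two policies have drifted apart.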
run on TPUs, final agent: Nash distribution of the league: best mixture of strategies
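The "Nash distribution of the league" is a mixture over agents that no single league member can beat in expectation. A toy sketch of computing such a mixture by fictitious play on a zero-sum payoff matrix (rock-paper-scissors stands in for the league's pairwise win rates; this is an illustration, not AlphaStar's actual solver):

```python
import numpy as np

def fictitious_play(payoff, iters=20000):
    """Approximate the Nash mixture of a zero-sum game: each side
    repeatedly best-responds to the other's empirical play, and the
    empirical frequencies converge to the Nash distribution."""
    n, m = payoff.shape
    counts_row = np.zeros(n)
    counts_col = np.zeros(m)
    row, col = 0, 0
    for _ in range(iters):
        counts_row[row] += 1
        counts_col[col] += 1
        # row maximises payoff vs. the column player's empirical mixture
        row = int(np.argmax(payoff @ (counts_col / counts_col.sum())))
        # column minimises the row player's payoff
        col = int(np.argmin((counts_row / counts_row.sum()) @ payoff))
    return counts_row / counts_row.sum()

# rock-paper-scissors: the unique Nash mixture is uniform (1/3 each)
rps = np.array([[0, -1, 1],
                [1, 0, -1],
                [-1, 1, 0]])
mix = fictitious_play(rps)
```

In the league setting, `payoff[i, j]` would be agent i's win rate against agent j, and the resulting mixture is the "best mixture of strategies" the notes refer to.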
- how AlphaStar plays and how to evaluate
TLO/MaNa: ~100 APM
earlier agents/bots: ~1,000-10,000 APM
AlphaStar vs. TLO/MaNa: ~280 APM (plays through the raw interface rather than reading screen frames)
AlphaStar observation-to-action delay: ~350 ms on average; processes every frame
results: 5:0
Reposted from: https://wwwblogs/yaoyaohust/p/10815039.html