Reinforcement Learning (3): Time to Play Some DOOM with the PARL Framework!!! (Part 2 of 2)

Contents

  • Reinforcement Learning (3): Time to Play Some DOOM with the PARL Framework!!! (Part 2 of 2)
    • Training Code
    • Results
    • Things That Still Need Improvement

Phew, we have finally reached the code.

Without further ado, here it is.

Training Code

Importing the libraries and supporting modules

# -*- coding:utf-8 -*-
from vizdoom import *
import random
import copy
import numpy as np
import parl
from parl import layers
import paddle.fluid as fluid
from parl.utils import logger
import collections

Setting up the environment: scenario path and game parameters

# initialize our doomgame environment
game = DoomGame()

# path to the scenario file
# game.set_doom_scenario_path(DOOM_PATH)
game.load_config("/usr/local/lib/python3.6/dist-packages/vizdoom/scenarios/basic.cfg")

# path to the map file
# game.set_doom_map(MAP_PATH)

# set the screen resolution and the screen format
game.set_screen_resolution(ScreenResolution.RES_256X160)
game.set_screen_format(ScreenFormat.GRAY8)

# toggle the desired particles and effects with simple True/False flags
game.set_render_hud(False)
game.set_render_minimal_hud(False)
game.set_render_crosshair(True)
game.set_render_weapon(True)
game.set_render_decals(True)
game.set_render_particles(True)
game.set_render_effects_sprites(True)
game.set_render_messages(True)
game.set_render_corpses(True)
game.set_render_screen_flashes(True)

# buttons available to the agent
game.add_available_button(Button.MOVE_LEFT)
game.add_available_button(Button.MOVE_RIGHT)
game.add_available_button(Button.ATTACK)

# initialize the action array: one one-hot vector per available button
actions = np.zeros((game.get_available_buttons_size(), game.get_available_buttons_size()))
count = 0
for i in actions:
    i[count] = 1
    count += 1
actions = actions.astype(int).tolist()

# add game variables: ammo, damage dealt and hit count
game.add_available_game_variable(GameVariable.AMMO0)
game.add_available_game_variable(GameVariable.DAMAGECOUNT)
game.add_available_game_variable(GameVariable.HITCOUNT)

# set episode_timeout to terminate the episode after a number of time steps;
# also set episode_start_time, which is useful for skipping the initial events
game.set_episode_timeout(6 * 200)
game.set_episode_start_time(10)
game.set_sound_enabled(False)

# living reward: a -10 penalty per tic, which pushes the agent to finish quickly
game.set_living_reward(-10)

# Doom offers several modes: PLAYER, SPECTATOR, ASYNC_PLAYER and ASYNC_SPECTATOR.
# In PLAYER mode the agent actually plays the game, so PLAYER mode is used here.
game.set_mode(Mode.PLAYER)

# initialize the game environment
game.init()
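Since three buttons are registered, the loop above simply builds one one-hot action per button. The same list could be produced with np.eye; the snippet below is only a cross-check, not part of the original script:

# equivalent to the loop above, shown for illustration only
actions_check = np.eye(game.get_available_buttons_size(), dtype=int).tolist()
print(actions_check)  # [[1, 0, 0], [0, 1, 0], [0, 0, 1]] for MOVE_LEFT, MOVE_RIGHT, ATTACK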

Building the model: basically a CNN plus a fake "RNN" (sort of)

# Model
class Model(parl.Model):
    def __init__(self, num_actions):
        # define the hyperparameters of the CNN
        # filter size
        self.filter_size = 5
        # number of filters
        self.num_filters = [16, 32, 64]
        # stride size
        self.stride = 2
        # pool size
        self.poolsize = 2
        self.vocab_size = 4000
        self.emb_dim = 256
        # drop out probability
        self.dropout_probability = [0.3, 0.2]
        self.act_dim = num_actions

    def value(self, obs):
        # first convolutional layer
        self.conv1 = fluid.layers.conv2d(obs,
                                         num_filters=self.num_filters[0],
                                         filter_size=self.filter_size,
                                         stride=self.stride,
                                         act='relu')
        self.pool1 = fluid.layers.pool2d(self.conv1,
                                         pool_size=self.poolsize,
                                         pool_type="max",
                                         pool_stride=self.stride)
        # second convolutional layer (a transposed convolution, as written)
        self.conv2 = fluid.layers.conv2d_transpose(self.pool1,
                                                   num_filters=self.num_filters[1],
                                                   filter_size=self.filter_size,
                                                   stride=self.stride,
                                                   act='relu')
        self.pool2 = fluid.layers.pool2d(self.conv2,
                                         pool_size=self.poolsize,
                                         pool_type="max",
                                         pool_stride=self.stride)
        # third convolutional layer
        self.conv3 = fluid.layers.conv2d(self.pool2,
                                         num_filters=self.num_filters[1],
                                         filter_size=self.filter_size,
                                         stride=self.stride,
                                         act='relu')
        self.pool3 = fluid.layers.pool2d(self.conv3,
                                         pool_size=self.poolsize,
                                         pool_type="max",
                                         pool_stride=self.stride)
        # add dropout and reshape the input
        self.fc1 = fluid.layers.fc(self.pool3, size=512, act="relu")
        self.drop1 = fluid.layers.dropout(self.fc1, dropout_prob=self.dropout_probability[0])
        self.fc2 = fluid.layers.fc(self.drop1, size=512, act="relu")
        # build RNN (fake)
        # self.input = fluid.layers.concat(input=[self.fc2, self.c], axis=0)
        self.tanh = fluid.layers.tanh(self.fc2)
        self.o = fluid.layers.softmax(self.tanh)
        self.drop2 = fluid.layers.dropout(self.o, dropout_prob=self.dropout_probability[1])
        self.prediction = fluid.layers.fc(self.drop2, size=self.act_dim)
        return self.prediction
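For reference on shapes (my reading of the code, not something stated explicitly in the original): the value network is built for NCHW input, and the observation fed in later is a stack of three 256x160 grayscale frames, so a single input tensor looks like this:

# illustrative only: the shape a single observation has by the time it reaches Model.value
dummy_obs = np.zeros((1, 3, 160, 256), dtype='float32')  # (N, C, H, W), matching obs_shape = [-1, 3, 160, 256]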

The DQN algorithm

# Algorithm
class DQN(parl.Algorithm):
    def __init__(self, model, act_dim=None, gamma=None, lr=None):
        """
        Args:
            model (parl.Model): forward network that defines the Q function
            act_dim (int): dimension of the action space, i.e. the number of actions
            gamma (float): discount factor for the reward
            lr (float): learning rate
        """
        self.model = model
        self.target_model = copy.deepcopy(model)

        assert isinstance(act_dim, int)
        assert isinstance(gamma, float)
        assert isinstance(lr, float)
        self.act_dim = act_dim
        self.gamma = gamma
        self.lr = lr

    def predict(self, obs):
        """Use self.model's value network to get [Q(s,a1), Q(s,a2), ...]"""
        return self.model.value(obs)

    def learn(self, obs, action, reward, next_obs, terminal):
        """Update self.model's value network with the DQN algorithm"""
        # get max Q' from target_model, used to compute target_Q
        next_pred_value = self.target_model.value(next_obs)
        best_v = layers.reduce_max(next_pred_value, dim=1)
        best_v.stop_gradient = True  # block gradient propagation
        terminal = layers.cast(terminal, dtype='float32')
        target = reward + (1.0 - terminal) * self.gamma * best_v

        pred_value = self.model.value(obs)  # predicted Q values
        # convert the action to a one-hot vector, e.g. 3 => [0,0,0,1,0]
        action_onehot = layers.one_hot(action, self.act_dim)
        action_onehot = layers.cast(action_onehot, dtype='float32')
        # the element-wise multiplication below picks out Q(s,a) for the chosen action
        # e.g. pred_value = [[2.3, 5.7, 1.2, 3.9, 1.4]], action_onehot = [[0,0,0,1,0]]
        #   ==> pred_action_value = [[3.9]]
        pred_action_value = layers.reduce_sum(
            layers.elementwise_mul(action_onehot, pred_value), dim=1)

        # the mean squared error between Q(s,a) and target_Q gives the loss
        cost = layers.square_error_cost(pred_action_value, target)
        cost = layers.reduce_mean(cost)
        optimizer = fluid.optimizer.Adam(learning_rate=self.lr)  # Adam optimizer
        optimizer.minimize(cost)
        return cost

    def sync_target(self):
        """Copy the parameters of self.model to self.target_model"""
        self.model.sync_weights_to(self.target_model)
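The core of learn() is the one-step temporal-difference target: for a non-terminal transition it is reward + GAMMA * max_a' Q_target(next_obs, a'), and the (1.0 - terminal) factor simply drops the bootstrapped term once the episode has ended. The loss is then the mean squared error between this target and the Q value the online network assigns to the action that was actually taken.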

Defining the agent and the experience replay buffer

# Agent
class Agent(parl.Agent):
    def __init__(self,
                 algorithm,
                 obs_dim,
                 act_dim,
                 e_greed=0.1,
                 e_greed_decrement=0):
        assert isinstance(obs_dim, list)
        assert isinstance(act_dim, int)
        self.obs_dim = obs_dim
        self.act_dim = act_dim
        super(Agent, self).__init__(algorithm)

        self.global_step = 0
        self.update_target_steps = 200  # copy the model parameters to target_model every 200 training steps

        self.e_greed = e_greed  # probability of picking a random action (exploration)
        self.e_greed_decrement = e_greed_decrement  # gradually reduce exploration as training converges

    def build_program(self):
        self.pred_program = fluid.Program()
        self.learn_program = fluid.Program()

        with fluid.program_guard(self.pred_program):  # graph for predicting actions; define inputs and outputs
            obs = layers.data(name='obs', shape=self.obs_dim, dtype='float32')
            self.value = self.alg.predict(obs)

        with fluid.program_guard(self.learn_program):  # graph for updating the Q network; define inputs and outputs
            obs = layers.data(name='obs', shape=self.obs_dim, dtype='float32')
            action = layers.data(name='act', shape=[1], dtype='int32')
            reward = layers.data(name='reward', shape=[], dtype='float32')
            next_obs = layers.data(name='next_obs', shape=self.obs_dim, dtype='float32')
            terminal = layers.data(name='terminal', shape=[], dtype='bool')
            self.cost = self.alg.learn(obs, action, reward, next_obs, terminal)

    def sample(self, obs):
        sample = np.random.rand()  # random float in [0, 1)
        if sample < self.e_greed:
            act = np.random.randint(self.act_dim)  # explore: every action has a chance to be picked
        else:
            act = self.predict(obs)  # exploit: pick the best action
        self.e_greed = max(0.01, self.e_greed - self.e_greed_decrement)  # gradually reduce exploration as training converges
        return act

    def predict(self, obs):  # pick the best action
        obs = np.array(obs)
        if obs.shape == (3, 160, 256):    # already CHW
            pass
        elif obs.shape == (256, 3, 160):
            obs = obs.transpose(1, 2, 0)  # to CHW
        elif obs.shape == (160, 256, 3):
            obs = obs.transpose(2, 0, 1)  # to CHW
        obs = np.expand_dims(obs, axis=0)
        pred_Q = self.fluid_executor.run(
            self.pred_program,
            feed={'obs': obs.astype('float32')},
            fetch_list=[self.value])[0]
        pred_Q = np.squeeze(pred_Q, axis=0)
        act = np.argmax(pred_Q)  # pick the index (action) with the largest Q value
        return act

    def learn(self, obs, act, reward, next_obs, terminal):
        # sync the parameters of model and target_model every 200 training steps
        if self.global_step % self.update_target_steps == 0:
            self.alg.sync_target()
        self.global_step += 1

        obs = np.array(obs)
        if obs.shape == (3, 160, 256):    # already CHW
            pass
        elif obs.shape == (256, 3, 160):
            obs = obs.transpose(1, 2, 0)  # to CHW
        elif obs.shape == (160, 256, 3):
            obs = obs.transpose(2, 0, 1)  # to CHW
        obs = np.expand_dims(obs, axis=0)

        next_obs = np.array(next_obs)
        if next_obs.shape == (3, 160, 256):    # already CHW
            pass
        elif next_obs.shape == (256, 3, 160):
            next_obs = next_obs.transpose(1, 2, 0)  # to CHW
        elif next_obs.shape == (160, 256, 3):
            next_obs = next_obs.transpose(2, 0, 1)  # to CHW
        next_obs = np.expand_dims(next_obs, axis=0)

        act = np.expand_dims(act, -1)
        feed = {
            'obs': obs.astype('float32'),
            'act': act.astype('int32'),
            'reward': reward,
            'next_obs': next_obs.astype('float32'),
            'terminal': terminal
        }
        cost = self.fluid_executor.run(
            self.learn_program, feed=feed, fetch_list=[self.cost])[0]  # one training step
        return cost


class ReplayMemory(object):
    def __init__(self, max_size):
        self.buffer = collections.deque(maxlen=max_size)

    # add one experience to the replay buffer
    def append(self, exp):
        self.buffer.append(exp)

    # sample N experiences from the replay buffer
    def sample(self, batch_size):
        mini_batch = random.sample(self.buffer, batch_size)
        obs_batch, action_batch, reward_batch, next_obs_batch, done_batch = [], [], [], [], []

        for experience in mini_batch:
            s, a, r, s_p, done = experience
            obs_batch.append(s)
            action_batch.append(a)
            reward_batch.append(r)
            next_obs_batch.append(s_p)
            done_batch.append(done)

        # note: as written, only the first observation of the mini-batch is returned
        return np.array(obs_batch[0]).astype('float32'), \
            np.array(action_batch).astype('float32'), \
            np.array(reward_batch).astype('float32'), \
            np.array(next_obs_batch[0]).astype('float32'), \
            np.array(done_batch).astype('float32')

    def __len__(self):
        return len(self.buffer)
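A quick standalone check of the replay buffer (toy values, with a batch size of 2 just for this demo) makes the sampling behaviour visible, including the quirk that only the first observation of the mini-batch comes back:

rpm_demo = ReplayMemory(max_size=100)
for t in range(10):
    # (obs, action, reward, next_obs, done) with dummy frames
    rpm_demo.append((np.zeros((160, 256, 3)), t % 3, -10.0, np.zeros((160, 256, 3)), False))
obs_b, act_b, rew_b, next_b, done_b = rpm_demo.sample(2)
print(obs_b.shape, act_b.shape)  # (160, 256, 3) (2,)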

A little trick: three consecutive frames are stacked together before being fed into the CNN, so the model can pick up motion information.


def frame(state):
    n = True
    while True:
        if n:
            # first call: fill all three slots from the current state
            frame0 = state.screen_buffer
            state = game.get_state()
            frame1 = state.screen_buffer
            state = game.get_state()
            frame2 = state.screen_buffer
        else:
            # later calls: shift the window and append the newest frame
            # (refresh the state here; the original reused the stale one)
            new_state = game.get_state()
            if new_state is not None:  # get_state() returns None once the episode has finished
                state = new_state
            frame0 = frame1
            frame1 = frame2
            frame2 = state.screen_buffer
        obs = np.dstack([frame0, frame1, frame2])
        n = False
        yield obs
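To make the shape of what the generator yields concrete (illustrative only, mirroring how run_episode uses it below):

# illustrative only: consuming the generator at the start of an episode
game.new_episode()
frame_gen = frame(game.get_state())
obs = next(frame_gen)
print(obs.shape)  # (160, 256, 3): three stacked 256x160 GRAY8 frames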

Training and evaluation functions

def run_episode(game, agent, rpm):
    game.new_episode()
    logger.info('start new episode')
    state = game.get_state()
    frame_gen = frame(state)
    obs = next(frame_gen)
    step = 0
    while not game.is_episode_finished():
        state = game.get_state()
        step += 1
        action = agent.sample(obs)  # sample an action; every action has a chance to be tried
        reward = game.make_action(actions[action])
        next_obs = next(frame_gen)
        next_obs = np.array(next_obs)
        if next_obs.shape == (3, 160, 256):    # already CHW
            pass
        elif next_obs.shape == (256, 3, 160):
            next_obs = next_obs.transpose(1, 2, 0)  # to CHW
        elif next_obs.shape == (160, 256, 3):
            next_obs = next_obs.transpose(2, 0, 1)  # to CHW
        done = game.is_episode_finished()
        rpm.append((obs, action, reward, next_obs, done))

        # train model
        if (len(rpm) > MEMORY_WARMUP_SIZE) and (step % LEARN_FREQ == 0):
            (batch_obs, batch_action, batch_reward, batch_next_obs,
             batch_done) = rpm.sample(BATCH_SIZE)
            train_loss = agent.learn(batch_obs, batch_action, batch_reward,
                                     batch_next_obs, batch_done)  # s,a,r,s',done

        obs = next_obs
    total_reward = game.get_total_reward()
    print('training reward:' + str(total_reward))
    return total_reward


# evaluate the agent: run 5 episodes and average the total reward
def evaluate(game, agent, render=False):
    eval_reward = []
    for i in range(5):
        game.new_episode()
        episode_reward = 0
        while not game.is_episode_finished():
            state = game.get_state()
            frame_gen = frame(state)
            obs = next(frame_gen)
            action = agent.predict(obs)  # predict the action; always pick the best one
            reward = game.make_action(actions[action])
            done = game.is_episode_finished()
            misc = state.game_variables
            if render:
                game.set_window_visible(render)
        episode_reward = game.get_total_reward()
        print('evaluate reward:' + str(episode_reward))
        eval_reward.append(episode_reward)
    return np.mean(eval_reward)

The main training loop
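The constants referenced in run_episode and in the main loop below (LEARN_FREQ, MEMORY_SIZE, MEMORY_WARMUP_SIZE, BATCH_SIZE, GAMMA, LEARNING_RATE, max_episode) are not defined anywhere in these snippets. A minimal set of assumed values, in the spirit of typical PARL DQN examples rather than the author's actual settings, might look like this:

LEARN_FREQ = 5            # environment steps between training updates (assumed)
MEMORY_SIZE = 20000       # replay buffer capacity (assumed)
MEMORY_WARMUP_SIZE = 200  # transitions to collect before training starts (assumed)
BATCH_SIZE = 32           # mini-batch size sampled from the replay buffer (assumed)
GAMMA = 0.99              # reward discount factor (assumed)
LEARNING_RATE = 0.001     # Adam learning rate (assumed)
max_episode = 2000        # total number of training episodes (assumed)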

action_dim = game.get_available_buttons_size()
obs_shape = [-1, 3, 160, 256]
rpm = ReplayMemory(MEMORY_SIZE)

# build the agent with the PARL framework
model = Model(num_actions=action_dim)
algorithm = DQN(model, act_dim=action_dim, gamma=GAMMA, lr=LEARNING_RATE)
agent = Agent(algorithm,
              obs_dim=obs_shape,
              act_dim=action_dim,
              e_greed=0.15,  # probability of picking a random action (exploration)
              e_greed_decrement=1e-5)  # gradually reduce exploration as training converges
reward = RewardBuilder()  # reward-shaping helper; RewardBuilder is not defined in this snippet and `reward` is not used below

# load a saved model
# save_path = './DRQN_DOOM.ckpt'
# agent.restore(save_path)

# fill the replay buffer with some data first, so the samples are diverse enough when training starts
while len(rpm) < MEMORY_WARMUP_SIZE:
    run_episode(game, agent, rpm)

# start training
episode = 0
while episode < max_episode:  # train for max_episode episodes; the test part does not count towards episode
    # train part
    for i in range(0, 5):
        total_reward = run_episode(game, agent, rpm)
        episode += 1
        logger.info('episode:{}    e_greed:{}  '.format(episode, agent.e_greed))

    # test part
    eval_reward = evaluate(game, agent, render=False)  # set render=True to watch the game window
    logger.info('episode:{}    e_greed:{}   test_reward:{}'.format(
        episode, agent.e_greed, eval_reward))

# training finished, save the model
save_path = './DQN_DOOM.ckpt'
agent.save(save_path)
game.close()

Results


Things That Still Need Improvement

  1. The RNN is fake. If an LSTM or similar structure is added, the network throws errors when its parameters are copied to the target network.
  2. The reward design is too crude and is very likely extremely sparse, so convergence is a big problem (a possible shaping direction is sketched after this list).
  3. There was no GPU available: the model was trained on a CPU-only netbook, so the hyperparameters were set very conservatively.
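Purely as an illustration of item 2 (this is my sketch, not the RewardBuilder used in the training script), the game variables registered above (AMMO0, DAMAGECOUNT, HITCOUNT) could be used to densify the reward, for example by rewarding damage and hits and lightly penalizing wasted ammo:

# illustrative reward shaping only; not the RewardBuilder referenced in the training script
class SimpleRewardShaper(object):
    def __init__(self):
        self.last_vars = None

    def shape(self, game_variables, base_reward):
        # game_variables follows the order registered above: [AMMO0, DAMAGECOUNT, HITCOUNT]
        if self.last_vars is None:
            self.last_vars = list(game_variables)
            return base_reward
        d_ammo = game_variables[0] - self.last_vars[0]
        d_damage = game_variables[1] - self.last_vars[1]
        d_hits = game_variables[2] - self.last_vars[2]
        self.last_vars = list(game_variables)
        # reward damage dealt and hits, lightly penalize ammo spent
        return base_reward + 0.1 * d_damage + 5.0 * d_hits + 0.5 * min(d_ammo, 0)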

In the future I will publish more articles on machine learning models and algorithms as well as automatic control algorithms, and I will also talk about some of the small projects I have built. I hope you enjoy them.
