Hyperparameter settings are based on: https://github.com/ranjitation/DQN-for-LunarLander/blob/master/dqn_agent.py

I previously botched CartPole while following the Deeplizard tutorial, so I switched to another small OpenAI Gym game, LunarLander, and tried to implement DQN from scratch on my own.

The official documentation describes the environment as follows:

Landing pad is always at coordinates (0,0). Coordinates are the first two numbers in state vector. Reward for moving from the top of the screen to landing pad and zero speed is about 100..140 points. If lander moves away from landing pad it loses reward back. Episode finishes if the lander crashes or comes to rest, receiving additional -100 or +100 points. Each leg ground contact is +10. Firing main engine is -0.3 points each frame. Solved is 200 points. Landing outside landing pad is possible. Fuel is infinite, so an agent can learn to fly and then land on its first attempt. Four discrete actions available: do nothing, fire left orientation engine, fire main engine, fire right orientation engine.

The goal is to land the lander smoothly on the landing pad. Each state is an 8-dimensional vector: horizontal coordinate x, vertical coordinate y, horizontal velocity, vertical velocity, angle, angular velocity, whether leg 1 is in contact with the ground, and whether leg 2 is in contact with the ground. There are 4 discrete actions: do nothing, fire the left orientation engine, fire the main engine, and fire the right orientation engine.
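As a quick sanity check, both of these can be read directly off the environment. A minimal sketch, assuming the classic gym API used in the code below (env.reset() returns just the observation array and env.step() returns four values):

import gym

env = gym.make('LunarLander-v2')
print(env.observation_space.shape)  # (8,): x, y, vx, vy, angle, angular velocity, leg1 contact, leg2 contact
print(env.action_space.n)           # 4: do nothing, fire left engine, fire main engine, fire right engine

state = env.reset()                 # old gym API: returns the 8-dimensional numpy array directly
print(state.shape)                  # (8,)
env.close()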

The algorithm roughly works as follows:

'''
During training, the outer for loop runs over episodes. Each episode starts by resetting the environment and the initial state, then loops over the episode's time steps. At each time step, choose an action a with the ε-greedy policy, execute it, and collect a transition (s, a, r, s'). Then set the current state to s' (forgetting this cost me an extra afternoon of debugging...) and push the transition into the buffer. Every so many time steps, sample a batch of transitions from the buffer to optimize the current policy_net, then copy policy_net's parameters into target_net (I originally wrote this as copying back into policy_net itself, which cost me an extra day of bug hunting...); a minimal sketch of this update step is given right after this block. Add this step's reward to the episode's total_reward. To speed up training, if the time step exceeds 1000 the episode is cut short, so the lander cannot hover in the air indefinitely. Finally, apply one step of epsilon decay.

At test time, simply set epsilon to 0 (pure exploitation, no more exploration), run a number of episodes, and average total_reward; an average above 200 counts as solved.
'''
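The optimization step mentioned above boils down to regressing Q(s, a) onto the one-step TD target r + GAMMA * max_a' Q_target(s', a'). A minimal sketch of that update with hypothetical argument names and assumed tensor shapes; the full version lives in Agent.optimize_policy below:

import torch
import torch.nn.functional as F

def dqn_update(policy_net, target_net, optimizer, states, actions, rewards, next_states, dones, gamma=0.99):
    # states/next_states: [B, 8] float, actions: [B, 1] long, rewards/dones: [B] float
    q_pred = policy_net(states).gather(1, actions)            # Q(s, a) from the online network
    with torch.no_grad():                                     # targets are constants w.r.t. the gradient
        q_next = target_net(next_states).max(dim=1)[0]        # max_a' Q_target(s', a')
        q_target = rewards + gamma * q_next * (1 - dones)     # drop the bootstrap term at terminal states
    loss = F.mse_loss(q_pred, q_target.unsqueeze(1))          # MSE between prediction and TD target
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()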

To wrap up, two small open questions remain:

1. It seems that different implementations use different frequencies for training policy_net, for updating target_net, and for decaying epsilon? (See the sketch after this list.)

2. Random seeds really are black magic......
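On question 1: besides changing the hard-copy frequency, a common variant in reference implementations (I believe the dqn_agent.py linked at the top does something along these lines) is a soft (Polyak) target update applied at every learning step, so target_net trails policy_net smoothly instead of jumping every UPDATE_PERIOD steps. A minimal sketch, where tau is the interpolation coefficient:

import torch

def soft_update(policy_net, target_net, tau=1e-3):
    # Polyak averaging: target <- tau * policy + (1 - tau) * target
    for target_param, policy_param in zip(target_net.parameters(), policy_net.parameters()):
        target_param.data.copy_(tau * policy_param.data + (1.0 - tau) * target_param.data)

With a soft update there is no separate "copy the weights" schedule to tune, which is probably part of why the update frequencies look so different from one implementation to the next.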

Code:

import random
import gym
import numpy as np
import matplotlib.pyplot as plt
from itertools import count
import torch
torch.cuda.current_device()
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F

BUFFER_SIZE = 100000
BATCH_SIZE = 64
GAMMA = 0.99  # discount factor
LR = 5e-4
UPDATE_PERIOD = 4
EPS_ED = 0.01
EPS_DECAY = 0.99
SLIDE_LEN = 20
MAX_TIME = 1000

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

env = gym.make('LunarLander-v2')
env.seed(0)
random.seed(0)

class Net(nn.Module):
    def __init__(self, h1=128, h2=64):
        super(Net, self).__init__()
        self.seed = torch.manual_seed(0)
        self.fc1 = nn.Linear(8, h1)
        self.fc2 = nn.Linear(h1, h2)
        self.fc3 = nn.Linear(h2, 4)

    def forward(self, t):
        t = F.relu(self.fc1(t))
        t = F.relu(self.fc2(t))
        t = self.fc3(t)
        return t

class Experience:
    def __init__(self, cur_state, action, reward, nxt_state, done):
        self.cur_state = cur_state
        self.action = action
        self.reward = reward
        self.nxt_state = nxt_state
        self.done = done

class Buffer:
    def __init__(self):
        # random.seed(0)
        self.n = BUFFER_SIZE
        self.memory = [None for _ in range(BUFFER_SIZE)]
        self.pt = 0
        self.flag = 0  # to indicate whether the buffer can provide a batch of data

    def push(self, experience):
        self.memory[self.pt] = experience
        self.pt = (self.pt + 1) % self.n
        self.flag = min(self.flag + 1, self.n)

    def sample(self, sample_size):
        return random.sample(self.memory[:self.flag], sample_size)

class Agent:
    def __init__(self):
        # random.seed(0)
        self.eps = 1.0
        self.buff = Buffer()
        self.policy_net = Net()
        self.target_net = Net()
        self.optim = optim.Adam(self.policy_net.parameters(), lr=LR)
        self.update_networks()
        self.total_rewards = []
        self.avg_rewards = []

    def update_networks(self):
        self.target_net.load_state_dict(self.policy_net.state_dict())

    def update_experiences(self, cur_state, action, reward, nxt_state, done):
        experience = Experience(cur_state, action, reward, nxt_state, done)
        self.buff.push(experience)

    def sample_experiences(self):
        samples = self.buff.sample(BATCH_SIZE)
        for _, ele in enumerate(samples):
            if _ == 0:
                cur_states = ele.cur_state.unsqueeze(0)
                actions = ele.action
                rewards = ele.reward
                nxt_states = ele.nxt_state.unsqueeze(0)
                dones = ele.done
            else:
                cur_states = torch.cat((cur_states, ele.cur_state.unsqueeze(0)), dim=0)
                actions = torch.cat((actions, ele.action), dim=0)
                rewards = torch.cat((rewards, ele.reward), dim=0)
                nxt_states = torch.cat((nxt_states, ele.nxt_state.unsqueeze(0)), dim=0)
                dones = torch.cat((dones, ele.done), dim=0)
        return cur_states, actions, rewards, nxt_states, dones

    def get_action(self, state):
        rnd = random.random()
        if rnd > self.eps:  # exploit
            values = self.policy_net(state)
            act = torch.argmax(values, dim=0).item()
        else:
            act = random.randint(0, 3)
        return act

    def optimize_policy(self):
        criterion = nn.MSELoss()
        cur_states, actions, rewards, nxt_states, dones = self.sample_experiences()
        cur_states = cur_states.to(device).float()
        actions = actions.to(device).long()
        rewards = rewards.to(device).float()
        nxt_states = nxt_states.to(device).float()
        dones = dones.to(device)
        self.policy_net = self.policy_net.to(device)
        self.target_net = self.target_net.to(device)
        # for i in range(10):
        policy_values = torch.gather(self.policy_net(cur_states), dim=1, index=actions.unsqueeze(-1))
        with torch.no_grad():
            next_values = torch.max(self.target_net(nxt_states), dim=1)[0]
            target_values = rewards + GAMMA * next_values * (1 - dones)
            target_values = target_values.unsqueeze(1)
        self.optim.zero_grad()
        loss = criterion(policy_values, target_values)
        loss.backward()
        # print("Loss:", loss.item())
        self.optim.step()
        self.policy_net = self.policy_net.cpu()
        self.target_net = self.target_net.cpu()
        return loss.item()

    def train(self, episodes):
        for episode in range(episodes):
            total_reward = 0
            cur_state = env.reset()
            cur_state = torch.from_numpy(cur_state)
            for tim in count():
                action = self.get_action(cur_state)
                # img = env.render(mode='rgb_array')
                nxt_state, reward, done, _ = env.step(action)
                nxt_state = torch.from_numpy(nxt_state)
                action = torch.tensor(action).unsqueeze(-1)
                reward = torch.tensor(reward).unsqueeze(-1)
                done = torch.tensor(1 if done else 0).unsqueeze(-1)
                self.buff.push(Experience(cur_state, action, reward, nxt_state, done))
                cur_state = nxt_state  # !!!
                if self.buff.flag >= BATCH_SIZE and self.buff.pt % UPDATE_PERIOD == 0:
                    self.update_networks()
                    self.optimize_policy()
                total_reward += reward.item()
                if done or tim >= MAX_TIME:
                    self.update_rewards(total_reward)
                    break
            self.plot_rewards()
            if self.eps > EPS_ED:
                self.eps *= EPS_DECAY
        torch.save(self.policy_net.state_dict(), 'policy_net.pkl')

    def update_rewards(self, total_reward):
        self.total_rewards.append(total_reward)
        cur = len(self.total_rewards) - 1
        rewards = 0
        for i in range(cur, max(-1, cur - SLIDE_LEN), -1):
            rewards += self.total_rewards[i]
        avg = rewards / min(SLIDE_LEN, len(self.total_rewards))
        self.avg_rewards.append(avg)

    def plot_rewards(self):
        plt.clf()
        plt.xlabel('Episodes')
        plt.ylabel('Rewards')
        plt.plot(self.total_rewards, color='r', label='Current')
        plt.plot(self.avg_rewards, color='b', label='Average')
        plt.legend()
        plt.pause(0.001)
        print("Episode", len(self.total_rewards))
        print("Current reward", self.total_rewards[-1])
        print("Average reward", self.avg_rewards[-1])
        print("Epsilon", self.eps)
        plt.savefig('Train.jpg')

    def test(self, episodes):
        self.eps = 0
        ret = 0
        for episode in range(episodes):
            total_reward = 0
            cur_state = env.reset()
            cur_state = torch.from_numpy(cur_state)
            for tim in count():
                action = self.get_action(cur_state)
                img = env.render(mode='rgb_array')
                nxt_state, reward, done, _ = env.step(action)
                cur_state = torch.from_numpy(nxt_state)
                total_reward += reward
                if done or tim >= MAX_TIME:
                    break
            print("Episode", episode + 1)
            print("Current reward", total_reward)
            ret += total_reward
        print("Average reward of", episodes, "episodes:", ret / episodes)

agent = Agent()
agent.train(700)
agent.test(100)
env.close()

The training reward curve is shown below:

Test results:

......

Episode 96
Current reward 210.40330984897227
Episode 97
Current reward 269.30063656546673
Episode 98
Current reward 297.40313034589826
Episode 99
Current reward 242.37884580171982
Episode 100
Current reward 235.21898442946033
Average reward of 100 episodes: 245.67580368550185

(Second update: 2021-04-11)

Recently I also hand-rolled a Policy Gradient implementation. Because I had misunderstood what "sampling according to the policy" means, I spent several days debugging it on and off......
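The misunderstanding in one sentence: at each step the action has to be sampled from the distribution the policy outputs, not taken greedily with argmax, otherwise there is nothing for the policy gradient to reweight. A minimal sketch using torch.distributions.Categorical; the code below does the equivalent by hand with np.random.choice over exp(log_probs) plus an NLLLoss term per step:

import torch

def select_action(policy, state):
    log_probs = policy(state)                           # log pi(a|s), shape [num_actions]
    dist = torch.distributions.Categorical(logits=log_probs)
    action = dist.sample()                              # sample according to pi, NOT argmax
    return action.item(), dist.log_prob(action)         # keep the log-prob for the policy-gradient loss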

# -*- coding: utf-8 -*-
"""LunarLander - PG.ipynbAutomatically generated by Colaboratory.Original file is located athttps://colab.research.google.com/drive/16U2WE7925uWv8FMKwyP_aY6QYsbAxFJ5
"""
import random
import gym
import numpy as np
import matplotlib.pyplot as plt
from itertools import count
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F

env = gym.make('LunarLander-v2')
env.seed(0)
random.seed(0)
np.random.seed(0)
torch.manual_seed(0)

MAX_TIME = 1000
LR = 3e-4
EPOCHS = 4000
SLIDE_LEN = 20
NOISE_RATE = 0.1
GAMMA = 0.99

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

class Net(nn.Module):
    def __init__(self, h1=128, h2=128):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(8, h1)
        self.fc2 = nn.Linear(h1, h2)
        self.fc3 = nn.Linear(h2, 4)

    def forward(self, t):
        t = F.relu(self.fc1(t))
        t = F.relu(self.fc2(t))
        t = F.log_softmax(self.fc3(t), dim=0)
        return t

class Trajectory:
    def __init__(self):
        self.reward = 0
        self.rewards = []

    def __len__(self):
        return len(self.track)

    def push(self, cur_reward):
        self.reward += cur_reward
        self.rewards.append(cur_reward)

def get_suffix_sum(a, gamma=GAMMA):
    tmp = a[::-1]
    for i in range(1, len(tmp)):
        tmp[i] += tmp[i - 1] * gamma
    return tmp[::-1]

class Agent:
    def __init__(self):
        self.policy = Net()
        self.losses = []
        self.opt = optim.Adam(self.policy.parameters(), lr=LR)
        self.total_rewards = [0]
        self.avg_rewards = [0]
        self.action_space = [i for i in range(4)]

    def get_action(self, cur_state, mode='train'):
        output = self.policy(cur_state)
        # action = output.argmax()
        probs = torch.exp(output).detach().cpu().numpy()
        action = np.random.choice(self.action_space, p=probs)
        action = torch.tensor(action).long().to(device)
        # sample the action instead of taking the "optimal" so far
        output, action = output.unsqueeze(0), action.unsqueeze(0)
        criterion = nn.NLLLoss()
        loss = criterion(output, action)
        self.losses.append(loss)
        return action.item()

    def train_one_episode(self, device=device):
        self.policy.to(device)
        total_loss = 0
        self.losses.clear()
        cur_trajectory = Trajectory()
        cur_state = env.reset()
        cur_state = torch.from_numpy(cur_state).to(device)
        for tim in count():
            action = self.get_action(cur_state)
            # print(tim, action)
            # img = env.render(mode='rgb_array')
            nxt_state, reward, done, _ = env.step(action)
            nxt_state = torch.from_numpy(nxt_state).to(device)
            action = torch.tensor(action).unsqueeze(-1).to(device)
            reward = torch.tensor(reward).unsqueeze(-1).to(device)
            done = torch.tensor(1 if done else 0).unsqueeze(-1).to(device)
            cur_trajectory.push(reward.item())
            cur_state = nxt_state  # !!!
            if done or tim >= MAX_TIME:
                self.update_rewards(cur_trajectory.reward)
                break
        reward_weight = get_suffix_sum(cur_trajectory.rewards)
        reward_weight = torch.from_numpy(np.array(reward_weight)).to(device)
        # plot_tensor(np.array(cur_trajectory.rewards), 'rewards')
        # plot_tensor(reward_weight.cpu().numpy(), 'discounted_suffix_reward_weight')
        assert len(self.losses) == len(cur_trajectory.rewards)
        mean = reward_weight.mean()
        std = reward_weight.std()
        reward_weight = (reward_weight - mean) / std
        for i in range(len(self.losses)):
            total_loss += self.losses[i] * reward_weight[i]
        self.plot_rewards()
        self.opt.zero_grad()
        total_loss.backward()
        # for name, para in self.policy.named_parameters():
        #     print(name, para.grad.mean())
        self.opt.step()
        self.policy.cpu()
        torch.save(self.policy.state_dict(), 'policy.pkl')

    def update_rewards(self, total_reward):
        self.total_rewards.append(total_reward)
        cur = len(self.total_rewards) - 1
        rewards = 0
        for i in range(cur, max(-1, cur - SLIDE_LEN), -1):
            rewards += self.total_rewards[i]
        avg = rewards / min(SLIDE_LEN, len(self.total_rewards))
        self.avg_rewards.append(avg)

    def plot_rewards(self):
        plt.clf()
        plt.xlabel('Episodes')
        plt.ylabel('Rewards')
        plt.plot(self.total_rewards, color='g', label='Current')
        plt.plot(self.avg_rewards, color='b', label='Average')
        plt.legend()
        plt.pause(0.001)
        print("Episode", len(self.total_rewards))
        print("Current reward", self.total_rewards[-1])
        print("Average reward", self.avg_rewards[-1])
        plt.savefig('Train.jpg')

    def train(self, epochs=EPOCHS, device=device):
        for epoch in range(epochs):
            self.train_one_episode(device)

    def test(self, episodes, device=device):
        self.policy.load_state_dict(torch.load("policy.pkl"))
        self.policy.to(device)
        ret = 0
        for episode in range(episodes):
            total_reward = 0
            cur_state = env.reset()
            cur_state = torch.from_numpy(cur_state).to(device)
            for tim in count():
                action = self.get_action(cur_state)
                img = env.render(mode='rgb_array')
                nxt_state, reward, done, _ = env.step(action)
                cur_state = torch.from_numpy(nxt_state).to(device)
                total_reward += reward
                if done or tim >= MAX_TIME:
                    break
            print("Episode", episode + 1)
            print("Current reward", total_reward)
            ret += total_reward
        print("Average reward of", episodes, "episodes:", ret / episodes)

agent = Agent()
agent.train()
agent.test(20)

Final results over 20 test episodes:

Episode 1
Current reward 84.33441682399996
Episode 2
Current reward 216.4304736195864
Episode 3
Current reward 120.18777822341227
Episode 4
Current reward 79.38344301251452
Episode 5
Current reward 149.77976608818616
Episode 6
Current reward 139.1168128341676
Episode 7
Current reward 254.51534398848398
Episode 8
Current reward 133.54801428683155
Episode 9
Current reward 186.94804010946848
Episode 10
Current reward 202.23091982552887
Episode 11
Current reward 108.69519273351803
Episode 12
Current reward 234.48558150706708
Episode 13
Current reward 233.7075148866764
Episode 14
Current reward 234.104787749662
Episode 15
Current reward 201.50699844779575
Episode 16
Current reward 235.9508429194292
Episode 17
Current reward 111.90205065489462
Episode 18
Current reward 123.62077742772023
Episode 19
Current reward 228.49487126388732
Episode 20
Current reward 130.76856178782705
Average reward of 20 episodes: 170.4856094095329

Process finished with exit code 0

Among the test episodes scoring below 200, the lander also mostly touches down smoothly; it just never shuts off its engines after landing for some reason...... still making left-right adjustments? If anyone has a fix, please point it out in the comments~
