What do the three A's in A3C (Asynchronous Advantage Actor-Critic) stand for?

Asynchronous: In a traditional DQN, one network represents one agent, and that agent interacts with a single environment. A3C improves learning efficiency and robustness with an asynchronous, multi-threaded approach: it sets up a global parameter network and spawns multiple Worker threads, each holding its own copy of the network parameters and interacting with its own instance of the training environment. Each Worker periodically pushes its updates to the global network asynchronously, which keeps training fast and also decorrelates the samples (every Worker works on the same task, but the details of its environment instance differ).
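To make the asynchronous part concrete before getting to the real code, here is a toy sketch (pure NumPy and threading, purely illustrative and unrelated to the Doom/TensorFlow code below): several worker threads each copy the shared parameters, compute an update from their own "rollout", and push it back to the shared vector.

import threading
import numpy as np

# Toy illustration of the asynchronous scheme only; the random noise stands in
# for a gradient computed from a worker's own rollout.
global_params = np.zeros(4)

def worker(steps=5, lr=0.1):
    for _ in range(steps):
        local = global_params.copy()                  # sync: copy the shared parameters locally
        grad = np.random.randn(*local.shape)          # stand-in for a gradient from a local rollout
        global_params[:] = global_params - lr * grad  # asynchronous update of the shared parameters

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(global_params)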

Actor-Critic: At its core, A3C is still an Actor-Critic method; it combines the strengths of value-based methods (such as Q-learning) and policy-based methods (such as Policy Gradient). A3C estimates both the value function V(s) (how good the current state is) and the policy π(s) (a probability distribution over actions) on a single shared network; only the fully connected output layers at the end differ.

Advantage: To speed up training, A3C scales how strongly each action is encouraged or penalized during the weight update. It uses advantage estimates so that the agent learns not just whether an action's return was good, but how much better (or worse) it was than expected. The advantage function is: A = Q(s,a) - V(s)

Because A3C never estimates Q-values directly, the discounted return R is used as an estimate of Q(s,a), giving the advantage estimate: A = R - V(s)
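As a concrete illustration (a minimal NumPy sketch, not part of the training code below), the discounted returns and the advantage estimates for a short rollout can be computed like this:

import numpy as np

def discounted_returns(rewards, gamma, bootstrap_value=0.0):
    # R_t = r_t + gamma * r_{t+1} + ..., bootstrapping from the last value estimate
    returns = np.zeros(len(rewards))
    running = bootstrap_value
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

rewards = np.array([0.0, 0.0, 1.0])    # rewards collected along a 3-step rollout
values  = np.array([0.4, 0.5, 0.7])    # V(s) predicted by the critic at each step
R = discounted_returns(rewards, 0.99)  # [0.9801, 0.99, 1.0]
A = R - values                         # advantage estimate A = R - V(s)
print(R, A)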

Implementation details

The A3C implementation consists mainly of a global network and Workers:

AC_Network — This class contains all the Tensorflow ops to create the networks themselves.

Worker — This class contains a copy of AC_Network, an environment class, as well as all the logic for interacting with the environment, and updating the global network.

High-level code for establishing the Worker instances and running them in parallel.

To implement A3C we first build the global network. It uses a CNN to process spatial information, an LSTM on top of it to handle temporal dependencies, and finally the value and policy output layers:

# Imports assumed by the snippets in this post (TensorFlow 1.x with tf.contrib.slim;
# DoomGame comes from the VizDoom package)
import threading
import multiprocessing
from time import sleep
import numpy as np
import tensorflow as tf
import tensorflow.contrib.slim as slim
from vizdoom import DoomGame

class AC_Network():
    def __init__(self,s_size,a_size,scope,trainer):
        with tf.variable_scope(scope):
            # Input and visual encoding layers
            self.inputs = tf.placeholder(shape=[None,s_size],dtype=tf.float32)
            self.imageIn = tf.reshape(self.inputs,shape=[-1,84,84,1])

            # CNN layers process the spatial information
            self.conv1 = slim.conv2d(activation_fn=tf.nn.elu,
                inputs=self.imageIn,num_outputs=16,
                kernel_size=[8,8],stride=[4,4],padding='VALID')
            self.conv2 = slim.conv2d(activation_fn=tf.nn.elu,
                inputs=self.conv1,num_outputs=32,
                kernel_size=[4,4],stride=[2,2],padding='VALID')
            hidden = slim.fully_connected(slim.flatten(self.conv2),256,activation_fn=tf.nn.elu)

            # LSTM handles the temporal dependencies
            lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(256,state_is_tuple=True)
            c_init = np.zeros((1, lstm_cell.state_size.c), np.float32)
            h_init = np.zeros((1, lstm_cell.state_size.h), np.float32)
            self.state_init = [c_init, h_init]
            c_in = tf.placeholder(tf.float32, [1, lstm_cell.state_size.c])
            h_in = tf.placeholder(tf.float32, [1, lstm_cell.state_size.h])
            self.state_in = (c_in, h_in)
            rnn_in = tf.expand_dims(hidden, [0])
            step_size = tf.shape(self.imageIn)[:1]
            state_in = tf.nn.rnn_cell.LSTMStateTuple(c_in, h_in)
            lstm_outputs, lstm_state = tf.nn.dynamic_rnn(
                lstm_cell, rnn_in, initial_state=state_in, sequence_length=step_size,
                time_major=False)
            lstm_c, lstm_h = lstm_state
            self.state_out = (lstm_c[:1, :], lstm_h[:1, :])
            rnn_out = tf.reshape(lstm_outputs, [-1, 256])

            # Output layers for the policy and the value estimate
            # (normalized_columns_initializer is a small helper initializer defined outside this snippet)
            self.policy = slim.fully_connected(rnn_out,a_size,
                activation_fn=tf.nn.softmax,
                weights_initializer=normalized_columns_initializer(0.01),
                biases_initializer=None)
            self.value = slim.fully_connected(rnn_out,1,
                activation_fn=None,
                weights_initializer=normalized_columns_initializer(1.0),
                biases_initializer=None)
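normalized_columns_initializer is not shown in this post. It initializes each column of the output-layer weight matrix to a fixed norm (a small norm for the policy head, a larger one for the value head). A sketch of one common definition (using the NumPy/TensorFlow imports above) is given here only as an assumption about what the helper looks like:

def normalized_columns_initializer(std=1.0):
    # Returns an initializer whose weight columns are random but rescaled to norm `std`
    def _initializer(shape, dtype=None, partition_info=None):
        out = np.random.randn(*shape).astype(np.float32)
        out *= std / np.sqrt(np.square(out).sum(axis=0, keepdims=True))
        return tf.constant(out)
    return _initializer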

After the global network is built, we create a number of Workers (each with its own network parameters and its own environment). Since every Worker runs in a separate CPU thread, it is best not to create more Workers than there are CPU threads.

with tf.device("/cpu:0"):
    # Generate global network
    master_network = AC_Network(s_size,a_size,'global',None)
    # Set number of workers to the number of available CPU threads
    num_workers = multiprocessing.cpu_count()
    workers = []
    # Create worker classes
    for i in range(num_workers):
        workers.append(Worker(DoomGame(),i,s_size,a_size,trainer,saver,model_path))

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    if load_model == True:
        print('Loading Model...')
        ckpt = tf.train.get_checkpoint_state(model_path)
        saver.restore(sess,ckpt.model_checkpoint_path)
    else:
        sess.run(tf.global_variables_initializer())

    # This is where the asynchronous magic happens.
    # Start the "work" process for each worker in a separate thread.
    worker_threads = []
    for worker in workers:
        # Bind the current worker as a default argument so each thread runs its own worker
        worker_work = lambda worker=worker: worker.work(max_episode_length,gamma,master_network,sess,coord)
        t = threading.Thread(target=worker_work)
        t.start()
        worker_threads.append(t)
    coord.join(worker_threads)

Once the global network and the Workers exist, we can deal with the asynchronous updates. In TensorFlow this is just variable assignment: the helper below builds the ops that copy the global network's parameters into a Worker's local network (the Worker's own gradients are applied back to the global network in the loss code further down).

# Copies one set of variables to another.
# Used to set worker network parameters to those of global network.
def update_target_graph(from_scope,to_scope):
    from_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, from_scope)
    to_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, to_scope)

    op_holder = []
    for from_var,to_var in zip(from_vars,to_vars):
        op_holder.append(to_var.assign(from_var))
    return op_holder

class Worker():
    def __init__(self,game,name,s_size,a_size,trainer,saver,model_path):
        ....
        # Create the local copy of the network and the tensorflow op to copy global parameters to the local network
        self.local_AC = AC_Network(s_size,a_size,self.name,trainer)
        self.update_local_ops = update_target_graph('global',self.name)

Each Worker interacts with its own copy of the environment and collects experience. The experience is stored as a list of tuples (observation, action, reward, next observation, done, value).

class Worker():
    ....
    def work(self,max_episode_length,gamma,global_AC,sess,coord):
        episode_count = 0
        total_steps = 0
        print("Starting worker " + str(self.number))
        with sess.as_default(), sess.graph.as_default():
            while not coord.should_stop():
                sess.run(self.update_local_ops)
                episode_buffer = []
                episode_values = []
                episode_frames = []
                episode_reward = 0
                episode_step_count = 0
                d = False

                self.env.new_episode()
                s = self.env.get_state().screen_buffer
                episode_frames.append(s)
                s = process_frame(s)
                rnn_state = self.local_AC.state_init

                while self.env.is_episode_finished() == False:
                    # Take an action using probabilities from policy network output.
                    a_dist,v,rnn_state = sess.run([self.local_AC.policy,self.local_AC.value,self.local_AC.state_out],
                        feed_dict={self.local_AC.inputs:[s],
                                   self.local_AC.state_in[0]:rnn_state[0],
                                   self.local_AC.state_in[1]:rnn_state[1]})
                    a = np.random.choice(a_dist[0],p=a_dist[0])
                    a = np.argmax(a_dist == a)

                    r = self.env.make_action(self.actions[a]) / 100.0
                    d = self.env.is_episode_finished()
                    if d == False:
                        s1 = self.env.get_state().screen_buffer
                        episode_frames.append(s1)
                        s1 = process_frame(s1)
                    else:
                        s1 = s

                    episode_buffer.append([s,a,r,s1,d,v[0,0]])
                    episode_values.append(v[0,0])

                    episode_reward += r
                    s = s1
                    total_steps += 1
                    episode_step_count += 1

                    # Specific to VizDoom. We sleep the game for a specific time.
                    if self.sleep_time>0:
                        sleep(self.sleep_time)

                    # If the episode hasn't ended, but the experience buffer is full, then we
                    # make an update step using that experience rollout.
                    if len(episode_buffer) == 30 and d != True and episode_step_count != max_episode_length - 1:
                        # Since we don't know what the true final return is, we "bootstrap" from our current
                        # value estimation.
                        v1 = sess.run(self.local_AC.value,
                            feed_dict={self.local_AC.inputs:[s],
                                       self.local_AC.state_in[0]:rnn_state[0],
                                       self.local_AC.state_in[1]:rnn_state[1]})[0,0]
                        v_l,p_l,e_l,g_n,v_n = self.train(global_AC,episode_buffer,sess,gamma,v1)
                        episode_buffer = []
                        sess.run(self.update_local_ops)
                    if d == True:
                        break

                self.episode_rewards.append(episode_reward)
                self.episode_lengths.append(episode_step_count)
                self.episode_mean_values.append(np.mean(episode_values))

                # Update the network using the experience buffer at the end of the episode.
                v_l,p_l,e_l,g_n,v_n = self.train(global_AC,episode_buffer,sess,gamma,0.0)

Once a Worker has collected enough experience, it uses it to compute discounted returns and advantages, which in turn feed the value loss and the policy loss. In addition, the entropy H of the policy is computed and added to encourage exploration: if the policy assigns roughly equal probability to every action, the entropy is high; if it strongly favors one action, the entropy is low.

Value Loss: L = Σ (R - V(s))²

Policy Loss: L = -log(π(a|s)) * A(s) - β * H(π)
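To make the entropy claim concrete, a quick NumPy check (illustrative numbers only):

import numpy as np

def entropy(p):
    return -np.sum(p * np.log(p))

uniform = np.array([0.25, 0.25, 0.25, 0.25])  # no preference among 4 actions -> high entropy
peaked  = np.array([0.97, 0.01, 0.01, 0.01])  # strong preference for one action -> low entropy
print(entropy(uniform))  # ~1.386
print(entropy(peaked))   # ~0.168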

Each Worker computes the gradients of these losses with respect to its own network parameters. To keep the policy from being destabilized by overly large updates, the gradients are clipped (by global norm) before being applied; the Worker then uses the clipped gradients to update the parameters of the global network.

class AC_Network():
    def __init__(self,s_size,a_size,scope,trainer):
        ....
        if scope != 'global':
            self.actions = tf.placeholder(shape=[None],dtype=tf.int32)
            self.actions_onehot = tf.one_hot(self.actions,a_size,dtype=tf.float32)
            self.target_v = tf.placeholder(shape=[None],dtype=tf.float32)
            self.advantages = tf.placeholder(shape=[None],dtype=tf.float32)

            self.responsible_outputs = tf.reduce_sum(self.policy * self.actions_onehot, [1])

            # Loss functions
            self.value_loss = 0.5 * tf.reduce_sum(tf.square(self.target_v - tf.reshape(self.value,[-1])))
            self.entropy = - tf.reduce_sum(self.policy * tf.log(self.policy))
            self.policy_loss = -tf.reduce_sum(tf.log(self.responsible_outputs)*self.advantages)
            self.loss = 0.5 * self.value_loss + self.policy_loss - self.entropy * 0.01

            # Get gradients from local network using local losses
            local_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope)
            self.gradients = tf.gradients(self.loss,local_vars)
            self.var_norms = tf.global_norm(local_vars)
            grads,self.grad_norms = tf.clip_by_global_norm(self.gradients,40.0)

            # Apply local gradients to global network
            global_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'global')
            self.apply_grads = trainer.apply_gradients(zip(grads,global_vars))

class Worker():
    ....
    def train(self,global_AC,rollout,sess,gamma,bootstrap_value):
        rollout = np.array(rollout)
        observations = rollout[:,0]
        actions = rollout[:,1]
        rewards = rollout[:,2]
        next_observations = rollout[:,3]
        values = rollout[:,5]

        # Here we take the rewards and values from the rollout, and use them to
        # generate the advantage and discounted returns.
        # The advantage function uses "Generalized Advantage Estimation"
        self.rewards_plus = np.asarray(rewards.tolist() + [bootstrap_value])
        discounted_rewards = discount(self.rewards_plus,gamma)[:-1]
        self.value_plus = np.asarray(values.tolist() + [bootstrap_value])
        advantages = rewards + gamma * self.value_plus[1:] - self.value_plus[:-1]
        advantages = discount(advantages,gamma)

        # Update the global network using gradients from loss
        # Generate network statistics to periodically save
        rnn_state = self.local_AC.state_init
        feed_dict = {self.local_AC.target_v:discounted_rewards,
                     self.local_AC.inputs:np.vstack(observations),
                     self.local_AC.actions:actions,
                     self.local_AC.advantages:advantages,
                     self.local_AC.state_in[0]:rnn_state[0],
                     self.local_AC.state_in[1]:rnn_state[1]}
        v_l,p_l,e_l,g_n,v_n,_ = sess.run([self.local_AC.value_loss,
                                          self.local_AC.policy_loss,
                                          self.local_AC.entropy,
                                          self.local_AC.grad_norms,
                                          self.local_AC.var_norms,
                                          self.local_AC.apply_grads],
                                         feed_dict=feed_dict)
        return v_l / len(rollout),p_l / len(rollout),e_l / len(rollout), g_n,v_n
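The code above also relies on two helpers that the post never shows: discount, which computes discounted cumulative sums, and process_frame, which shrinks the raw VizDoom screen buffer down to the flat 84x84 input expected by AC_Network. A minimal sketch of how they could look (the lfilter trick is a standard way to compute discounted sums; the exact cropping margins in process_frame are assumptions, not the original code):

import numpy as np
import scipy.misc     # old-scipy imresize, matching the TF 1.x era of this code
import scipy.signal

def discount(x, gamma):
    # Discounted cumulative sums over a rollout: y_t = x_t + gamma * y_{t+1}
    return scipy.signal.lfilter([1], [1, -gamma], x[::-1], axis=0)[::-1]

def process_frame(frame):
    # Crop away borders (assumed margins), downsample to 84x84, flatten and normalize
    s = frame[10:-10, 30:-30]
    s = scipy.misc.imresize(s, [84, 84])
    s = np.reshape(s, [np.prod(s.shape)]) / 255.0
    return s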

After the global network has been updated, each Worker copies the global network's parameters back into its own network and starts the next rollout from there.
