Table of Contents

  • Q?V?
  • Q-learning:
  • Small code
  • Big CODE

Q-learning is a value-based RL algorithm: Q is a table of numbers, and once the table has converged we act by picking the action with the largest Q value from it.
Before explaining Q-learning, we first need to introduce what Q is, along with the closely related V.

Q?V?

state-action value function (Q function), $Q_\pi(s,a)$: measures how good it is to take action $a$ in state $s$ under policy $\pi$. Note that $\pi$ does not necessarily take $a$; this only evaluates what would happen if $a$ were taken here.
state value function, $V_\pi(s)$: the expected cumulative reward; given $\pi$, it measures how good state $s$ is.
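For reference, the two functions are linked by a standard identity (my addition for clarity; the original post does not spell it out): V is the policy-weighted average of Q, and Q is the immediate reward plus the discounted V of the next state.

$V_\pi(s)=\mathbb{E}_{a\sim\pi(\cdot|s)}\left[Q_\pi(s,a)\right]=\sum_a \pi(a|s)\,Q_\pi(s,a)$
$Q_\pi(s,a)=\mathbb{E}\left[\,r_t+\gamma\,V_\pi(s_{t+1})\mid s_t=s,\ a_t=a\,\right]$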

Q-learning:

  • Given the parameter $\gamma$ and the reward matrix $R$ (or a reward function that returns a reward for taking $a$ in $s$), together with the learning rate $\alpha$ (note: some references account for the previous estimate by using a learning rate smaller than 1, while others ignore the previous estimate, which amounts to setting the learning rate to 1)
  • Initialize the Q table to all zeros or to random values
  • Repeat (for each episode):
  • Initialize $s$
    • Repeat (for each step of the episode):

      • Choose a possible $a$ from $s$ using a policy derived from $Q$ (e.g., the ε-greedy method)
      • Take action $a$, observe $r$ and $s'$
      • Choose $a'$ from $s'$ using the greedy (max) policy derived from $Q$
      • $Q(s_t,a_t)\leftarrow(1-\alpha)\cdot Q(s_t,a_t)+\alpha\cdot\bigl(R(s_t,a_t)+\gamma\cdot\max_{a_{t+1}}Q(s_{t+1},a_{t+1})\bigr)$
      • $s\leftarrow s'$
    • until $s$ is terminal
      Note: in some places you will see a formula like $Q(s,a)=R(s,a)+\gamma\cdot\max_{a_{t+1}}Q(s_{t+1},a_{t+1})$; that is just the same update with the learning rate set to 1. A minimal code sketch of this loop appears right after this list.
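As mentioned in the note above, here is a minimal sketch of the loop in plain NumPy. Everything concrete in it (the 6×2 sizes, the sparse toy reward matrix R, the step function) is made up for illustration and is not taken from the original post.

import numpy as np

# Toy setup: a 1-D world with 6 states and 2 actions (0 = left, 1 = right); all values are illustrative.
n_states, n_actions = 6, 2
R = np.zeros((n_states, n_actions))
R[n_states - 2, 1] = 1.0                   # reward 1 for the step that reaches the rightmost state
Q = np.zeros((n_states, n_actions))        # Q table initialized to zeros
gamma, alpha, epsilon = 0.9, 0.1, 0.1      # discount, learning rate, exploration probability


def step(s, a):
    # Hypothetical environment: action 1 moves right, action 0 moves left; the rightmost state is terminal.
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return R[s, a], s_next, s_next == n_states - 1


for episode in range(50):
    s, done = 0, False
    while not done:
        # behaviour policy: epsilon-greedy w.r.t. Q, acting randomly while a row is still all zeros
        if np.random.rand() < epsilon or np.all(Q[s] == 0):
            a = np.random.randint(n_actions)
        else:
            a = int(Q[s].argmax())
        r, s_next, done = step(s, a)
        # Q-learning update: blend the old estimate with the target r + gamma * max_a' Q(s', a')
        target = r if done else r + gamma * Q[s_next].max()
        Q[s, a] = (1 - alpha) * Q[s, a] + alpha * target
        s = s_next

print(np.round(Q, 3))   # after training, the 'right' column (index 1) should dominate in states 0..4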

The meaning of $\gamma$: it controls how much weight is given to future rewards. The smaller $\gamma$ is, the more the agent cares only about the immediate reward; the closer it is to 1, the more it values future rewards.

If $\gamma=1$, then all subsequent returns are taken into account. Why?
Because $Q$ is continually approximating the value, as the following formula expresses (when the learning rate $\alpha=1$):
$Q(s_t,a_t)\leftarrow r_t+\gamma\cdot\max_a Q(s_{t+1},a)$
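Unrolling this update makes the role of $\gamma$ explicit (a standard derivation added here for clarity, not taken from the original post): substituting the same rule into itself for the next state, over and over, shows that along the greedy trajectory $Q$ is chasing the discounted sum of future rewards,

$Q(s_t,a_t)\;\approx\;r_t+\gamma\,r_{t+1}+\gamma^2\,r_{t+2}+\cdots=\sum_{k=0}^{\infty}\gamma^k\,r_{t+k}$

so with $\gamma=1$ every future reward counts fully, while with, say, $\gamma=0.9$ a reward three steps away is weighted by $0.9^3\approx0.73$.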

As for this quirky sentence Morvan (莫烦) wrote:

The fascinating part of Q-learning is that the "reality" of Q(s1, a2) also contains a maximum estimate of Q(s2): the discounted maximum estimate for the next step, plus the reward obtained at this step, is treated as the "reality" of this step. Quite magical, isn't it?

I think the "maximum estimate of Q(s2)" mentioned here is $\max_{\tilde{a}}Q(\tilde{s},\tilde{a})$,
and the "discounted maximum estimate for the next step" is $\gamma\cdot\max_{\tilde{a}}Q(\tilde{s},\tilde{a})$.
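In more standard terminology (my framing, not Morvan's wording): the quantity he calls the "reality" of Q(s1, a2) is the TD target, the gap between it and the current estimate is the TD error, and the update just moves the estimate toward the target,

TD target: $r+\gamma\cdot\max_{\tilde{a}}Q(\tilde{s},\tilde{a})$
TD error: $\delta=r+\gamma\cdot\max_{\tilde{a}}Q(\tilde{s},\tilde{a})-Q(s,a)$
Update: $Q(s,a)\leftarrow Q(s,a)+\alpha\,\delta$

which is algebraically the same as the $(1-\alpha)/\alpha$ form of the update given earlier.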

Small code

"""
A simple example for Reinforcement Learning using table lookup Q-learning method.
An agent "o" is on the left of a 1 dimensional world, the treasure is on the rightmost location.
Run this program and to see how the agent will improve its strategy of finding the treasure.
View more on my tutorial page: https://morvanzhou.github.io/tutorials/
"""import numpy as np
import pandas as pd
import timenp.random.seed(2)  # reproducibleN_STATES = 6   # the length of the 1 dimensional world
ACTIONS = ['left', 'right']     # available actions
EPSILON = 0.9   # greedy police
ALPHA = 0.1     # learning rate
GAMMA = 0.9    # discount factor
MAX_EPISODES = 13   # maximum episodes
FRESH_TIME = 0.3    # fresh time for one movedef build_q_table(n_states, actions):table = pd.DataFrame(np.zeros((n_states, len(actions))),     # q_table initial valuescolumns=actions,    # actions's name)# print(table)    # show tablereturn tabledef choose_action(state, q_table):# This is how to choose an actionstate_actions = q_table.iloc[state, :]if (np.random.uniform() > EPSILON) or ((state_actions == 0).all()):  # act non-greedy or state-action have no valueaction_name = np.random.choice(ACTIONS)else:   # act greedyaction_name = state_actions.idxmax()    # replace argmax to idxmax as argmax means a different function in newer version of pandasreturn action_namedef get_env_feedback(S, A):# This is how agent will interact with the environmentif A == 'right':    # move rightif S == N_STATES - 2:   # terminateS_ = 'terminal'R = 1else:S_ = S + 1R = 0else:   # move leftR = 0if S == 0:S_ = S  # reach the wallelse:S_ = S - 1return S_, Rdef update_env(S, episode, step_counter):# This is how environment be updatedenv_list = ['-']*(N_STATES-1) + ['T']   # '---------T' our environmentif S == 'terminal':interaction = 'Episode %s: total_steps = %s' % (episode+1, step_counter)print('\r{}'.format(interaction), end='')time.sleep(2)print('\r                                ', end='')else:env_list[S] = 'o'interaction = ''.join(env_list)print('\r{}'.format(interaction), end='')time.sleep(FRESH_TIME)def rl():# main part of RL loopq_table = build_q_table(N_STATES, ACTIONS)for episode in range(MAX_EPISODES):step_counter = 0S = 0is_terminated = Falseupdate_env(S, episode, step_counter)while not is_terminated:A = choose_action(S, q_table)S_, R = get_env_feedback(S, A)  # take action & get next state and rewardq_predict = q_table.loc[S, A]if S_ != 'terminal':q_target = R + GAMMA * q_table.iloc[S_, :].max()   # next state is not terminalelse:q_target = R     # next state is terminalis_terminated = True    # terminate this episodeq_table.loc[S, A] += ALPHA * (q_target - q_predict)  # updateS = S_  # move to next stateupdate_env(S, episode, step_counter+1)step_counter += 1return q_tableif __name__ == "__main__":q_table = rl()print('\r\nQ-table:\n')print(q_table)

The resulting Q table:

Q-table:

location      left     right
0         0.000000  0.004320
1         0.000000  0.025005
2         0.000030  0.111241
3         0.000000  0.368750
4         0.027621  0.745813
5         0.000000  0.000000
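To turn the table into an actual policy, you simply take the action with the largest Q value in each row. A minimal sketch (my addition, not part of the tutorial), assuming q_table is the DataFrame returned by rl() above:

# Read the learned greedy policy off the table: the best action in each row (state).
greedy_policy = q_table.idxmax(axis=1)
print(greedy_policy)
# In this example every visited state prefers 'right', since the treasure sits at the right end
# of the 1-D world; ties (e.g. the untouched all-zero last row) resolve to the first column, 'left'.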

I previously wrote a C version of the Q-learning tresure_on_right example.

Big CODE

run_this.py

"""
Reinforcement learning maze example.Red rectangle:          explorer.
Black rectangles:       hells       [reward = -1].
Yellow bin circle:      paradise    [reward = +1].
All other states:       ground      [reward = 0].This script is the main part which controls the update method of this example.
The RL is in RL_brain.py.View more on my tutorial page: https://morvanzhou.github.io/tutorials/
"""from maze_env import Maze
from RL_brain import QLearningTabledef update():for episode in range(100):# initial observationobservation = env.reset()while True:# fresh envenv.render()# RL choose action based on observationaction = RL.choose_action(str(observation))# RL take action and get next observation and rewardobservation_, reward, done = env.step(action)# RL learn from this transitionRL.learn(str(observation), action, reward, str(observation_))# swap observationobservation = observation_# break while loop when end of this episodeif done:break# end of gameprint('game over')env.destroy()if __name__ == "__main__":env = Maze()RL = QLearningTable(actions=list(range(env.n_actions)))env.after(100, update)env.mainloop()

RL_brain.py

"""
This part of code is the Q learning brain, which is a brain of the agent.
All decisions are made in here.View more on my tutorial page: https://morvanzhou.github.io/tutorials/
"""import numpy as np
import pandas as pdclass QLearningTable:def __init__(self, actions, learning_rate=0.01, reward_decay=0.9, e_greedy=0.9):self.actions = actions  # a listself.lr = learning_rateself.gamma = reward_decayself.epsilon = e_greedyself.q_table = pd.DataFrame(columns=self.actions, dtype=np.float64)def choose_action(self, observation):self.check_state_exist(observation)# action selectionif np.random.uniform() < self.epsilon:# choose best actionstate_action = self.q_table.loc[observation, :]# some actions may have the same value, randomly choose on in these actionsaction = np.random.choice(state_action[state_action == np.max(state_action)].index)else:# choose random actionaction = np.random.choice(self.actions)return actiondef learn(self, s, a, r, s_):self.check_state_exist(s_)q_predict = self.q_table.loc[s, a]if s_ != 'terminal':q_target = r + self.gamma * self.q_table.loc[s_, :].max()  # next state is not terminalelse:q_target = r  # next state is terminalself.q_table.loc[s, a] += self.lr * (q_target - q_predict)  # updatedef check_state_exist(self, state):if state not in self.q_table.index:# append new state to q tableself.q_table = self.q_table.append(pd.Series([0]*len(self.actions),index=self.q_table.columns,name=state,))
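One caveat when running RL_brain.py with a recent pandas: DataFrame.append was deprecated in pandas 1.4 and removed in pandas 2.0, so check_state_exist will fail there. A possible drop-in replacement for that method (my adaptation, not part of the original tutorial) builds the new row with pd.concat instead:

    def check_state_exist(self, state):
        # Same behaviour as above, but without DataFrame.append (removed in pandas 2.0).
        if state not in self.q_table.index:
            new_row = pd.Series(
                [0.0] * len(self.actions),   # start the new state's action values at zero
                index=self.q_table.columns,
                name=state,
            )
            # concatenating the one-row frame keeps the state name as the index label
            self.q_table = pd.concat([self.q_table, new_row.to_frame().T])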

maze_env.py

"""
Reinforcement learning maze example.Red rectangle:          explorer.
Black rectangles:       hells       [reward = -1].
Yellow bin circle:      paradise    [reward = +1].
All other states:       ground      [reward = 0].This script is the environment part of this example. The RL is in RL_brain.py.View more on my tutorial page: https://morvanzhou.github.io/tutorials/
"""import numpy as np
import time
import sys
if sys.version_info.major == 2:import Tkinter as tk
else:import tkinter as tkUNIT = 40   # pixels
MAZE_H = 4  # grid height
MAZE_W = 4  # grid widthclass Maze(tk.Tk, object):def __init__(self):super(Maze, self).__init__()self.action_space = ['u', 'd', 'l', 'r']self.n_actions = len(self.action_space)self.title('maze')self.geometry('{0}x{1}'.format(MAZE_H * UNIT, MAZE_H * UNIT))self._build_maze()def _build_maze(self):self.canvas = tk.Canvas(self, bg='white',height=MAZE_H * UNIT,width=MAZE_W * UNIT)# create gridsfor c in range(0, MAZE_W * UNIT, UNIT):x0, y0, x1, y1 = c, 0, c, MAZE_H * UNITself.canvas.create_line(x0, y0, x1, y1)for r in range(0, MAZE_H * UNIT, UNIT):x0, y0, x1, y1 = 0, r, MAZE_W * UNIT, rself.canvas.create_line(x0, y0, x1, y1)# create originorigin = np.array([20, 20])# hellhell1_center = origin + np.array([UNIT * 2, UNIT])self.hell1 = self.canvas.create_rectangle(hell1_center[0] - 15, hell1_center[1] - 15,hell1_center[0] + 15, hell1_center[1] + 15,fill='black')# hellhell2_center = origin + np.array([UNIT, UNIT * 2])self.hell2 = self.canvas.create_rectangle(hell2_center[0] - 15, hell2_center[1] - 15,hell2_center[0] + 15, hell2_center[1] + 15,fill='black')# create ovaloval_center = origin + UNIT * 2self.oval = self.canvas.create_oval(oval_center[0] - 15, oval_center[1] - 15,oval_center[0] + 15, oval_center[1] + 15,fill='yellow')# create red rectself.rect = self.canvas.create_rectangle(origin[0] - 15, origin[1] - 15,origin[0] + 15, origin[1] + 15,fill='red')# pack allself.canvas.pack()def reset(self):self.update()time.sleep(0.5)self.canvas.delete(self.rect)origin = np.array([20, 20])self.rect = self.canvas.create_rectangle(origin[0] - 15, origin[1] - 15,origin[0] + 15, origin[1] + 15,fill='red')# return observationreturn self.canvas.coords(self.rect)def step(self, action):s = self.canvas.coords(self.rect)base_action = np.array([0, 0])if action == 0:   # upif s[1] > UNIT:base_action[1] -= UNITelif action == 1:   # downif s[1] < (MAZE_H - 1) * UNIT:base_action[1] += UNITelif action == 2:   # rightif s[0] < (MAZE_W - 1) * UNIT:base_action[0] += UNITelif action == 3:   # leftif s[0] > UNIT:base_action[0] -= UNITself.canvas.move(self.rect, base_action[0], base_action[1])  # move agents_ = self.canvas.coords(self.rect)  # next state# reward functionif s_ == self.canvas.coords(self.oval):reward = 1done = Trues_ = 'terminal'elif s_ in [self.canvas.coords(self.hell1), self.canvas.coords(self.hell2)]:reward = -1done = Trues_ = 'terminal'else:reward = 0done = Falsereturn s_, reward, donedef render(self):time.sleep(0.1)self.update()def update():for t in range(10):s = env.reset()while True:env.render()a = 1s, r, done = env.step(a)if done:breakif __name__ == '__main__':env = Maze()env.after(100, update)env.mainloop()
