Reinforcement Learning with Code 【Code 2. Tabular Sarsa】

This note records how the author began to learn RL. Both the theoretical understanding and the code practice are presented. Much of the material is referenced from Zhao Shiyu's Mathematical Foundation of Reinforcement Learning.
The code follows Mofan's reinforcement learning course.

Contents

  • Reinforcement Learning with Code 【Code 2. Tabular Sarsa】
    • 2.1 Problem and result
    • 2.2 Environment
    • 2.3 Tabular Sarsa Algorithm
    • 2.4 Run the main script
    • 2.5 Check the Q table
    • Reference

2.1 Problem and result

Consider the problem in which a little mouse (denoted by the red block) wants to reach the cheese (denoted by the yellow circle) while avoiding the traps (denoted by the black blocks), as the figure below shows.

(Figure: the 4x4 maze environment)

This chapter aims to implement the tabular Sarsa algorithm to solve this problem.

2.2 Environment

We use the tkinter package of Python to build the environment that the agent interacts with.

import numpy as np
import time
import sys
import tkinter as tk
# if sys.version_info.major == 2:  # check whether the Python version is Python 2
#     import Tkinter as tk
# else:
#     import tkinter as tk


UNIT = 40   # pixels
MAZE_H = 4  # grid height
MAZE_W = 4  # grid width


class Maze(tk.Tk, object):
    def __init__(self):
        super(Maze, self).__init__()
        # Action Space
        self.action_space = ['up', 'down', 'right', 'left'] # action space 
        self.n_actions = len(self.action_space)

        # build the GUI
        self.title('Maze env')
        self.geometry('{0}x{1}'.format(MAZE_W * UNIT, MAZE_H * UNIT))   # set the window size as "width x height"
        self._build_maze()

    def _build_maze(self):
        self.canvas = tk.Canvas(self, bg='white',
                           height=MAZE_H * UNIT,
                           width=MAZE_W * UNIT)     # create the background canvas

        # create grids
        for c in range(UNIT, MAZE_W * UNIT, UNIT): # draw vertical grid lines (column separators)
            x0, y0, x1, y1 = c, 0, c, MAZE_H * UNIT
            self.canvas.create_line(x0, y0, x1, y1)
        for r in range(UNIT, MAZE_H * UNIT, UNIT): # draw horizontal grid lines (row separators)
            x0, y0, x1, y1 = 0, r, MAZE_W * UNIT, r
            self.canvas.create_line(x0, y0, x1, y1)

        # create origin: the center of the first grid cell
        origin = np.array([UNIT/2, UNIT/2]) 

        # hell1
        hell1_center = origin + np.array([UNIT * 2, UNIT])
        self.hell1 = self.canvas.create_rectangle(
            hell1_center[0] - (UNIT/2 - 5), hell1_center[1] - (UNIT/2 - 5),
            hell1_center[0] + (UNIT/2 - 5), hell1_center[1] + (UNIT/2 - 5),
            fill='black')
        # hell2
        hell2_center = origin + np.array([UNIT, UNIT * 2])
        self.hell2 = self.canvas.create_rectangle(
            hell2_center[0] - (UNIT/2 - 5), hell2_center[1] - (UNIT/2 - 5),
            hell2_center[0] + (UNIT/2 - 5), hell2_center[1] + (UNIT/2 - 5),
            fill='black')

        # create oval: the terminal (cheese) circle
        oval_center = origin + np.array([UNIT*2, UNIT*2])
        self.oval = self.canvas.create_oval(
            oval_center[0] - (UNIT/2 - 5), oval_center[1] - (UNIT/2 - 5),
            oval_center[0] + (UNIT/2 - 5), oval_center[1] + (UNIT/2 - 5),
            fill='yellow')

        # create red rect: the agent, starting in the top-left cell
        self.rect = self.canvas.create_rectangle(
            origin[0] - (UNIT/2 - 5), origin[1] - (UNIT/2 - 5),
            origin[0] + (UNIT/2 - 5), origin[1] + (UNIT/2 - 5),
            fill='red')

        # pack all: display the canvas
        self.canvas.pack()


    def get_state(self, rect):
        # convert the coordinate observation to a state tuple,
        # using the uniform cell centers as states, such as
        # |(1,1)|(2,1)|(3,1)|...
        # |(1,2)|(2,2)|(3,2)|...
        # |(1,3)|(2,3)|(3,3)|...
        # |....
        x0, y0, x1, y1 = self.canvas.coords(rect)
        x_center = (x0 + x1) / 2
        y_center = (y0 + y1) / 2
        state = ((x_center - (UNIT/2))/UNIT + 1, (y_center - (UNIT/2))/UNIT + 1)
        return state


    def reset(self):
        self.update()
        self.after(500) # delay 500ms
        self.canvas.delete(self.rect)   # delete origin rectangle
        origin = np.array([UNIT/2, UNIT/2])
        self.rect = self.canvas.create_rectangle(
            origin[0] - (UNIT/2 - 5), origin[1] - (UNIT/2 - 5),
            origin[0] + (UNIT/2 - 5), origin[1] + (UNIT/2 - 5),
            fill='red')
        # return observation 
        return self.get_state(self.rect)   

    

    def step(self, action):
        # one interaction step between the agent and the environment
        s = self.get_state(self.rect)   # get the agent's current state
        base_action = np.array([0, 0])
        reach_boundary = False
        if action == self.action_space[0]:   # up
            if s[1] > 1:
                base_action[1] -= UNIT
            else:  # hit the boundary: reward = -1 and the agent stays in place
                reach_boundary = True

        elif action == self.action_space[1]:   # down
            if s[1] < MAZE_H:
                base_action[1] += UNIT
            else:
                reach_boundary = True   

        elif action == self.action_space[2]:   # right
            if s[0] < MAZE_W:
                base_action[0] += UNIT
            else:
                reach_boundary = True

        elif action == self.action_space[3]:   # left
            if s[0] > 1:
                base_action[0] -= UNIT
            else:
                reach_boundary = True

        self.canvas.move(self.rect, base_action[0], base_action[1])  # move agent

        s_ = self.get_state(self.rect)  # next state

        # reward function
        if s_ == self.get_state(self.oval):     # reach the terminal
            reward = 1
            done = True
            s_ = 'success'
        elif s_ == self.get_state(self.hell1): # reach the block
            reward = -1
            s_ = 'block_1'
            done = False
        elif s_ == self.get_state(self.hell2):
            reward = -1
            s_ = 'block_2'
            done = False
        else:
            reward = 0
            done = False
            if reach_boundary:
                reward = -1

        return s_, reward, done

    def render(self):
        time.sleep(0.15)
        self.update()




if __name__ == '__main__':
    def test():
        for t in range(10):
            s = env.reset()
            print(s)
            while True:
                env.render()
                a = 'right'
                s, r, done = env.step(a)
                print(s)
                if done:
                    break
    env = Maze()
    env.after(100, test)      # call test() after a 100 ms delay
    env.mainloop()



An important part here is the reward function design, which is as follows:

$$
\text{reward} = \left\{
\begin{aligned}
& 1, && \text{if the agent reaches the cheese} \\
& -1, && \text{if the agent reaches a trap or hits the boundary} \\
& 0, && \text{otherwise}
\end{aligned}
\right.
$$
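
As a compact restatement (not a replacement for the step() method above), the reward design can be written as a small standalone function; the state labels 'success', 'block_1' and 'block_2' follow the environment code.

def reward_of(next_state, reach_boundary):
    # a minimal sketch of the reward design used in Maze.step()
    if next_state == 'success':                                 # reached the cheese
        return 1, True                                          # reward, done
    if next_state in ('block_1', 'block_2') or reach_boundary:  # trap or boundary
        return -1, False                                        # punished, episode continues
    return 0, False                                             # ordinary move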

We need to explain some functions of the class Maze.

  • First, the function _build_maze draws the initial maze: the grid lines, the two traps, the cheese and the agent, placed by the canvas coordinates of each grid cell.
  • Second, the function get_state converts the canvas coordinates of a grid cell into a numerical state such as $(1,1), (1,2), \cdots$ (see the sketch after this list).
  • Third, the function reset resets the state, which means placing the mouse back in the starting grid cell.
  • Then, the function step lets the agent interact with the environment for one step and returns the reward of that action.
  • Finally, the function render refreshes the window.
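
As mentioned in the second bullet, the following sketch (assuming the constant UNIT = 40 from the environment code) checks the coordinate-to-state conversion by hand: the red rectangle drawn in the top-left cell spans (5, 5) to (35, 35) on the canvas and maps to grid cell (1.0, 1.0).

UNIT = 40                                   # pixels per cell, as in the environment code
x0, y0, x1, y1 = 5, 5, 35, 35               # canvas corners of the rect in the top-left cell
x_center = (x0 + x1) / 2                    # 20
y_center = (y0 + y1) / 2                    # 20
state = ((x_center - UNIT / 2) / UNIT + 1,
         (y_center - UNIT / 2) / UNIT + 1)
print(state)                                # (1.0, 1.0)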

2.3 Tabular Sarsa Algorithm

import numpy as np
import pandas as pd


class RL():
    def __init__(self, actions, learning_rate=0.01, reward_decay=0.9, e_greedy=0.9):
        self.actions = actions  # action list
        self.lr = learning_rate
        self.gamma = reward_decay
        self.epsilon = e_greedy # epsilon greedy update policy
        self.q_table = pd.DataFrame(columns=self.actions, dtype=np.float64)

    def check_state_exist(self, state):
        if state not in self.q_table.index:
            # append new state to q table, use the coordinate as the observation

            # self.q_table = self.q_table.append(       # DataFrame.append was removed in newer pandas
            #     pd.Series(
            #         [0]*len(self.actions),
            #         index=self.q_table.columns,
            #         name=state,
            #     )
            # )

            self.q_table = pd.concat(
                [
                self.q_table,
                pd.DataFrame(
                        data=np.zeros((1,len(self.actions))),
                        columns = self.q_table.columns,
                        index = [state]
                    )
                ]
            )

    def choose_action(self, observation):
        """
            Use the epsilon-greedy method to update policy
        """
        self.check_state_exist(observation)
        # action selection: epsilon-greedy algorithm
        if np.random.uniform() < self.epsilon:
            
            state_action = self.q_table.loc[observation, :]
            # some actions may have the same value; randomly choose one of them
            # state_action == np.max(state_action) generate bool mask
            # choose best action
            action = np.random.choice(state_action[state_action == np.max(state_action)].index)
        else:
            # choose random action
            action = np.random.choice(self.actions)
        return action

    def learn(self, s, a, r, s_):
        pass



class SarsaTable(RL):
    """
        Implement Sarsa algorithm which is on-policy
    """
    def __init__(self, actions, learning_rate=0.01, reward_decay=0.9, e_greedy=0.9):
        super(SarsaTable,self).__init__(actions, learning_rate, reward_decay, e_greedy)

    def learn(self, s, a, r, s_, a_):
        self.check_state_exist(s_)
        q_predict = self.q_table.loc[s, a]
        if s_ != 'success':
            q_target = r + self.gamma * self.q_table.loc[s_, a_]  # next state is not terminal
        else:
            q_target = r  # next state is terminal
        self.q_table.loc[s, a] += self.lr * (q_target - q_predict)  # update

We store the Q-table as a pandas DataFrame. The explanations of the functions are as follows.

  • First, the function check_state_exist checks whether a state already exists in the Q-table; if not, we append a new all-zero row for it. States are added lazily, i.e. only once they have been visited.
  • Second, the function choose_action follows the $\epsilon$-greedy algorithm

$$
\pi(a|s) = \left\{
\begin{aligned}
& 1 - \frac{\epsilon}{|\mathcal{A}(s)|}\big(|\mathcal{A}(s)|-1\big), && \text{for the greedy action} \\
& \frac{\epsilon}{|\mathcal{A}(s)|}, && \text{for the other } |\mathcal{A}(s)|-1 \text{ actions}
\end{aligned}
\right.
$$

  • Third, the function learn updates the q-value as the Sarsa algorithm prescribes, which relies on the sample $(s_t, a_t, r_{t+1}, s_{t+1}, a_{t+1})$, denoting the current state, current action, immediate reward, next state and next action, respectively (a toy numerical example follows the formula below).

$$
\text{Sarsa}: \left\{
\begin{aligned}
q_{t+1}(s_t,a_t) &= q_t(s_t,a_t) - \alpha_t(s_t,a_t)\Big[ q_t(s_t,a_t) - \big(r_{t+1} + \gamma\, q_t(s_{t+1},a_{t+1})\big) \Big] \\
q_{t+1}(s,a) &= q_t(s,a), \quad \text{for all } (s,a) \neq (s_t,a_t)
\end{aligned}
\right.
$$
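
To make the update concrete, here is a toy, hand-rolled illustration (not part of the project files): one epsilon-greedy action choice and one Sarsa update on made-up numbers, using the SarsaTable defaults learning_rate = 0.01, reward_decay = 0.9 and e_greedy = 0.9.

import numpy as np
import pandas as pd

actions = ['up', 'down', 'right', 'left']
# a tiny Q-table with two states, initialized to zeros like check_state_exist does
q_table = pd.DataFrame(np.zeros((2, 4)), columns=actions,
                       index=['(1.0, 1.0)', '(2.0, 1.0)'])

epsilon, alpha, gamma = 0.9, 0.01, 0.9
s, s_, r = '(1.0, 1.0)', '(2.0, 1.0)', 0            # made-up transition with zero reward

# epsilon-greedy choice at s: exploit with probability epsilon, otherwise explore
if np.random.uniform() < epsilon:
    row = q_table.loc[s, :]
    a = np.random.choice(row[row == row.max()].index)   # break ties randomly
else:
    a = np.random.choice(actions)

a_ = 'right'                                        # next action, chosen the same way in practice
q_predict = q_table.loc[s, a]
q_target = r + gamma * q_table.loc[s_, a_]          # on-policy target uses the action actually taken
q_table.loc[s, a] += alpha * (q_target - q_predict)
print(q_table)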

2.4 Run the main script

Run the following main script, which ties all of the code together.

from maze_env_custom import Maze
from RL_brain import SarsaTable

MAX_EPISODE = 30


def update():
    for episode in range(MAX_EPISODE):
        # initial observation: the grid-cell state of the rect, e.g. (1.0, 1.0)
        observation = env.reset()   

        # RL choose action based on observation ['up', 'down', 'right', 'left']
        action = RL.choose_action(str(observation))

        while True:
            # fresh env
            env.render()

            # RL takes the action and gets the next observation and reward
            observation_, reward, done = env.step(action)

            # Sarsa also chooses the next action before learning (on-policy)
            action_ = RL.choose_action(str(observation_))

            # RL learn from this transition
            RL.learn(str(observation), action, reward, str(observation_), action_)

            # swap observation
            observation = observation_
            action = action_

            # break while loop when end of this episode
            if done:
                break

        # show q_table
        print(RL.q_table)
        print('\n')

    # end of game
    print('game over')
    env.destroy()

if __name__ == "__main__":
    env = Maze()
    RL = SarsaTable(env.action_space)

    env.after(100, update)
    env.mainloop()

2.5 Check the Q table

After a long enough run we can check the Q-table to judge whether the learning is reasonable. The Q-table is as follows:

                      up      down     right          left
(1.0, 1.0) -6.837352e-02 -0.000135 -0.000266 -2.970185e-02
(2.0, 1.0) -4.901299e-02 -0.000334 -0.000484 -6.039572e-04
(2.0, 2.0) -3.988164e-04 -0.049010 -0.038785 -2.737623e-04
block_1     0.000000e+00  0.049010  0.000000  0.000000e+00
(4.0, 2.0) -2.646359e-04  0.001314 -0.019900 -1.000000e-02
(4.0, 1.0) -4.900994e-02  0.000014 -0.010000 -3.128178e-06
(3.0, 1.0) -2.970450e-02 -0.029433 -0.000516 -2.078845e-04
(1.0, 2.0) -4.933690e-04 -0.000374 -0.000951 -3.940947e-02
block_2    -1.979099e-07  0.000000  0.010000 -1.531800e-07
(1.0, 3.0) -3.525635e-04 -0.000056 -0.010000 -3.940439e-02
(1.0, 4.0) -7.194310e-07 -0.010000  0.000591 -1.990000e-02
(2.0, 4.0) -1.000000e-02 -0.019900  0.012381  0.000000e+00
(3.0, 4.0)  1.654862e-01  0.000000  0.000000  0.000000e+00
(4.0, 4.0)  0.000000e+00  0.000000 -0.010000  0.000000e+00
(4.0, 3.0)  0.000000e+00  0.000000  0.000000  5.851985e-02
success     0.000000e+00  0.000000  0.000000  0.000000e+00

For example, at the starting cell, if the mouse moves up or left it hits the boundary and gets reward $-1$. Hence the corresponding action values in the Q-table are negative.
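
Assuming the training script above has been run so that RL.q_table is populated, one further sanity check is to read the greedy policy off the table by taking the best action in each row:

greedy_policy = RL.q_table.idxmax(axis=1)     # column with the largest value in each row
print(greedy_policy)
# e.g. the row '(3.0, 4.0)' should prefer 'up', which moves toward the cheese at (3.0, 3.0)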


Reference

Zhao Shiyu, Mathematical Foundation of Reinforcement Learning (course)
Mofan's Reinforcement Learning course
