Reinforcement Learning with Code 【Code 1. Tabular Q-learning】

This note records how the author began to learn RL. Both the theoretical understanding and the code practice are presented. Many materials are referenced, such as Zhao Shiyu's Mathematical Foundations of Reinforcement Learning.
The code follows Mofan's reinforcement learning course.

Table of Contents

  • Reinforcement Learning with Code 【Code 1. Tabular Q-learning】
    • 1.1 Problem and result
    • 1.2 Environment
    • 1.3 Tabular Q-learning Algorithm
    • 1.4 Run the main script
    • 1.5 Check the Q table
    • Reference

1.1 Problem and result

Consider the problem in which a little mouse (the red block) wants to get the cheese (the yellow circle) while avoiding the traps (the black blocks), as the figure shows.

[Figure: a 4x4 grid maze in which the red block is the mouse, the black blocks are the traps, and the yellow circle is the cheese.]

This chapter implements the tabular Q-learning algorithm to solve this problem.

1.2 Environment

We use Python's tkinter package to build the environment that the agent interacts with.

import numpy as np
import time
import sys
import tkinter as tk
# if sys.version_info.major == 2:  # check whether the Python version is Python 2
#     import Tkinter as tk
# else:
#     import tkinter as tk


UNIT = 40   # pixels
MAZE_H = 4  # grid height
MAZE_W = 4  # grid width


class Maze(tk.Tk, object):
    def __init__(self):
        super(Maze, self).__init__()
        # Action Space
        self.action_space = ['up', 'down', 'right', 'left'] # action space 
        self.n_actions = len(self.action_space)

        # build the GUI
        self.title('Maze env')
        self.geometry('{0}x{1}'.format(MAZE_W * UNIT, MAZE_H * UNIT))   # set the window size as "width x height"
        self._build_maze()

    def _build_maze(self):
        self.canvas = tk.Canvas(self, bg='white',
                           height=MAZE_H * UNIT,
                           width=MAZE_W * UNIT)     # create the background canvas

        # create grids
        for c in range(UNIT, MAZE_W * UNIT, UNIT): # draw the vertical (column) grid lines
            x0, y0, x1, y1 = c, 0, c, MAZE_H * UNIT
            self.canvas.create_line(x0, y0, x1, y1)
        for r in range(UNIT, MAZE_H * UNIT, UNIT): # draw the horizontal (row) grid lines
            x0, y0, x1, y1 = 0, r, MAZE_W * UNIT, r
            self.canvas.create_line(x0, y0, x1, y1)

        # create origin: the center of the first grid cell
        origin = np.array([UNIT/2, UNIT/2]) 

        # hell1
        hell1_center = origin + np.array([UNIT * 2, UNIT])
        self.hell1 = self.canvas.create_rectangle(
            hell1_center[0] - (UNIT/2 - 5), hell1_center[1] - (UNIT/2 - 5),
            hell1_center[0] + (UNIT/2 - 5), hell1_center[1] + (UNIT/2 - 5),
            fill='black')
        # hell2
        hell2_center = origin + np.array([UNIT, UNIT * 2])
        self.hell2 = self.canvas.create_rectangle(
            hell2_center[0] - (UNIT/2 - 5), hell2_center[1] - (UNIT/2 - 5),
            hell2_center[0] + (UNIT/2 - 5), hell2_center[1] + (UNIT/2 - 5),
            fill='black')

        # create oval: draw the goal (cheese) circle
        oval_center = origin + np.array([UNIT*2, UNIT*2])
        self.oval = self.canvas.create_oval(
            oval_center[0] - (UNIT/2 - 5), oval_center[1] - (UNIT/2 - 5),
            oval_center[0] + (UNIT/2 - 5), oval_center[1] + (UNIT/2 - 5),
            fill='yellow')

        # create red rect: draw the agent as a red square, starting at the top-left cell
        self.rect = self.canvas.create_rectangle(
            origin[0] - (UNIT/2 - 5), origin[1] - (UNIT/2 - 5),
            origin[0] + (UNIT/2 - 5), origin[1] + (UNIT/2 - 5),
            fill='red')

        # pack all: display the canvas
        self.canvas.pack()


    def get_state(self, rect):
        # convert the coordinate observation to a state tuple,
        # using each cell's center, normalized to a grid index, such as
        # |(1,1)|(2,1)|(3,1)|...
        # |(1,2)|(2,2)|(3,2)|...
        # |(1,3)|(2,3)|(3,3)|...
        # |....
        x0, y0, x1, y1 = self.canvas.coords(rect)
        x_center = (x0 + x1) / 2
        y_center = (y0 + y1) / 2
        state = ((x_center - (UNIT/2)) / UNIT + 1, (y_center - (UNIT/2)) / UNIT + 1)
        return state


    def reset(self):
        self.update()
        self.after(500) # delay 500ms
        self.canvas.delete(self.rect)   # delete origin rectangle
        origin = np.array([UNIT/2, UNIT/2])
        self.rect = self.canvas.create_rectangle(
            origin[0] - (UNIT/2 - 5), origin[1] - (UNIT/2 - 5),
            origin[0] + (UNIT/2 - 5), origin[1] + (UNIT/2 - 5),
            fill='red')
        # return observation 
        return self.get_state(self.rect)   

    

    def step(self, action):
        # one interaction step between the agent and the environment
        s = self.get_state(self.rect)   # get the agent's current state
        base_action = np.array([0, 0])
        reach_boundary = False
        if action == self.action_space[0]:   # up
            if s[1] > 1:
                base_action[1] -= UNIT
            else: # hitting the boundary gives reward = -1 and the agent stays in place
                reach_boundary = True

        elif action == self.action_space[1]:   # down
            if s[1] < MAZE_H:
                base_action[1] += UNIT
            else:
                reach_boundary = True   

        elif action == self.action_space[2]:   # right
            if s[0] < MAZE_W:
                base_action[0] += UNIT
            else:
                reach_boundary = True

        elif action == self.action_space[3]:   # left
            if s[0] > 1:
                base_action[0] -= UNIT
            else:
                reach_boundary = True

        self.canvas.move(self.rect, base_action[0], base_action[1])  # move agent

        s_ = self.get_state(self.rect)  # next state

        # reward function
        if s_ == self.get_state(self.oval):     # reach the terminal
            reward = 1
            done = True
            s_ = 'success'
        elif s_ == self.get_state(self.hell1): # reach the block
            reward = -1
            s_ = 'block_1'
            done = False
        elif s_ == self.get_state(self.hell2):
            reward = -1
            s_ = 'block_2'
            done = False
        else:
            reward = 0
            done = False
            if reach_boundary:
                reward = -1

        return s_, reward, done

    def render(self):
        time.sleep(0.15)
        self.update()




if __name__ == '__main__':
    def test():
        for t in range(10):
            s = env.reset()
            print(s)
            while True:
                env.render()
                a = 'right'
                s, r, done = env.step(a)
                print(s)
                if done:
                    break
    env = Maze()
    env.after(100, test)      # call test() after a 100 ms delay
    env.mainloop()



An important part of this environment is the design of the reward function, which is as follows:

$$
\text{reward} = \left\{ \begin{aligned} & 1, && \text{if the agent reaches the cheese} \\ & -1, && \text{if the agent reaches a trap or the boundary} \\ & 0, && \text{otherwise} \end{aligned} \right.
$$

We need to explain some functions of the class Maze.

  • First, the function _build_maze creates the initial maze layout.
    In this example the state of each cell is its grid index rather than its raw pixel coordinates.
  • Second, the function get_state converts the canvas coordinates of a rectangle into that numerical grid index, such as $(1,1), (1,2), \cdots$ (a quick numeric check follows after this list).
  • Third, the function reset renews the state, i.e., it places the mouse back in the starting cell.
  • Then, the function step lets the agent interact with the environment for one step and returns the next state, the reward, and the done flag.
  • Finally, the function render refreshes the window with a short delay.
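
A quick numeric check of the conversion in get_state, assuming UNIT = 40 and taking the bounding box tkinter would report for the red square one cell to the right of the start:

UNIT = 40                                # same cell size as in the environment
x0, y0, x1, y1 = 45.0, 5.0, 75.0, 35.0   # bounding box of the red square (column 2, row 1)
x_center = (x0 + x1) / 2                 # 60.0
y_center = (y0 + y1) / 2                 # 20.0
state = ((x_center - (UNIT/2)) / UNIT + 1, (y_center - (UNIT/2)) / UNIT + 1)
print(state)                             # (2.0, 1.0)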

1.3 Tabular Q-learning Algorithm

import numpy as np
import pandas as pd


class QLearningTable():
    def __init__(self, actions, learning_rate=0.05, reward_decay=0.9, e_greedy=0.9):
        self.actions = actions  # action list
        self.lr = learning_rate
        self.gamma = reward_decay
        self.epsilon = e_greedy # epsilon greedy update policy
        self.q_table = pd.DataFrame(columns=self.actions, dtype=np.float64)

    def check_state_exist(self, state):
        if state not in self.q_table.index:
            # append new state to q table, use the coordinate as the observation

            # self.q_table = self.q_table.append(       # DataFrame.append was removed in newer pandas
            #     pd.Series(
            #         [0]*len(self.actions),
            #         index=self.q_table.columns,
            #         name=state,
            #     )
            # )

            self.q_table = pd.concat(
                [
                self.q_table,
                pd.DataFrame(
                        data=np.zeros((1,len(self.actions))),
                        columns = self.q_table.columns,
                        index = [state]
                    )
                ]
            )

    def choose_action(self, observation):
        self.check_state_exist(observation)
        # action selection: epsilon-greedy algorithm
        if np.random.uniform() < self.epsilon:
            state_action = self.q_table.loc[observation, :]
            # some actions may share the maximum value; randomly choose one of them
            # state_action == np.max(state_action) generates a boolean mask
            # choose the best action
            action = np.random.choice(state_action[state_action == np.max(state_action)].index)
        else:
            # choose random action
            action = np.random.choice(self.actions)
        return action

    def learn(self, s, a, r, s_):
        self.check_state_exist(s_)
        q_predict = self.q_table.loc[s, a]
        if s_ != 'success':
            q_target = r + self.gamma * self.q_table.loc[s_, :].max()  # next state is not terminal
        else:
            q_target = r  # next state is terminal
        self.q_table.loc[s, a] += self.lr * (q_target - q_predict)  # update
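
Before wiring the class into the environment, a minimal sanity check on a hand-made transition may help (a sketch that only needs the class above; the state strings are arbitrary placeholders, not values produced by Maze):

agent = QLearningTable(actions=['up', 'down', 'right', 'left'])
print(agent.choose_action('(1.0, 1.0)'))               # registers the state and prints a chosen action
agent.learn('(1.0, 1.0)', 'right', -1, '(2.0, 1.0)')   # one update for a reward of -1
print(agent.q_table)                                   # the 'right' entry of (1.0, 1.0) becomes -0.05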

We store the Q-table as a pandas DataFrame. The functions are explained as follows.

  • First, the function check_state_exist checks whether a state already exists in the Q-table; if not, a new zero-valued row is appended for it. In this way a state is only added to the Q-table once it has been visited.
  • Second, the function choose_action follows the $\epsilon$-greedy algorithm (a numeric check with the script's defaults is given after the equations below):

$$
\pi(a|s) = \left\{ \begin{aligned} & 1 - \frac{\epsilon}{|\mathcal{A}(s)|}\big(|\mathcal{A}(s)|-1\big), && \text{for the greedy action} \\ & \frac{\epsilon}{|\mathcal{A}(s)|}, && \text{for each of the other } |\mathcal{A}(s)|-1 \text{ actions} \end{aligned} \right.
$$

  • Third, the function learn updates the Q-value as the Q-learning algorithm prescribes (a hand-computed update is given below):

$$
\text{Q-learning}: \left\{ \begin{aligned} q_{t+1}(s_t,a_t) &= q_t(s_t,a_t) - \alpha_t(s_t,a_t) \Big[ q_t(s_t,a_t) - \big( r_{t+1} + \gamma \max_{a\in\mathcal{A}(s_{t+1})} q_t(s_{t+1},a) \big) \Big] \\ q_{t+1}(s,a) &= q_t(s,a), \quad \text{for all } (s,a) \ne (s_t,a_t) \end{aligned} \right.
$$
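
As a quick check, note that the e_greedy = 0.9 passed to QLearningTable plays the role of $1-\epsilon$ in the formula above, because the exploration branch re-draws uniformly over all actions, including the greedy one. With $\epsilon = 1 - 0.9 = 0.1$ and $|\mathcal{A}(s)| = 4$,

$$
\pi(a^\ast|s) = 1 - \frac{0.1}{4}\times 3 = 0.925, \qquad \pi(a|s) = \frac{0.1}{4} = 0.025 \ \text{ for each of the other three actions}.
$$

As a second check, here is a hand-computed update with the script's defaults (learning_rate = 0.05, reward_decay = 0.9), assuming a freshly added state whose Q-values are still zero and a step that hits a wall with reward $-1$:

lr, gamma = 0.05, 0.9
q_sa, reward, max_q_next = 0.0, -1.0, 0.0
q_target = reward + gamma * max_q_next   # -1.0
q_sa += lr * (q_target - q_sa)           # 0 + 0.05 * (-1 - 0) = -0.05
print(q_sa)                              # -0.05, a value that shows up repeatedly in the Q-table of Section 1.5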

1.4 Run the main script

Running the following main script ties everything together. The environment code above is assumed to be saved as maze_env_custom.py and the Q-learning class as RL_brain.py, matching the imports below.

from maze_env_custom import Maze
from RL_brain import QLearningTable

MAX_EPISODE = 30


def update():
    for episode in range(MAX_EPISODE):
        # initial observation: the grid-index state tuple returned by env.reset(),
        # e.g. (1.0, 1.0) for the starting (top-left) cell
        observation = env.reset()

        while True:
            # fresh env
            env.render()

            # RL choose action based on observation ['up', 'down', 'right', 'left']
            action = RL.choose_action(str(observation))

            # RL take action and get next observation and reward
            observation_, reward, done = env.step(action)

            # RL learn from this transition
            RL.learn(str(observation), action, reward, str(observation_))

            # swap observation
            observation = observation_

            # break while loop when end of this episode
            if done:
                break

        # show q_table
        print(RL.q_table)
        print('\n')

    # end of game
    print('game over')
    env.destroy()

if __name__ == "__main__":
    env = Maze()
    RL = QLearningTable(env.action_space)

    env.after(100, update)
    env.mainloop()

1.5 Check the Q table

After a long run we can check the Q-table to judge whether the learning is reasonable. The Q-table is as follows:

                  up      down     right          left
(1.0, 1.0) -0.226208  0.000963  0.000000 -9.750000e-02
(1.0, 2.0)  0.000024  0.005773  0.000000 -5.000000e-02
(2.0, 1.0) -0.050000  0.000000  0.000000  5.247904e-07
(2.0, 2.0)  0.000000 -0.050000 -0.050000  0.000000e+00
block_2     0.000000  0.000000  0.000000  1.793534e-04
(2.0, 4.0) -0.097500 -0.050000  0.336315  2.916072e-03
(1.0, 4.0)  0.002162 -0.140781  0.112337 -5.000000e-02
(1.0, 3.0)  0.000008  0.033479 -0.050000 -9.739821e-02
block_1     0.000000  0.097500  0.000000  0.000000e+00
(4.0, 2.0)  0.000000  0.006525 -0.050000 -5.000000e-02
success     0.000000  0.000000  0.000000  0.000000e+00
(3.0, 1.0) -0.050000 -0.047750  0.000000  0.000000e+00
(3.0, 4.0)  0.722610 -0.050000  0.000000  1.298347e-02
(4.0, 1.0) -0.050000  0.000101 -0.050000  0.000000e+00
(4.0, 3.0)  0.000000  0.000000  0.000000  1.426250e-01

For example, at the starting cell (1.0, 1.0), moving up or left makes the mouse hit the boundary and receive reward $-1$; hence the corresponding action values in the Q-table are negative.
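
One convenient way to read the learned behaviour out of this table is to take the greedy action per state with pandas (an inspection helper assumed here, not part of the original scripts; RL is the agent created in Section 1.4):

greedy_policy = RL.q_table.idxmax(axis=1)   # the action (column) with the largest value in each state (row)
print(greedy_policy)
# e.g. the row (3.0, 4.0) comes out as 'up', since 0.7226 is its largest entry:
# the mouse in the cell just below the cheese learned to step onto it.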


Reference

Zhao Shiyu's course, Mathematical Foundations of Reinforcement Learning
Mofan's Reinforcement Learning course
