Preface
This notebook provides an introductory tutorial on MuJoCo physics, using the native Python bindings.
Copyright notice
Copyright DeepMind Technologies Limited 2022. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0.
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" basis, without warranties or conditions of any kind, either express or implied. See the License for the specific language governing permissions and limitations under the License.
All code in this tutorial is meant to be run in a Jupyter notebook.
1. All imports
!pip install mujoco
# Set up GPU rendering.
from google.colab import files
import distutils.util
import os
import subprocess
if subprocess.run('nvidia-smi').returncode:
  raise RuntimeError(
      'Cannot communicate with GPU. '
      'Make sure you are using a GPU Colab runtime. '
      'Go to the Runtime menu and select Choose runtime type.')
# Add an ICD config so that glvnd can pick up the Nvidia EGL driver.
# This is usually installed as part of an Nvidia driver package, but the Colab
# kernel doesn't install its driver via APT, and as a result the ICD is missing.
# (https://github.com/NVIDIA/libglvnd/blob/master/src/EGL/icd_enumeration.md)
NVIDIA_ICD_CONFIG_PATH = '/usr/share/glvnd/egl_vendor.d/10_nvidia.json'
if not os.path.exists(NVIDIA_ICD_CONFIG_PATH):
  with open(NVIDIA_ICD_CONFIG_PATH, 'w') as f:
    f.write("""{
    "file_format_version" : "1.0.0",
    "ICD" : {
        "library_path" : "libEGL_nvidia.so.0"
    }
}
""")
# Configure MuJoCo to use the EGL rendering backend (requires GPU)
print('Setting environment variable to use GPU rendering:')
%env MUJOCO_GL=egl
# Check if installation was successful.
try:
  print('Checking that the installation succeeded:')
  import mujoco
  mujoco.MjModel.from_xml_string('<mujoco/>')
except Exception as e:
  raise e from RuntimeError(
      'Something went wrong during installation. Check the shell output above '
      'for more information.\n'
      'If using a hosted Colab runtime, make sure you enable GPU acceleration '
      'by going to the Runtime menu and selecting "Choose runtime type".')
print('Installation successful.')
# Other imports and helper functions
import time
import itertools
import numpy as np
# Graphics and plotting.
print('Installing mediapy:')
!command -v ffmpeg >/dev/null || (apt update && apt install -y ffmpeg)
!pip install -q mediapy
import mediapy as media
import matplotlib.pyplot as plt
# More legible printing from numpy.
np.set_printoptions(precision=3, suppress=True, linewidth=100)
from IPython.display import clear_output
clear_output()
2. MuJoCo basics
We begin by defining and loading a simple model:
import mujoco
xml = """
<mujoco>
<worldbody>
<geom name="red_box" type="box" size=".2 .2 .2" rgba="1 0 0 1"/>
<geom name="green_sphere" pos=".2 .2 .2" size=".1" rgba="0 1 0 1"/>
</worldbody>
</mujoco>
"""
model = mujoco.MjModel.from_xml_string(xml)
The xml string is written in MuJoCo's MJCF, an XML-based modeling language.
- The only required element is <mujoco>. The smallest valid MJCF model is <mujoco/>, which is a completely empty model.
- All physical elements live inside the <worldbody>, which is always the top-level body and constitutes the global origin in Cartesian coordinates.
- We define two geoms in the world, named red_box and green_sphere.
- Question: the red_box has no position and the green_sphere has no type. Why is that?
- Answer: MJCF attributes have default values. The default position is 0 0 0 and the default geom type is sphere. The MJCF language is described in the documentation's XML reference chapter.
The from_xml_string() method invokes the model compiler, which creates a binary mjModel instance.
2.1 mjModel
MuJoCo's mjModel contains the model description, i.e. all quantities that do not change over time. The complete description of mjModel can be found at the end of the header file mjmodel.h. Note that the header files contain short, useful inline comments describing each field.
Examples of quantities that can be found in mjModel are ngeom, the number of geoms in the scene, and geom_rgba, their respective colors:
model.ngeom
model.geom_rgba
2.1.1 Named access
The MuJoCo Python bindings provide convenient accessors using names. Calling the model.geom() accessor without a name string generates a convenient error telling us what the valid names are.
try:
  model.geom()
except KeyError as e:
  print(e)
Calling the named accessor without specifying a property tells us what all the valid properties are:
model.geom('green_sphere')
Let's read the rgba values of green_sphere:
model.geom('green_sphere').rgba
This functionality is a shortcut for MuJoCo's mj_name2id function:
id = mujoco.mj_name2id(model, mujoco.mjtObj.mjOBJ_GEOM, 'green_sphere')
model.geom_rgba[id, :]
Similarly, the read-only id and name properties can be used to convert from ids to names and back:
print('id of "green_sphere": ', model.geom('green_sphere').id)
print('name of geom 1: ', model.geom(1).name)
print('name of body 0: ', model.body(0).name)
Note that the 0th body is always the world body. It cannot be renamed.
The id and name attributes are useful in Python comprehensions:
[model.geom(i).name for i in range(model.ngeom)]
2.2 mjData
mjData contains the state and the quantities that depend on it. The state is made up of time, the generalized positions and the generalized velocities. These are, respectively, data.time, data.qpos and data.qvel. In order to make a new mjData, all we need is our mjModel:
data = mujoco.MjData(model)
mjData also contains functions of the state, for example the Cartesian positions of objects in the world frame. The (x, y, z) positions of our two geoms are in data.geom_xpos:
print(data.geom_xpos)
Wait, why are both of our geoms at the origin? Didn't we offset the green sphere? The answer is that derived quantities in mjData need to be explicitly propagated (see below). In our case, the minimal required function is mj_kinematics, which computes the global Cartesian poses of all objects (excluding cameras and lights).
mujoco.mj_kinematics(model, data)
print('raw access:\n', data.geom_xpos)
# MjData also supports named access:
print('\nnamed access:\n', data.geom('green_sphere').xpos)
2.3 Basic rendering, simulation, and animation
In order to render we need to instantiate a Renderer object and call its render method.
We'll also reload our model so that the colab sections are independent of each other.
import mediapy as media
xml = """
<mujoco>
<worldbody>
<geom name="red_box" type="box" size=".2 .2 .2" rgba="1 0 0 1"/>
<geom name="green_sphere" pos=".2 .2 .2" size=".1" rgba="0 1 0 1"/>
</worldbody>
</mujoco>
"""
# Make model and data
model = mujoco.MjModel.from_xml_string(xml)
data = mujoco.MjData(model)
# Make renderer, render and show the pixels
renderer = mujoco.Renderer(model)
media.show_image(renderer.render())
Hmm, why are the pixels all black?
The answer is the same as above: we first need to propagate the values in mjData. This time we'll call mj_forward, which invokes the entire pipeline up to the computation of accelerations, i.e. it computes ẋ = f(x), where x is the state. This function does more than we actually need, but unless we care about saving computation time, it is good practice to call mj_forward, since then we know we are not missing anything.
We also need to update the mjvScene, an object held by the renderer that describes the visual scene. We'll later see that the scene can include visual objects which are not part of the physical model.
mujoco.mj_forward(model, data)
renderer.update_scene(data)
media.show_image(renderer.render())
This worked, but the image is a bit dark. Let's add a light and re-render.
xml = """
<mujoco>
<worldbody>
<light name="top" pos="0 0 1"/>
<geom name="red_box" type="box" size=".2 .2 .2" rgba="1 0 0 1"/>
<geom name="green_sphere" pos=".2 .2 .2" size=".1" rgba="0 1 0 1"/>
</worldbody>
</mujoco>
"""
model = mujoco.MjModel.from_xml_string(xml)
data = mujoco.MjData(model)
renderer = mujoco.Renderer(model)
mujoco.mj_forward(model, data)
renderer.update_scene(data)
media.show_image(renderer.render())
Much better!
Note that all values in the mjModel instance are writable. While this is generally not recommended (it is preferable to change values in the XML, because it is easy to create an invalid model), some values are safe to write into, for example colors:
# Run this cell multiple times for different colors
model.geom('red_box').rgba[:3] = np.random.rand(3)
renderer.update_scene(data)
media.show_image(renderer.render())
2.4 Simulation
Now let's simulate and make a video. We'll use MuJoCo's main high-level function mj_step, which steps the state forward: x(t+h) = f(x(t)).
Note that in the code block below we are not rendering after each call to mj_step. This is because the default timestep is 2ms, and we want a 60fps video, not 500fps.
duration = 3.8 # (seconds)
framerate = 60 # (Hz)
# Simulate and display video.
frames = []
mujoco.mj_resetData(model, data) # Reset state and time.
while data.time < duration:
  mujoco.mj_step(model, data)
  if len(frames) < data.time * framerate:
    renderer.update_scene(data)
    pixels = renderer.render()
    frames.append(pixels)
media.show_video(frames, fps=framerate)
The video plays, but nothing is moving. Why is that?
This is because the model has no degrees of freedom (DoFs). Things that move (and have inertia) are called bodies. We add DoFs by adding joints to bodies, specifying how they can move with respect to their parents. Let's make a new body that contains our geoms, add a hinge joint and re-render, while visualizing the joint axis using the visualization option object MjvOption.
xml = """
<mujoco>
<worldbody>
<light name="top" pos="0 0 1"/>
<body name="box_and_sphere" euler="0 0 -30">
<joint name="swing" type="hinge" axis="1 -1 0" pos="-.2 -.2 -.2"/>
<geom name="red_box" type="box" size=".2 .2 .2" rgba="1 0 0 1"/>
<geom name="green_sphere" pos=".2 .2 .2" size=".1" rgba="0 1 0 1"/>
</body>
</worldbody>
</mujoco>
"""
model = mujoco.MjModel.from_xml_string(xml)
data = mujoco.MjData(model)
renderer = mujoco.Renderer(model)
# enable joint visualization option:
scene_option = mujoco.MjvOption()
scene_option.flags[mujoco.mjtVisFlag.mjVIS_JOINT] = True
duration = 3.8 # (seconds)
framerate = 60 # (Hz)
frames = []
mujoco.mj_resetData(model, data)
while data.time < duration:
  mujoco.mj_step(model, data)
  if len(frames) < data.time * framerate:
    renderer.update_scene(data, scene_option=scene_option)
    pixels = renderer.render()
    frames.append(pixels)
# Simulate and display video.
media.show_video(frames, fps=framerate)
Note that we rotated the box_and_sphere body by 30° around the Z (vertical) axis with the directive euler="0 0 -30". This was done to emphasize that poses of elements in the kinematic tree are always with respect to their parent body, so our two geoms were also rotated by this transformation.
Physics options live in mjModel.opt, for example the timestep:
model.opt.timestep
Let's flip gravity and re-render:
print('default gravity', model.opt.gravity)
model.opt.gravity = (0, 0, 10)
print('flipped gravity', model.opt.gravity)
frames = []
mujoco.mj_resetData(model, data)
while data.time < duration:
  mujoco.mj_step(model, data)
  if len(frames) < data.time * framerate:
    renderer.update_scene(data, scene_option=scene_option)
    pixels = renderer.render()
    frames.append(pixels)
media.show_video(frames, fps=60)
We could also have done this using the top-level <option> element in the XML:
<mujoco>
<option gravity="0 0 10"/>
...
</mujoco>
2.5 Understanding degrees of freedom
In the real world, all rigid bodies have 6 degrees of freedom: 3 translations and 3 rotations. Joints in the real world act like constraints, removing relative degrees of freedom from the bodies they connect. Some physics simulation software uses this representation, known as "Cartesian" or "subtractive", but it is inefficient. MuJoCo uses a representation known as "Lagrangian", "generalized" or "additive", whereby objects have no degrees of freedom unless explicitly added using joints.
Our model has one hinge joint with a single degree of freedom, and the entire state is defined by the angle and angular velocity of this joint. These are the generalized position and velocity of the system.
print('Total number of DoFs in the model:', model.nv)
print('Generalized positions:', data.qpos)
print('Generalized velocities:', data.qvel)
The fact that MuJoCo uses generalized coordinates is the reason that a function (e.g. mj_forward) needs to be called before rendering or reading the global poses of objects: Cartesian positions are derived from the generalized positions and need to be explicitly computed.
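The generalized-to-Cartesian mapping can be illustrated without MuJoCo. For a planar pendulum of length l, the single generalized coordinate is the hinge angle θ, and the Cartesian position of the tip is a derived quantity (a hypothetical toy example, not MuJoCo code):

```python
import numpy as np

def pendulum_tip(theta, length=1.0):
    """Cartesian (x, z) tip position derived from the generalized coordinate theta.

    theta is measured from the downward vertical; the pivot is at the origin.
    """
    x = length * np.sin(theta)
    z = -length * np.cos(theta)
    return np.array([x, z])

# At theta = 0 the pendulum hangs straight down.
print(pendulum_tip(0.0))        # [ 0. -1.]
# At theta = pi/2 it points horizontally.
print(pendulum_tip(np.pi / 2))  # [ 1.  0.] (up to floating-point error)
```

One number (θ) fully determines the configuration; the two Cartesian coordinates are computed from it, just as mj_kinematics computes geom_xpos from qpos.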
3. Example: simulating free bodies with the self-inverting "tippe top"
A free body is a body with a free joint, with 6 DoFs: 3 translations and 3 rotations. We could give our box_and_sphere body a free joint and watch it fall, but let's look at something more interesting. A "tippe top" is a spinning toy that flips itself over (video, Wikipedia). We model it as follows:
tippe_top = """
"""
model = mujoco.MjModel.from_xml_string(tippe_top)
renderer = mujoco.Renderer(model)
data = mujoco.MjData(model)
mujoco.mj_forward(model, data)
renderer.update_scene(data, camera="closeup")
media.show_image(renderer.render())
Note several new features of this model definition:
- A 6-DoF free joint is added with the <freejoint/> clause.
- We use the <option/> clause to set the integrator to the 4th-order Runge-Kutta. Runge-Kutta has a higher rate of convergence than the default Euler integrator, which in many cases increases the accuracy at a given timestep.
- We define the floor's grid material in the <asset/> clause and reference it in the "floor" geom.
- We use an invisible, non-colliding box geom called "ballast" to lower the top's center of mass. A low center of mass is (counter-intuitively) required for the flipping behavior to occur.
- We save the initial spinning state as a keyframe. It has a high rotational velocity around the Z axis, but is not perfectly oriented with the world, which introduces the symmetry-breaking required for the flipping.
- We define a <camera> in our model, and then render from it using the camera argument of update_scene(). Let's examine the state:
print('positions', data.qpos)
print('velocities', data.qvel)
The velocities are easy to interpret: 6 zeros, one for each DoF. What about the length-7 positions? We can see the initial 2 cm height of the body; the subsequent four numbers are the 3D orientation, defined by a unit quaternion. 3D orientations are represented with 4 numbers, while angular velocities are represented with 3 numbers. For more information, see the Wikipedia article on quaternions and spatial rotation.
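As a quick numpy illustration of how a unit quaternion encodes a 3D rotation with 4 numbers, here is a small sketch (an assumed helper, not part of the MuJoCo API) that builds the quaternion for a rotation about an axis and checks its unit norm:

```python
import numpy as np

def axis_angle_to_quat(axis, angle):
    """Unit quaternion [w, x, y, z] for a rotation of `angle` radians about `axis`."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    half = angle / 2
    return np.concatenate([[np.cos(half)], np.sin(half) * axis])

quat = axis_angle_to_quat([0, 0, 1], np.pi / 2)  # 90 degrees about Z
print(quat)                   # [0.707 0.    0.    0.707]
print(np.linalg.norm(quat))   # 1.0: unit quaternions live on the unit 3-sphere
```

The unit-norm constraint is why 4 numbers describe only 3 rotational degrees of freedom, matching the 3 angular-velocity numbers.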
Let's make a video:
duration = 7 # (seconds)
framerate = 60 # (Hz)
# Simulate and display video.
frames = []
mujoco.mj_resetDataKeyframe(model, data, 0) # Reset the state to keyframe 0
while data.time < duration:
  mujoco.mj_step(model, data)
  if len(frames) < data.time * framerate:
    renderer.update_scene(data, "closeup")
    pixels = renderer.render()
    frames.append(pixels)
media.show_video(frames, fps=framerate)
3.1 Measuring values from mjData
As mentioned above, the mjData structure contains the dynamic variables and intermediate results produced by the simulation, which are expected to change on each timestep. Below we simulate for 2000 timesteps and plot the angular velocity of the top and the height of the stem as a function of time.
timevals = []
angular_velocity = []
stem_height = []
# Simulate and save data
mujoco.mj_resetDataKeyframe(model, data, 0)
while data.time < duration:
  mujoco.mj_step(model, data)
  timevals.append(data.time)
  angular_velocity.append(data.qvel[3:6].copy())
  stem_height.append(data.geom_xpos[2,2]);
dpi = 120
width = 600
height = 800
figsize = (width / dpi, height / dpi)
_, ax = plt.subplots(2, 1, figsize=figsize, dpi=dpi, sharex=True)
ax[0].plot(timevals, angular_velocity)
ax[0].set_title('angular velocity')
ax[0].set_ylabel('radians / second')
ax[1].plot(timevals, stem_height)
ax[1].set_xlabel('time (seconds)')
ax[1].set_ylabel('meters')
_ = ax[1].set_title('stem height')
3.2 Example: chaotic pendulum
Below is a model of a chaotic pendulum, similar to this one at the San Francisco Exploratorium.
chaotic_pendulum = """
"""
model = mujoco.MjModel.from_xml_string(chaotic_pendulum)
renderer = mujoco.Renderer(model, 480, 640)
data = mujoco.MjData(model)
mujoco.mj_forward(model, data)
renderer.update_scene(data, camera="fixed")
media.show_image(renderer.render())
3.2.1 Timing
Let's see a video of it in action while we time the physics and rendering:
# setup
n_seconds = 6
framerate = 30 # Hz
n_frames = int(n_seconds * framerate)
frames = []
renderer = mujoco.Renderer(model, 240, 320)
# set initial state
mujoco.mj_resetData(model, data)
data.joint('root').qvel = 10
# simulate and record frames
frame = 0
sim_time = 0
render_time = 0
n_steps = 0
for i in range(n_frames):
  while data.time * framerate < i:
    tic = time.time()
    mujoco.mj_step(model, data)
    sim_time += time.time() - tic
    n_steps += 1
  tic = time.time()
  renderer.update_scene(data, "fixed")
  frame = renderer.render()
  render_time += time.time() - tic
  frames.append(frame)
# print timing and play video
step_time = 1e6*sim_time/n_steps
step_fps = n_steps/sim_time
print(f'simulation: {step_time:5.3g} μs/step ({step_fps:5.0f}Hz)')
frame_time = 1e6*render_time/n_frames
frame_fps = n_frames/render_time
print(f'rendering: {frame_time:5.3g} μs/frame ({frame_fps:5.0f}Hz)')
print('\n')
# show video
media.show_video(frames, fps=framerate)
Note that rendering is much slower than simulating the physics.
3.2.2 Chaos
This is a chaotic system: small perturbations of the initial conditions accumulate quickly:
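The same phenomenon can be demonstrated with any chaotic system. As a MuJoCo-free sketch, here are two trajectories of the logistic map whose initial conditions differ by only 1e-7; the gap grows to order 1 within a few dozen iterations:

```python
import numpy as np

def logistic_trajectory(x0, n=50, r=4.0):
    """Iterate the chaotic logistic map x <- r*x*(1-x) for n steps."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return np.array(xs)

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-7)  # tiny perturbation of the initial condition
gap = np.abs(a - b)
print(gap[1])     # still tiny right after the perturbation
print(gap.max())  # grows to order 1 within ~50 iterations
```

This exponential amplification of tiny differences is exactly what the PERTURBATION experiment below shows for the pendulum.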
PERTURBATION = 1e-7
SIM_DURATION = 10 # seconds
NUM_REPEATS = 8
# preallocate
n_steps = int(SIM_DURATION / model.opt.timestep)
sim_time = np.zeros(n_steps)
angle = np.zeros(n_steps)
energy = np.zeros(n_steps)
# prepare plotting axes
_, ax = plt.subplots(2, 1, figsize=(8, 6), sharex=True)
# simulate NUM_REPEATS times with slightly different initial conditions
for _ in range(NUM_REPEATS):
  # initialize
  mujoco.mj_resetData(model, data)
  data.qvel[0] = 10 # root joint velocity
  # perturb initial velocities
  data.qvel[:] += PERTURBATION * np.random.randn(model.nv)
  # simulate
  for i in range(n_steps):
    mujoco.mj_step(model, data)
    sim_time[i] = data.time
    angle[i] = data.joint('root').qpos
    energy[i] = data.energy[0] + data.energy[1]
  # plot
  ax[0].plot(sim_time, angle)
  ax[1].plot(sim_time, energy)
# finalize plot
ax[0].set_title('root angle')
ax[0].set_ylabel('radian')
ax[1].set_title('total energy')
ax[1].set_ylabel('Joule')
ax[1].set_xlabel('second')
plt.tight_layout()
3.2.3 Timestep and accuracy
Question: Why does the energy change? There is no friction or damping, so this system should conserve energy.
Answer: Because of the discretization of time.
If we decrease the timestep we get better accuracy and better energy conservation:
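The effect is easy to reproduce with a toy integrator. Below, a plain explicit Euler step on a frictionless harmonic oscillator (not MuJoCo's integrator, just an illustrative sketch) shows the relative energy drift after one simulated second shrinking as the timestep shrinks:

```python
import numpy as np

def euler_energy_drift(dt, duration=1.0):
    """Integrate x'' = -x with explicit Euler; return relative energy gain."""
    x, v = 1.0, 0.0
    e0 = 0.5 * (x**2 + v**2)          # initial total energy
    for _ in range(int(duration / dt)):
        x, v = x + dt * v, v - dt * x  # simultaneous explicit Euler update
    return 0.5 * (x**2 + v**2) / e0 - 1.0

for dt in [1e-1, 1e-2, 1e-3]:
    print(f'dt = {dt:g}: relative energy drift = {euler_energy_drift(dt):.2e}')
```

For this system explicit Euler multiplies the energy by (1 + dt²) every step, so the drift over a fixed duration scales roughly linearly with dt, mirroring the energy plots below.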
SIM_DURATION = 10 # (seconds)
TIMESTEPS = np.power(10, np.linspace(-2, -4, 5))
# prepare plotting axes
_, ax = plt.subplots(1, 1)
for dt in TIMESTEPS:
  # set timestep, print
  model.opt.timestep = dt
  # allocate
  n_steps = int(SIM_DURATION / model.opt.timestep)
  sim_time = np.zeros(n_steps)
  energy = np.zeros(n_steps)
  # initialize
  mujoco.mj_resetData(model, data)
  data.qvel[0] = 9 # root joint velocity
  # simulate
  print('{} steps at dt = {:2.2g}ms'.format(n_steps, 1000*dt))
  for i in range(n_steps):
    mujoco.mj_step(model, data)
    sim_time[i] = data.time
    energy[i] = data.energy[0] + data.energy[1]
  # plot
  ax.plot(sim_time, energy, label='timestep = {:2.2g}ms'.format(1000*dt))
# finalize plot
ax.set_title('energy')
ax.set_ylabel('Joule')
ax.set_xlabel('second')
ax.legend(frameon=True);
plt.tight_layout()
3.2.4 Timestep and divergence
When we increase the timestep, the simulation quickly diverges:
SIM_DURATION = 10 # (seconds)
TIMESTEPS = np.power(10, np.linspace(-2, -1.5, 7))
# get plotting axes
ax = plt.gca()
for dt in TIMESTEPS:
  # set timestep
  model.opt.timestep = dt
  # allocate
  n_steps = int(SIM_DURATION / model.opt.timestep)
  sim_time = np.zeros(n_steps)
  energy = np.zeros(n_steps) * np.nan
  speed = np.zeros(n_steps) * np.nan
  # initialize
  mujoco.mj_resetData(model, data)
  data.qvel[0] = 11 # set root joint velocity
  # simulate
  print('simulating {} steps at dt = {:2.2g}ms'.format(n_steps, 1000*dt))
  for i in range(n_steps):
    mujoco.mj_step(model, data)
    if data.warning.number.any():
      warning_index = np.nonzero(data.warning.number)[0][0]
      warning = mujoco.mjtWarning(warning_index).name
      print(f'stopped due to divergence ({warning}) at timestep {i}.\n')
      break
    sim_time[i] = data.time
    energy[i] = sum(abs(data.qvel))
    speed[i] = np.linalg.norm(data.qvel)
  # plot
  ax.plot(sim_time, energy, label='timestep = {:2.2g}ms'.format(1000*dt))
  ax.set_yscale('log')
ax.set_yscale('log')
# finalize plot
ax.set_ybound(1, 1e3)
ax.set_title('energy')
ax.set_ylabel('Joule')
ax.set_xlabel('second')
ax.legend(frameon=True, loc='lower right');
plt.tight_layout()
3.2.5 Contacts
Let's go back to our box-and-sphere example and give it a free joint:
free_body_MJCF = """
"""
model = mujoco.MjModel.from_xml_string(free_body_MJCF)
renderer = mujoco.Renderer(model, 400, 600)
data = mujoco.MjData(model)
mujoco.mj_forward(model, data)
renderer.update_scene(data, "fixed")
media.show_image(renderer.render())
Let's give this body a slow roll on the floor, while visualizing contact points and forces:
n_frames = 200
height = 240
width = 320
frames = []
renderer = mujoco.Renderer(model, height, width)
# visualize contact frames and forces, make body transparent
options = mujoco.MjvOption()
mujoco.mjv_defaultOption(options)
options.flags[mujoco.mjtVisFlag.mjVIS_CONTACTPOINT] = True
options.flags[mujoco.mjtVisFlag.mjVIS_CONTACTFORCE] = True
options.flags[mujoco.mjtVisFlag.mjVIS_TRANSPARENT] = True
# tweak scales of contact visualization elements
model.vis.scale.contactwidth = 0.1
model.vis.scale.contactheight = 0.03
model.vis.scale.forcewidth = 0.05
model.vis.map.force = 0.3
# random initial rotational velocity:
mujoco.mj_resetData(model, data)
data.qvel[3:6] = 5*np.random.randn(3)
# simulate and render
for i in range(n_frames):
  while data.time < i/120.0: #1/4x real time
    mujoco.mj_step(model, data)
  renderer.update_scene(data, "track", options)
  frame = renderer.render()
  frames.append(frame)
# show video
media.show_video(frames, fps=30)
3.2.6 Analysis of contact forces
Let's re-run the simulation above (with different random initial conditions) and plot some values related to the contact forces:
n_steps = 499
# allocate
sim_time = np.zeros(n_steps)
ncon = np.zeros(n_steps)
force = np.zeros((n_steps,3))
velocity = np.zeros((n_steps, model.nv))
penetration = np.zeros(n_steps)
acceleration = np.zeros((n_steps, model.nv))
forcetorque = np.zeros(6)
# random initial rotational velocity:
mujoco.mj_resetData(model, data)
data.qvel[3:6] = 2*np.random.randn(3)
# simulate and save data
for i in range(n_steps):
  mujoco.mj_step(model, data)
  sim_time[i] = data.time
  ncon[i] = data.ncon
  velocity[i] = data.qvel[:]
  acceleration[i] = data.qacc[:]
  # iterate over active contacts, save force and distance
  for j,c in enumerate(data.contact):
    mujoco.mj_contactForce(model, data, j, forcetorque)
    force[i] += forcetorque[0:3]
    penetration[i] = min(penetration[i], c.dist)
  # we could also do
  # force[i] += data.qfrc_constraint[0:3]
  # do you see why?
# plot
_, ax = plt.subplots(3, 2, sharex=True, figsize=(10, 10))
lines = ax[0,0].plot(sim_time, force)
ax[0,0].set_title('contact force')
ax[0,0].set_ylabel('Newton')
ax[0,0].legend(iter(lines), ('normal z', 'friction x', 'friction y'));
ax[1,0].plot(sim_time, acceleration)
ax[1,0].set_title('acceleration')
ax[1,0].set_ylabel('(meter,radian)/s/s')
ax[2,0].plot(sim_time, velocity)
ax[2,0].set_title('velocity')
ax[2,0].set_ylabel('(meter,radian)/s')
ax[2,0].set_xlabel('second')
ax[0,1].plot(sim_time, ncon)
ax[0,1].set_title('number of contacts')
ax[0,1].set_yticks(range(6))
ax[1,1].plot(sim_time, force[:,0])
ax[1,1].set_yscale('log')
ax[1,1].set_title('normal (z) force - log scale')
ax[1,1].set_ylabel('Newton')
z_gravity = -model.opt.gravity[2]
mg = model.body("box_and_sphere").mass[0] * z_gravity
mg_line = ax[1,1].plot(sim_time, np.ones(n_steps)*mg, label='m*g', linewidth=1)
ax[1,1].legend()
ax[2,1].plot(sim_time, 1000*penetration)
ax[2,1].set_title('penetration depth')
ax[2,1].set_ylabel('millimeter')
ax[2,1].set_xlabel('second')
plt.tight_layout()
3.2.7 Friction
Let's see the effect of changing the friction value:
MJCF = """
"""
n_frames = 60
height = 300
width = 300
frames = []
# load
model = mujoco.MjModel.from_xml_string(MJCF)
data = mujoco.MjData(model)
renderer = mujoco.Renderer(model, height, width)
# simulate and render
mujoco.mj_resetData(model, data)
for i in range(n_frames):
  while data.time < i/30.0:
    mujoco.mj_step(model, data)
  renderer.update_scene(data, "y")
  frame = renderer.render()
  frames.append(frame)
media.show_video(frames, fps=30)
4. Tendons, actuators and sensors
MJCF = """
"""
model = mujoco.MjModel.from_xml_string(MJCF)
renderer = mujoco.Renderer(model, 480, 480)
data = mujoco.MjData(model)
mujoco.mj_forward(model, data)
renderer.update_scene(data, "fixed")
media.show_image(renderer.render())
An actuated bat and a passive "piñata":
n_frames = 180
height = 240
width = 320
frames = []
fps = 60.0
times = []
sensordata = []
renderer = mujoco.Renderer(model, height, width)
# constant actuator signal
mujoco.mj_resetData(model, data)
data.ctrl = 20
# simulate and render
for i in range(n_frames):
  while data.time < i/fps:
    mujoco.mj_step(model, data)
    times.append(data.time)
    sensordata.append(data.sensor('accelerometer').data.copy())
  renderer.update_scene(data, "fixed")
  frame = renderer.render()
  frames.append(frame)
media.show_video(frames, fps=fps)
Let's plot the values measured by the accelerometer:
ax = plt.gca()
ax.plot(np.asarray(times), np.asarray(sensordata), label='timestep = {:2.2g}ms'.format(1000*model.opt.timestep))
# finalize plot
ax.set_title('Accelerometer values')
ax.set_ylabel('meter/second^2')
ax.set_xlabel('second')
ax.legend(frameon=True, loc='lower right');
plt.tight_layout()
Note that the moment when the body is hit by the bat is clearly visible in the accelerometer measurements.
5. Advanced rendering
As with joint visualization, other rendering options are also exposed as arguments to the rendering methods.
Let's go back to our first model:
xml = """
"""
model = mujoco.MjModel.from_xml_string(xml)
renderer = mujoco.Renderer(model)
data = mujoco.MjData(model)
mujoco.mj_forward(model, data)
renderer.update_scene(data)
media.show_image(renderer.render())
#@title Enable transparency and frame visualization {vertical-output: true}
scene_option = mujoco.MjvOption()  # fresh options, so this cell is self-contained
scene_option.frame = mujoco.mjtFrame.mjFRAME_GEOM
scene_option.flags[mujoco.mjtVisFlag.mjVIS_TRANSPARENT] = True
renderer.update_scene(data, scene_option=scene_option)
frame = renderer.render()
media.show_image(frame)
#@title Depth rendering {vertical-output: true}
# update renderer to render depth
renderer.enable_depth_rendering()
# reset the scene
renderer.update_scene(data)
# depth is a float array, in meters.
depth = renderer.render()
# Shift nearest values to the origin.
depth -= depth.min()
# Scale by 2 mean distances of near rays.
depth /= 2*depth[depth <= 1].mean()
# Scale to [0, 255]
pixels = 255*np.clip(depth, 0, 1)
media.show_image(pixels.astype(np.uint8))
renderer.disable_depth_rendering()
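The depth normalization above maps metric depth to displayable 8-bit pixels. The same post-processing can be checked on a synthetic float "depth image" (plain numpy, no renderer needed):

```python
import numpy as np

# Synthetic depth map: a 4x4 image with distances between 1 m and 3 m.
depth = np.linspace(1.0, 3.0, 16, dtype=np.float32).reshape(4, 4)

depth -= depth.min()                    # shift nearest values to the origin
depth /= 2 * depth[depth <= 1].mean()   # scale by 2x mean distance of near values
pixels = (255 * np.clip(depth, 0, 1)).astype(np.uint8)

print(pixels.min(), pixels.max())  # 0 ... 255: far pixels saturate to white
```

Clipping before the uint8 cast is what prevents distant values from wrapping around instead of saturating.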
#@title Segmentation rendering {vertical-output: true}
# update renderer to render segmentation
renderer.enable_segmentation_rendering()
# reset the scene
renderer.update_scene(data)
seg = renderer.render()
# Display the contents of the first channel, which contains object
# IDs. The second channel, seg[:, :, 1], contains object types.
geom_ids = seg[:, :, 0]
# Infinity is mapped to -1
geom_ids = geom_ids.astype(np.float64) + 1
# Scale to [0, 1]
geom_ids = geom_ids / geom_ids.max()
pixels = 255*geom_ids
media.show_image(pixels.astype(np.uint8))
renderer.disable_segmentation_rendering()
5.1 Camera matrices
For a description of the camera matrix, see the Wikipedia article on camera matrix.
def compute_camera_matrix(renderer, data):
  """Returns the 3x4 camera matrix."""
  # If the camera is a 'free' camera, we get its position and orientation
  # from the scene data structure. It is a stereo camera, so we average over
  # the left and right channels. Note: we call `self.update()` in order to
  # ensure that the contents of `scene.camera` are correct.
  renderer.update_scene(data)
  pos = np.mean([camera.pos for camera in renderer.scene.camera], axis=0)
  z = -np.mean([camera.forward for camera in renderer.scene.camera], axis=0)
  y = np.mean([camera.up for camera in renderer.scene.camera], axis=0)
  rot = np.vstack((np.cross(y, z), y, z))
  fov = model.vis.global_.fovy
  # Translation matrix (4x4).
  translation = np.eye(4)
  translation[0:3, 3] = -pos
  # Rotation matrix (4x4).
  rotation = np.eye(4)
  rotation[0:3, 0:3] = rot
  # Focal transformation matrix (3x4).
  focal_scaling = (1./np.tan(np.deg2rad(fov)/2)) * renderer.height / 2.0
  focal = np.diag([-focal_scaling, focal_scaling, 1.0, 0])[0:3, :]
  # Image matrix (3x3).
  image = np.eye(3)
  image[0, 2] = (renderer.width - 1) / 2.0
  image[1, 2] = (renderer.height - 1) / 2.0
  return image @ focal @ rotation @ translation
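The same pinhole construction can be sanity-checked without a renderer. This toy example (hypothetical intrinsics, independent of MuJoCo) places a camera at the origin looking down -Z and verifies that a point on the optical axis projects to the image center:

```python
import numpy as np

width, height, fovy = 640, 480, 45.0  # hypothetical camera intrinsics

# Camera at the origin, axes aligned with the world (rot = identity).
translation = np.eye(4)
rotation = np.eye(4)
focal_scaling = (1.0 / np.tan(np.deg2rad(fovy) / 2)) * height / 2.0
focal = np.diag([-focal_scaling, focal_scaling, 1.0, 0])[0:3, :]
image = np.eye(3)
image[0, 2] = (width - 1) / 2.0
image[1, 2] = (height - 1) / 2.0
m = image @ focal @ rotation @ translation

# A point 5 m in front of the camera, on the optical axis (homogeneous coords).
xs, ys, s = m @ np.array([0.0, 0.0, -5.0, 1.0])
print(xs / s, ys / s)  # (319.5, 239.5): exactly the image center
```

Dividing by the third (scale) coordinate is the perspective division; any point on the optical axis lands on the principal point regardless of its distance.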
#@title Project from world to camera coordinates {vertical-output: true}
# reset the scene
renderer.update_scene(data)
# Get the world coordinates of the box corners
box_pos = data.geom_xpos[model.geom('red_box').id]
box_mat = data.geom_xmat[model.geom('red_box').id].reshape(3, 3)
box_size = model.geom_size[model.geom('red_box').id]
offsets = np.array([-1, 1]) * box_size[:, None]
xyz_local = np.stack(list(itertools.product(*offsets))).T
xyz_global = box_pos[:, None] + box_mat @ xyz_local
# Camera matrices multiply homogenous [x, y, z, 1] vectors.
corners_homogeneous = np.ones((4, xyz_global.shape[1]), dtype=float)
corners_homogeneous[:3, :] = xyz_global
# Get the camera matrix.
m = compute_camera_matrix(renderer, data)
# Project world coordinates into pixel space. See:
# https://en.wikipedia.org/wiki/3D_projection#Mathematical_formula
xs, ys, s = m @ corners_homogeneous
# x and y are in the pixel coordinate system.
x = xs / s
y = ys / s
# Render the camera view and overlay the projected corner coordinates.
pixels = renderer.render()
fig, ax = plt.subplots(1, 1)
ax.imshow(pixels)
ax.plot(x, y, '+', c='w')
ax.set_axis_off()
5.2 Modifying the scene
Let's add some arbitrary geoms to the mjvScene.
def get_geom_speed(model, data, geom_name):
  """Returns the speed of a geom."""
  geom_vel = np.zeros(6)
  geom_type = mujoco.mjtObj.mjOBJ_GEOM
  geom_id = data.geom(geom_name).id
  mujoco.mj_objectVelocity(model, data, geom_type, geom_id, geom_vel, 0)
  return np.linalg.norm(geom_vel)

def add_visual_capsule(scene, point1, point2, radius, rgba):
  """Adds one capsule to an mjvScene."""
  if scene.ngeom >= scene.maxgeom:
    return
  scene.ngeom += 1  # increment ngeom
  # initialise a new capsule, add it to the scene using mjv_makeConnector
  mujoco.mjv_initGeom(scene.geoms[scene.ngeom-1],
                      mujoco.mjtGeom.mjGEOM_CAPSULE, np.zeros(3),
                      np.zeros(3), np.zeros(9), rgba.astype(np.float32))
  mujoco.mjv_makeConnector(scene.geoms[scene.ngeom-1],
                           mujoco.mjtGeom.mjGEOM_CAPSULE, radius,
                           point1[0], point1[1], point1[2],
                           point2[0], point2[1], point2[2])

# traces of time, position and speed
times = []
positions = []
speeds = []
offset = model.jnt_axis[0]/16  # offset along the joint axis

def modify_scene(scn):
  """Draw position trace, speed modifies width and colors."""
  if len(positions) > 1:
    for i in range(len(positions)-1):
      rgba = np.array((np.clip(speeds[i]/10, 0, 1),
                       np.clip(1-speeds[i]/10, 0, 1),
                       .5, 1.))
      radius = .003*(1+speeds[i])
      point1 = positions[i] + offset*times[i]
      point2 = positions[i+1] + offset*times[i+1]
      add_visual_capsule(scn, point1, point2, radius, rgba)
duration = 6 # (seconds)
framerate = 30 # (Hz)
# Simulate and display video.
frames = []
# Reset state and time.
mujoco.mj_resetData(model, data)
mujoco.mj_forward(model, data)
while data.time < duration:
  # append data to the traces
  positions.append(data.geom_xpos[data.geom("green_sphere").id].copy())
  times.append(data.time)
  speeds.append(get_geom_speed(model, data, "green_sphere"))
  mujoco.mj_step(model, data)
  if len(frames) < data.time * framerate:
    renderer.update_scene(data)
    modify_scene(renderer.scene)
    pixels = renderer.render()
    frames.append(pixels)
media.show_video(frames, fps=framerate)
5.3 Camera control
Cameras can be controlled dynamically in order to achieve cinematic effects. Run the three cells below to see the difference between rendering from a static and a moving camera.
The camera-control code below smoothly transitions between two trajectories: an orbit around a fixed point, and tracking a moving object. The parameter values in the code were obtained by quick iteration over low-resolution videos.
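The smooth hand-off between the two camera trajectories relies on a pair of sigmoidal weights that always sum to one; the mix() helper further below implements it. A minimal standalone numpy sketch of the same idea:

```python
import numpy as np

def mix(time, t0=0.0, width=1.0):
    """Sigmoidal mixing weights (w0, w1) with w0 + w1 == 1."""
    t = (time - t0) / width
    s = 1 / (1 + np.exp(-t))
    return 1 - s, s

# Long before t0 the first trajectory dominates; long after, the second.
w0, w1 = mix(-10.0, t0=0.0, width=0.3)
print(round(w0, 3), round(w1, 3))  # 1.0 0.0
w0, w1 = mix(0.0)
print(w0, w1)                      # 0.5 0.5 at the transition point
```

Because the weights sum to one at every instant, any convex quantity (distance, azimuth, lookat point) blended with them moves smoothly from one trajectory to the other.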
#@title Load the "dominos" model
dominos_xml = """
"""
model = mujoco.MjModel.from_xml_string(dominos_xml)
data = mujoco.MjData(model)
renderer = mujoco.Renderer(model, height=1024, width=1440)
#@title Render from fixed camera
duration = 2.5 # (seconds)
framerate = 60 # (Hz)
# Simulate and display video.
frames = []
mujoco.mj_resetData(model, data) # Reset state and time.
while data.time < duration:
  mujoco.mj_step(model, data)
  if len(frames) < data.time * framerate:
    renderer.update_scene(data, camera='top')
    pixels = renderer.render()
    frames.append(pixels)
media.show_video(frames, fps=framerate)
#@title Render from moving camera
duration = 3 # (seconds)
# find time when box is thrown (speed > 2cm/s)
throw_time = 0.0
mujoco.mj_resetData(model, data)
while data.time < duration and not throw_time:
  mujoco.mj_step(model, data)
  box_speed = np.linalg.norm(data.joint('box').qvel[:3])
  if box_speed > 0.02:
    throw_time = data.time
assert throw_time > 0
def mix(time, t0=0.0, width=1.0):
  """Sigmoidal mixing function."""
  t = (time - t0) / width
  s = 1 / (1 + np.exp(-t))
  return 1 - s, s

def unit_cos(t):
  """Unit cosine sigmoid from (0,0) to (1,1)."""
  return 0.5 - np.cos(np.pi*np.clip(t, 0, 1))/2

def orbit_motion(t):
  """Return orbit trajectory."""
  distance = 0.9
  azimuth = 140 + 100 * unit_cos(t)
  elevation = -30
  lookat = data.geom('floor').xpos.copy()
  return distance, azimuth, elevation, lookat

def track_motion():
  """Return box-track trajectory."""
  distance = 0.08
  azimuth = 280
  elevation = -10
  lookat = data.geom('box').xpos.copy()
  return distance, azimuth, elevation, lookat

def cam_motion():
  """Return sigmoidally-mixed {orbit, box-track} trajectory."""
  d0, a0, e0, l0 = orbit_motion(data.time / throw_time)
  d1, a1, e1, l1 = track_motion()
  mix_time = 0.3
  w0, w1 = mix(data.time, throw_time, mix_time)
  return w0*d0+w1*d1, w0*a0+w1*a1, w0*e0+w1*e1, w0*l0+w1*l1
# Make a camera.
cam = mujoco.MjvCamera()
mujoco.mjv_defaultCamera(cam)
# Simulate and display video.
framerate = 60 # (Hz)
slowdown = 4 # 4x slow-down
mujoco.mj_resetData(model, data)
frames = []
while data.time < duration:
  mujoco.mj_step(model, data)
  if len(frames) < data.time * framerate * slowdown:
    cam.distance, cam.azimuth, cam.elevation, cam.lookat = cam_motion()
    renderer.update_scene(data, cam)
    pixels = renderer.render()
    frames.append(pixels)
media.show_video(frames, fps=framerate)