v0.4.0: New vectorized environments, improved renderer, hands-on tutorials, pip-installable, better documentation and other enhancements
ManiSkill2 v0.4.0 Release Notes
ManiSkill2 v0.4.0 introduces many new features and makes it easier to get started with robot learning. Here are the highlights:
- New vectorized environments supported by the RPC-based render system (`sapien.RenderServer` and `sapien.RenderClient`).
- The renderer is significantly improved. `sapien.VulkanRenderer` and `sapien.KuafuRenderer` are merged into a unified renderer `sapien.SapienRenderer`.
- Hands-on tutorials are provided for new users. Most of them can run on Google Colab.
- `mani_skill2` is a pip-installable package now!
- Documentation is improved. The descriptions of environments are improved and their thumbnails are added.
- We experimentally support adding visual backgrounds and enabling realistic stereo depth cameras.
- Customization of environments (e.g., configuring cameras) is easier now!
To support these new features, we have refactored ManiSkill2, which leads to many changes between v0.3.0 and v0.4.0. Migration instructions are presented below.
New Features
Installation
Installation becomes easier: `pip install mani-skill2`.
Note that to fully uninstall `mani_skill2`, you might need to manually remove the generated cache files.
We include some examples in the package.
```bash
# Example with random actions. Can be used to test the installation
python -m mani_skill2.examples.demo_random_action

# Interactive play
python -m mani_skill2.examples.demo_manual_control -e PickCube-v0
```
*(Demo video: pip_install.mp4)*
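The same sanity check can also be run from Python. A minimal sketch, assuming the classic 4-tuple `gym` step API used in this release; the `import mani_skill2.envs` line is what registers the environments:

```python
import gym

import mani_skill2.envs  # registers the ManiSkill2 environments with gym

env = gym.make("PickCube-v0")
obs = env.reset()
for _ in range(10):
    action = env.action_space.sample()          # random action, as in demo_random_action
    obs, reward, done, info = env.step(action)  # assumes the 4-tuple gym step API
env.close()
```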
Vectorized Environments
We provide an implementation of vectorized environments (for rigid-body environments) powered by the SAPIEN RPC-based render server-client system.
```python
from mani_skill2.vector import VecEnv, make

env: VecEnv = make("PickCube-v0", num_envs=4)
```
Please see `mani_skill2.examples.demo_vec_env` for an example: `python -m mani_skill2.examples.demo_vec_env -e PickCube-v0 -n 4`.
We provide examples of using our `VecEnv` with Stable-Baselines3 at https://github.com/haosulab/ManiSkill2/blob/main/examples/tutorials/2_reinforcement_learning.ipynb and https://github.com/haosulab/ManiSkill2/tree/main/examples/tutorials/reinforcement-learning.
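Below is a rough sketch of driving the vectorized interface directly. The batched reset/step behavior, the `num_envs` attribute, and the per-sub-environment `action_space` are assumptions modeled on the demo script above rather than a verbatim excerpt from it:

```python
import numpy as np

from mani_skill2.vector import VecEnv, make

# 4 parallel rigid-body sub-environments sharing one RPC-based render server
env: VecEnv = make("PickCube-v0", num_envs=4)
obs = env.reset()  # batched observations, one entry per sub-environment
for _ in range(10):
    # one action per sub-environment, stacked along the first axis
    actions = np.stack([env.action_space.sample() for _ in range(env.num_envs)])
    obs, rewards, dones, infos = env.step(actions)
env.close()
```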
Improved Renderer
It is easier to enable ray tracing:
```python
# Enable ray tracing by changing shaders
env = gym.make("PickCube-v0", shader_dir="rt")
```
v0.3.0 experimentally supports ray tracing via `KuafuRenderer`. v0.4.0 uses `SapienRenderer` instead to provide a more seamless experience. Ray tracing is currently not supported for soft-body environments.
Colab Tutorials
Camera Configurations
It is easier to change camera configurations in v0.4.0:
```python
# Change camera resolutions
env = gym.make(
    "PickCube-v0",
    # only change "base_camera" and keep other cameras for observations unchanged
    camera_cfgs=dict(base_camera=dict(width=320, height=240)),
    # change for all cameras for visualization
    render_camera_cfgs=dict(width=640, height=480),
)
```
To include GT segmentation masks for all cameras in observations, you can set `add_segmentation=True` in `camera_cfgs` when initializing an environment.
```python
# Add segmentation masks to observations (equivalent to adding a Segmentation texture for each camera)
env = gym.make("PickCube-v0", camera_cfgs=dict(add_segmentation=True))
```
v0.3.0 uses `gym.make(..., enable_gt_seg=True)` to enable GT segmentation masks (`visual_seg` and `actor_seg`). v0.4.0 uses `env = gym.make(..., camera_cfgs=dict(add_segmentation=True))`. Besides, there will be `Segmentation` in observations instead, where `Segmentation[..., 0:1] == visual_seg` and `Segmentation[..., 1:2] == actor_seg`.
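To illustrate the new layout, here is a hedged sketch of recovering the old `visual_seg`/`actor_seg` views from the combined mask; the `obs["image"]["base_camera"]["Segmentation"]` nesting is an assumption based on the RGB-D observation documentation referenced later in these notes:

```python
env = gym.make("PickCube-v0", obs_mode="rgbd", camera_cfgs=dict(add_segmentation=True))
obs = env.reset()

# Assumed nesting: obs["image"][camera_name][texture_name]
seg = obs["image"]["base_camera"]["Segmentation"]
visual_seg = seg[..., 0:1]  # mesh-level ids (the old visual_seg)
actor_seg = seg[..., 1:2]   # actor-level ids (the old actor_seg)
```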
More examples can be found at https://github.com/haosulab/ManiSkill2/blob/main/examples/tutorials/customize_environments.ipynb
Visual Background
We experimentally support adding visual backgrounds.
```python
# Download the background asset first: python -m mani_skill2.utils.download_asset minimal_bedroom
env = gym.make("PickCube-v0", bg_name="minimal_bedroom")
```
Stereo Depth Camera
We experimentally support realistic stereo depth cameras.
```python
env = gym.make(
    "PickCube-v0",
    obs_mode="rgbd",
    shader_dir="rt",
    camera_cfgs={"use_stereo_depth": True, "height": 512, "width": 512},
)
```
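A quick way to check the result is to reset the environment and look at the returned depth map. The nesting and camera name below are assumptions; with `use_stereo_depth=True`, the depth should come from the simulated stereo sensor rather than an ideal z-buffer:

```python
obs = env.reset()

# Assumed nesting and camera name; the depth map is computed by the simulated
# stereo depth sensor, so expect realistic noise rather than perfect depth
depth = obs["image"]["base_camera"]["depth"]
print(depth.shape, depth.dtype)
```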
Breaking Changes
Assets
`mani_skill2` is pip-installable. The basic assets (the robot description of the Panda arm, PartNet-Mobility metadata, essential assets for soft-body environments) are located at `mani_skill2/assets`, which are packed into the pip wheel. Task-specific assets need to be downloaded. The extra assets are downloaded to `./data` by default.
- Improve the script to download assets: `python -m mani_skill2.utils.download_asset ${ASSET_UID/ENV_ID}`. The positional argument can be a UID of the asset, an environment ID, or "all". `mani_skill2.utils.download` (v0.3.0) is renamed to `mani_skill2.utils.download_asset` (v0.4.0).
```bash
# Download YCB object models
python -m mani_skill2.utils.download_asset ycb

# Download the required assets for PickSingleYCB-v0, which are just YCB object models
python -m mani_skill2.utils.download_asset PickSingleYCB-v0
```
- When `mani_skill2` is imported, it uses the environment variable `MS2_ASSET_DIR` to decide where assets are stored, which is set to `./data` if not specified. The same variable also takes effect when downloading assets.
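For example, a minimal sketch of pointing ManiSkill2 at a custom asset directory from Python; the path is hypothetical, and exporting the variable in the shell before running the download script works the same way:

```python
import os

# Hypothetical asset location; set it before importing mani_skill2,
# since MS2_ASSET_DIR is read at import time (see the note above)
os.environ["MS2_ASSET_DIR"] = "/data/ms2_assets"

import gym
import mani_skill2.envs  # environments will now look up assets under MS2_ASSET_DIR
```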
Demonstrations
We add a script to download demonstrations: `python -m mani_skill2.utils.download_demo ${ENV_ID} -o ${DEMO_DIR}`.
There are some minor changes to the file structure, but no updates to the data itself.
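For reference, a hedged sketch of peeking into a downloaded demonstration file with `h5py`; the file path is hypothetical, and nothing beyond the HDF5 container format (mentioned in the pull requests below) is assumed about its contents:

```python
import h5py

# Hypothetical path to a downloaded demonstration file
with h5py.File("demos/PickCube-v0/trajectory.h5", "r") as f:
    print(list(f.keys()))  # top-level entries
    f.visit(print)         # full layout: every group and dataset path
```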
Observations
The observation modes that include robot segmentation masks are renamed from `pointcloud_robot_seg` and `rgbd_robot_seg` to `pointcloud+robot_seg` and `rgbd+robot_seg`.
v0.3.0 uses `xxx_robot_seg` while v0.4.0 uses `xxx+robot_seg`. However, the concrete implementation only checks for the keyword `robot_seg`, so existing code will not be broken by this change.
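For example (the environment ID is only for illustration; the naming behavior follows the note above):

```python
# v0.4.0 naming; the old "rgbd_robot_seg" spelling keeps working because
# only the "robot_seg" keyword is checked
env = gym.make("PickCube-v0", obs_mode="rgbd+robot_seg")
```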
For RGB-D observations, we move all camera parameters from the key `image` to a new key `camera_param`. Please see https://haosulab.github.io/ManiSkill2/concepts/observation.html#image for more details.
In v0.3.0, camera parameters are within `obs["image"]`. In v0.4.0, there is a separate key `obs["camera_param"]` for camera parameters. This makes it easier for users to discard camera parameters if they do not need them.
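A rough sketch of the corresponding migration; the key names follow the text above, and the per-camera contents are left opaque since they are documented at the link above:

```python
obs = env.reset()

# v0.3.0: camera parameters lived alongside the images under obs["image"]
# v0.4.0: images and camera parameters are separate top-level keys
images = obs["image"]                # per-camera textures (rgb, depth, ...)
camera_params = obs["camera_param"]  # per-camera parameters

# drop the camera parameters if they are not needed
obs.pop("camera_param", None)
```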
Fixes
- Fix undefined behavior due to `solver_velocity_iterations=0`
- Fix paths to download assets of "PickClutterYCB-v0", "OpenCabinetDrawer-v1", "OpenCabinetDoor-v1"
Pull Requests
- track order in h5py files to make stored 'obs' key data be consistent with order in env observations by @StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/48
- Add python api to download demonstrations and fix gdown bug for large file downloads by @StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/45
- README download path "rigid/soft_body_envs" -> "rigid/soft_body" by @xuanlinli17 in https://github.com/haosulab/ManiSkill2/pull/55
- fix PickClutter bug where obj_start_pos is not an np array by @xuanlinli17 in https://github.com/haosulab/ManiSkill2/pull/58
- v0.4.0: SapienRenderer, vectorized environments, pip wheel and other new features by @Jiayuan-Gu in https://github.com/haosulab/ManiSkill2/pull/57
- gpu runtime specification. by @StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/60
- 0.4.0 patch by @Jiayuan-Gu in https://github.com/haosulab/ManiSkill2/pull/59
Full Changelog: haosulab/ManiSkill2@v0.3.0...v0.4.0