Commit: docs
StoneT2000 committed Mar 3, 2024
1 parent 1cf69c5 commit d50fc86
Showing 4 changed files with 24 additions and 7 deletions.
3 changes: 2 additions & 1 deletion docs/requirements.txt
@@ -9,4 +9,5 @@ sphinx-autodoc-typehints
sphinx_copybutton
# Markdown parser
myst-parser
sphinx-subfigure
sphinx-subfigure
sphinxcontrib-video
19 changes: 15 additions & 4 deletions docs/source/data_collection/teleoperation.md
@@ -1,25 +1,36 @@
# Teleoperation

There are a number of teleoperation systems provided by ManiSkill that help collect demonstration data in environments. Each system is detailed below with how to use it and a demo video. We also detail what hardware requirements are necessary, how usable the system is, and the limitations of the system
There are a number of teleoperation systems provided by ManiSkill that help collect demonstration data in environments. Each system is detailed below with how to use it and a demo video. We also detail what hardware requirements are necessary, how usable the system is, and the limitations of the system.

At the moment there is the intuitive click+drag system; systems using e.g. a space mouse or a VR headset will come soon.

## Click+Drag System

Requirements: Display, mouse, keyboard

Usability: Extremely easy to generate fine-grained demonstrations

Limitations: Limited to only solving less dynamical tasks like picking up a cube. Tasks like throwing a cube would not be possible.
Limitations: Limited to only solving less dynamical tasks with two-finger grippers like picking up a cube. Tasks like throwing a cube would not be possible.

To start the system, run:
```bash
python -m mani_skill2.examples.interactive_teleop -e "PickCube-v1"
```

You can then drag the end-effector of the robot arm to any position and rotation and press "n" on the keyboard to generate a trajectory to that pose (computed via motion planning). Each time, the system also prints whether the task is currently solved.

You can press "g" to toggle the gripper to be closing or opening.

To finish collecting one trajectory and to move on to another, simply press "c" which will save the last trajectory.

To stop data collection press "q" to quit.

You can always press "h" to bring up a help menu describing the keyboard commands.

## Space Mouse
<!-- TODO (stao): discuss checkpointing method, help button -->

<!-- ## Space Mouse -->
<!--
## Meta Quest 3
Requirements: Meta Quest 3
Requirements: Meta Quest 3 -->
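The click+drag workflow documented above can be pictured as a simple key-dispatch loop. The sketch below is hypothetical: `plan_to_pose`, `save_trajectory`, `viewer.get_target_pose`, and `env.evaluate` are illustrative stand-ins, not confirmed ManiSkill API.

```python
# Hypothetical sketch of the click+drag teleoperation loop described above.
# plan_to_pose / save_trajectory / get_target_pose are illustrative, not real ManiSkill API.
KEY_HELP = {
    "n": "plan and execute a motion to the dragged end-effector pose",
    "g": "toggle the gripper between closing and opening",
    "c": "save the current trajectory and start collecting a new one",
    "q": "quit data collection",
    "h": "print this help menu",
}

def teleop_loop(env, viewer):
    gripper_closed = False
    while True:
        env.render_human()
        if viewer.window.key_press("n"):
            # Generate and execute a motion-planned trajectory to the dragged pose.
            plan_to_pose(env, viewer.get_target_pose())
            print("solved:", env.evaluate())
        elif viewer.window.key_press("g"):
            gripper_closed = not gripper_closed
        elif viewer.window.key_press("c"):
            save_trajectory(env)
        elif viewer.window.key_press("q"):
            break
        elif viewer.window.key_press("h"):
            for key, desc in KEY_HELP.items():
                print(f"  {key}: {desc}")
```

The dispatch order does not matter here since one key is handled per frame; the real implementation in `mani_skill2/examples/interactive_teleop.py` uses the same `viewer.window.key_press` pattern, as the diff below shows.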
5 changes: 4 additions & 1 deletion mani_skill2/examples/interactive_teleop.py
@@ -121,7 +121,10 @@ def solve(env: BaseEnv, debug=False, vis=False):

env.render_human()
execute_current_pose = False
if viewer.window.key_press("k"):
if viewer.window.key_press("h"):
# TODO (stao): print help menu
pass
elif viewer.window.key_press("k"):
print("Saving checkpoint")
last_checkpoint_state = env.get_state()
elif viewer.window.key_press("l"):
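One way the `TODO` in this hunk could be filled in is a static help string printed on "h". This is a hedged sketch: the bindings listed are only those visible in this commit ("h" and "k" from the hunk, plus the keys documented in teleoperation.md), and the formatting is illustrative.

```python
# A possible help menu for the "h" key press above. The bindings listed are
# the ones visible in this commit; the exact text/format is an assumption.
HELP_TEXT = """\
Keyboard commands:
  h - print this help menu
  n - execute a motion plan to the dragged end-effector pose
  g - toggle the gripper between closing and opening
  c - save the current trajectory and start collecting a new one
  q - quit data collection
  k - save a checkpoint of the environment state
"""

def print_help():
    print(HELP_TEXT)
```

With this in place, the `pass` under the `key_press("h")` branch would become a call to `print_help()`.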
4 changes: 3 additions & 1 deletion setup.py
@@ -1,6 +1,7 @@
from mani_skill2 import __version__
from setuptools import find_packages, setup

from mani_skill2 import __version__

long_description = """ManiSkill2 is a unified benchmark for learning generalizable robotic manipulation skills powered by [SAPIEN](https://sapien.ucsd.edu/). **It features 20 out-of-box task families with 2000+ diverse object models and 4M+ demonstration frames**. Moreover, it empowers fast visual input learning algorithms so that **a CNN-based policy can collect samples at about 2000 FPS with 1 GPU and 16 processes on a workstation**. The benchmark can be used to study a wide range of algorithms: 2D & 3D vision-based reinforcement learning, imitation learning, sense-plan-act, etc.
Please refer to our [documentation](https://haosulab.github.io/ManiSkill2) for more information."""
@@ -64,6 +65,7 @@
# Markdown parser
"myst-parser",
"sphinx-subfigure",
"sphinxcontrib-video",
],
},
)
