[Question] How to get force/torque for task space force control? #1092
Replies: 13 comments
-
Hello @okada-masashi, I did the following to get the applied torque values at different joints of the arm, including the end-effector. As `SingleArmManipulator` is derived from the `RobotBase` class, it exposes the underlying articulation view through the `articulations` attribute. For example:

```python
robot = SingleArmManipulator(cfg=robot_cfg)
articulation_view = robot.articulations
applied_joint_efforts = articulation_view.get_applied_joint_efforts()  # can pass joint indices as per requirement
```

Note: you might get zero values in return if you are using the Implicit Actuator Model together with the task-space controller `differential_inverse_kinematics`.
-
Hi, if you want to obtain the joint torque values with the implicit actuator model (i.e. the PhysX PD controller), there is a flag called `enable_dof_force_sensors`. If you need force sensing on a link of the robot, you can use the contact forces via the `RigidContactView` class. This quantity is what the prior work, Factory, used in Isaac Gym for task-space control. However, that only gives the normal contact force and not the torque acting on the link. We have made a feature request to the PhysX team to provide 6-DoF wrench sensing on the links. We will have to check whether that makes it into the coming release of Isaac Sim.
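For reference, a minimal sketch of how the flag is meant to be passed, assuming the `omni.isaac.core` `ArticulationView` API from Isaac Sim 2022.x (the flag is a constructor argument, so it must be set before the view is initialized):

```python
# minimal sketch, assuming the omni.isaac.core ArticulationView API in Isaac Sim 2022.x;
# the flag must be passed at construction time, before the view is initialized
from omni.isaac.core.articulations import ArticulationView

arm_view = ArticulationView(
    prim_paths_expr="/World/envs/env_.*/Robot",  # hypothetical prim path expression
    name="arm_view",
    enable_dof_force_sensors=True,
)
arm_view.initialize()
# after stepping the simulation, query the joint efforts
efforts = arm_view.get_applied_joint_efforts()  # expected shape: (num_envs, num_dof)
```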
-
Hello @Mayankm96, thanks for pointing out the way to obtain the joint torque values with the implicit actuator model. I tried enabling the `enable_dof_force_sensors` flag, but the returned torques are still zero. Edited code script that I have tried:

```python
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.kit import SimulationApp
# add argparse arguments
parser = argparse.ArgumentParser("Welcome to Orbit: Omniverse Robotics Environments!")
parser.add_argument("--headless", action="store_true", default=False, help="Force display off at all times.")
parser.add_argument("--robot", type=str, default="franka_panda", help="Name of the robot.")
parser.add_argument("--num_envs", type=int, default=128, help="Number of environments to spawn.")
args_cli = parser.parse_args()
# launch omniverse app
config = {"headless": args_cli.headless}
simulation_app = SimulationApp(config)
"""Rest everything follows."""
import torch
import omni.isaac.core.utils.prims as prim_utils
from omni.isaac.cloner import GridCloner
from omni.isaac.core.simulation_context import SimulationContext
from omni.isaac.core.utils.carb import set_carb_setting
from omni.isaac.core.utils.viewports import set_camera_view
import omni.isaac.orbit.utils.kit as kit_utils
from omni.isaac.orbit.controllers.differential_inverse_kinematics import (
DifferentialInverseKinematics,
DifferentialInverseKinematicsCfg,
)
from omni.isaac.orbit.markers import StaticMarker
from omni.isaac.orbit.robots.config.franka import FRANKA_PANDA_ARM_WITH_PANDA_HAND_CFG
from omni.isaac.orbit.robots.config.universal_robots import UR10_CFG
from omni.isaac.orbit.robots.single_arm import SingleArmManipulator
from omni.isaac.orbit.utils.assets import ISAAC_NUCLEUS_DIR
"""
Main
"""
def main():
"""Spawns a single-arm manipulator and applies commands through inverse kinematics control."""
# Load kit helper
sim = SimulationContext(physics_dt=0.01, rendering_dt=0.01, backend="torch", device="cuda:0")
# Set main camera
set_camera_view([2.5, 2.5, 2.5], [0.0, 0.0, 0.0])
# Enable GPU pipeline and flatcache
if sim.get_physics_context().use_gpu_pipeline:
sim.get_physics_context().enable_flatcache(True)
# Enable hydra scene-graph instancing
set_carb_setting(sim._settings, "/persistent/omnihydra/useSceneGraphInstancing", True)
# Create interface to clone the scene
cloner = GridCloner(spacing=2.0)
cloner.define_base_env("/World/envs")
# Everything under the namespace "/World/envs/env_0" will be cloned
prim_utils.define_prim("/World/envs/env_0")
# Spawn things into stage
# Markers
ee_marker = StaticMarker("/Visuals/ee_current", count=args_cli.num_envs, scale=(0.1, 0.1, 0.1))
goal_marker = StaticMarker("/Visuals/ee_goal", count=args_cli.num_envs, scale=(0.1, 0.1, 0.1))
# Ground-plane
kit_utils.create_ground_plane("/World/defaultGroundPlane", z_position=-1.05)
# Lights-1
prim_utils.create_prim(
"/World/Light/GreySphere",
"SphereLight",
translation=(4.5, 3.5, 10.0),
attributes={"radius": 2.5, "intensity": 600.0, "color": (0.75, 0.75, 0.75)},
)
# Lights-2
prim_utils.create_prim(
"/World/Light/WhiteSphere",
"SphereLight",
translation=(-4.5, 3.5, 10.0),
attributes={"radius": 2.5, "intensity": 600.0, "color": (1.0, 1.0, 1.0)},
)
# -- Table
table_usd_path = f"{ISAAC_NUCLEUS_DIR}/Props/Mounts/SeattleLabTable/table_instanceable.usd"
prim_utils.create_prim("/World/envs/env_0/Table", usd_path=table_usd_path)
# -- Robot
# resolve robot config from command-line arguments
if args_cli.robot == "franka_panda":
robot_cfg = FRANKA_PANDA_ARM_WITH_PANDA_HAND_CFG
elif args_cli.robot == "ur10":
robot_cfg = UR10_CFG
else:
raise ValueError(f"Robot {args_cli.robot} is not supported. Valid: franka_panda, ur10")
# configure robot settings to use IK controller
robot_cfg.data_info.enable_jacobian = True
robot_cfg.rigid_props.disable_gravity = True
# spawn robot
robot = SingleArmManipulator(cfg=robot_cfg)
robot.spawn("/World/envs/env_0/Robot", translation=(0.0, 0.0, 0.0))
# Clone the scene
num_envs = args_cli.num_envs
envs_prim_paths = cloner.generate_paths("/World/envs/env", num_paths=num_envs)
envs_positions = cloner.clone(source_prim_path="/World/envs/env_0", prim_paths=envs_prim_paths)
# convert environment positions to torch tensor
envs_positions = torch.tensor(envs_positions, dtype=torch.float, device=sim.device)
# filter collisions within each environment instance
physics_scene_path = sim.get_physics_context().prim_path
cloner.filter_collisions(
physics_scene_path, "/World/collisions", envs_prim_paths, global_paths=["/World/defaultGroundPlane"]
)
# Create controller
# the controller takes as command type: {position/pose}_{abs/rel}
ik_control_cfg = DifferentialInverseKinematicsCfg(
command_type="pose_abs",
ik_method="dls",
position_offset=robot.cfg.ee_info.pos_offset,
rotation_offset=robot.cfg.ee_info.rot_offset,
)
ik_controller = DifferentialInverseKinematics(ik_control_cfg, num_envs, sim.device)
# Play the simulator
sim.reset()
# Acquire handles
# Initialize handles
robot.initialize("/World/envs/env_.*/Robot")
ik_controller.initialize()
# Reset states
robot.reset_buffers()
ik_controller.reset_idx()
#### changes
# get the articulation view instance from robot
articulation_view = robot.articulations
articulation_view.enable_dof_force_sensors = True # enable_dof_force_sensors
####
# Now we are ready!
print("[INFO]: Setup complete...")
# Create buffers to store actions
ik_commands = torch.zeros(robot.count, ik_controller.num_actions, device=robot.device)
robot_actions = torch.ones(robot.count, robot.num_actions, device=robot.device)
# Set end effector goals
# Define goals for the arm
ee_goals = [
[0.5, 0.5, 0.7, 0.707, 0, 0.707, 0],
[0.5, -0.4, 0.6, 0.707, 0.707, 0.0, 0.0],
[0.5, 0, 0.5, 0.0, 1.0, 0.0, 0.0],
]
ee_goals = torch.tensor(ee_goals, device=sim.device)
# Track the given command
current_goal_idx = 0
ik_commands[:] = ee_goals[current_goal_idx]
# Define simulation stepping
sim_dt = sim.get_physics_dt()
# episode counter
sim_time = 0.0
count = 0
# Note: We need to update buffers before the first step for the controller.
robot.update_buffers(sim_dt)
# Simulate physics
while simulation_app.is_running():
# If simulation is stopped, then exit.
if sim.is_stopped():
break
# If simulation is paused, then skip.
if not sim.is_playing():
sim.step(render=not args_cli.headless)
continue
# reset
if count % 150 == 0:
# reset time
count = 0
sim_time = 0.0
# reset dof state
dof_pos, dof_vel = robot.get_default_dof_state()
robot.set_dof_state(dof_pos, dof_vel)
robot.reset_buffers()
# reset controller
ik_controller.reset_idx()
# reset actions
ik_commands[:] = ee_goals[current_goal_idx]
# change goal
current_goal_idx = (current_goal_idx + 1) % len(ee_goals)
# set the controller commands
ik_controller.set_command(ik_commands)
# compute the joint commands
robot_actions[:, : robot.arm_num_dof] = ik_controller.compute(
robot.data.ee_state_w[:, 0:3] - envs_positions,
robot.data.ee_state_w[:, 3:7],
robot.data.ee_jacobian,
robot.data.arm_dof_pos,
)
# in some cases the zero action correspond to offset in actuators
# so we need to subtract these over here so that they can be added later on
arm_command_offset = robot.data.actuator_pos_offset[:, : robot.arm_num_dof]
# offset actuator command with position offsets
# note: valid only when doing position control of the robot
robot_actions[:, : robot.arm_num_dof] -= arm_command_offset
# apply actions
robot.apply_action(robot_actions)
# perform step
sim.step(render=not args_cli.headless)
#### changes
print(f"Applied Joint Torques: {articulation_view.get_applied_joint_efforts()}")
####
# update sim-time
sim_time += sim_dt
count += 1
# note: to deal with timeline events such as stopping, we need to check if the simulation is playing
if sim.is_playing():
# update buffers
robot.update_buffers(sim_dt)
# update marker positions
ee_marker.set_world_poses(robot.data.ee_state_w[:, 0:3], robot.data.ee_state_w[:, 3:7])
goal_marker.set_world_poses(ik_commands[:, 0:3] + envs_positions, ik_commands[:, 3:7])
if __name__ == "__main__":
# Run IK example with Manipulator
main()
# Close the simulator
simulation_app.close()
```

Am I doing something wrong?
-
Hi @kumar-sanjeeev, looking at the code, it seems that the Isaac Sim wrapper does not return these values through `get_applied_joint_efforts()`. Can you try this instead for getting the joint efforts:

```python
# directly calls the underlying physx view to get the quantities
articulation_view._physics_view.get_force_sensor_forces()
```
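If sensors are present on the articulation, my understanding is that this call returns a per-sensor wrench tensor, roughly along these lines:

```python
# expected to return shape (num_articulations, num_sensors, 6): force (xyz) + torque (xyz)
wrenches = articulation_view._physics_view.get_force_sensor_forces()
ee_wrench = wrenches[:, -1, :]  # hypothetical: assumes the last sensor sits on the end-effector link
```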
-
Hi @Mayankm96, I tried directly calling the PhysX view to get the joint efforts, but I am getting the following error while running the code:

```
File "sanju_ws/playing_with_implicit_actuator.py", line 230, in <module>
    main()
File "sanju_ws/playing_with_implicit_actuator.py", line 138, in main
    articulation_view._physics_view.get_force_sensor_forces()
File "/home/ksu1rng/.local/share/ov/pkg/isaac_sim-2022.2.1/kit/extsPhysics/omni.physics.tensors-104.2.4-5.1/omni/physics/tensors/impl/api.py", line 512, in get_force_sensor_forces
    raise Exception("Articulation view contains no force sensors")
Exception: Articulation view contains no force sensors
```

While looking for the root cause of this issue, I found the Force Sensor documentation, where a similar approach was suggested. However, it was also mentioned to first do this step: Add > Physics > Articulation Force Sensor before calling `get_force_sensor_forces()`.
-
Can you pass the flag to enable the sensor?
-
I tried that and changed my local code to enable the sensor, but I am still getting the same error:

```
Traceback (most recent call last):
  File "sanju_ws/playing_with_implicit_actuator.py", line 231, in <module>
    main()
  File "sanju_ws/playing_with_implicit_actuator.py", line 215, in main
    print(f"Applied Joint Torques: {articulation_view._physics_view.get_force_sensor_forces()}")
  File "/home/ksu1rng/.local/share/ov/pkg/isaac_sim-2022.2.1/kit/extsPhysics/omni.physics.tensors-104.2.4-5.1/omni/physics/tensors/impl/api.py", line 512, in get_force_sensor_forces
    raise Exception("Articulation view contains no force sensors")
```

Following script used to check functionality:

```python
"""Launch Isaac Sim Simulator first."""
import argparse
from omni.isaac.kit import SimulationApp
# add argparse arguments
parser = argparse.ArgumentParser("Welcome to Orbit: Omniverse Robotics Environments!")
parser.add_argument("--headless", action="store_true", default=False, help="Force display off at all times.")
parser.add_argument("--robot", type=str, default="franka_panda", help="Name of the robot.")
parser.add_argument("--num_envs", type=int, default=128, help="Number of environments to spawn.")
args_cli = parser.parse_args()
# launch omniverse app
config = {"headless": args_cli.headless}
simulation_app = SimulationApp(config)
"""Rest everything follows."""
import torch
import omni.isaac.core.utils.prims as prim_utils
from omni.isaac.cloner import GridCloner
from omni.isaac.core.simulation_context import SimulationContext
from omni.isaac.core.utils.carb import set_carb_setting
from omni.isaac.core.utils.viewports import set_camera_view
import omni.isaac.orbit.utils.kit as kit_utils
from omni.isaac.orbit.controllers.differential_inverse_kinematics import (
DifferentialInverseKinematics,
DifferentialInverseKinematicsCfg,
)
from omni.isaac.orbit.markers import StaticMarker
from omni.isaac.orbit.robots.config.franka import FRANKA_PANDA_ARM_WITH_PANDA_HAND_CFG
from omni.isaac.orbit.robots.config.universal_robots import UR10_CFG
from omni.isaac.orbit.robots.single_arm import SingleArmManipulator
from omni.isaac.orbit.utils.assets import ISAAC_NUCLEUS_DIR
"""
Main
"""
def main():
"""Spawns a single-arm manipulator and applies commands through inverse kinematics control."""
# Load kit helper
sim = SimulationContext(physics_dt=0.01, rendering_dt=0.01, backend="torch", device="cuda:0")
# Set main camera
set_camera_view([2.5, 2.5, 2.5], [0.0, 0.0, 0.0])
# Enable GPU pipeline and flatcache
if sim.get_physics_context().use_gpu_pipeline:
sim.get_physics_context().enable_flatcache(True)
# Enable hydra scene-graph instancing
set_carb_setting(sim._settings, "/persistent/omnihydra/useSceneGraphInstancing", True)
# Create interface to clone the scene
cloner = GridCloner(spacing=2.0)
cloner.define_base_env("/World/envs")
# Everything under the namespace "/World/envs/env_0" will be cloned
prim_utils.define_prim("/World/envs/env_0")
# Spawn things into stage
# Markers
ee_marker = StaticMarker("/Visuals/ee_current", count=args_cli.num_envs, scale=(0.1, 0.1, 0.1))
goal_marker = StaticMarker("/Visuals/ee_goal", count=args_cli.num_envs, scale=(0.1, 0.1, 0.1))
# Ground-plane
kit_utils.create_ground_plane("/World/defaultGroundPlane", z_position=-1.05)
# Lights-1
prim_utils.create_prim(
"/World/Light/GreySphere",
"SphereLight",
translation=(4.5, 3.5, 10.0),
attributes={"radius": 2.5, "intensity": 600.0, "color": (0.75, 0.75, 0.75)},
)
# Lights-2
prim_utils.create_prim(
"/World/Light/WhiteSphere",
"SphereLight",
translation=(-4.5, 3.5, 10.0),
attributes={"radius": 2.5, "intensity": 600.0, "color": (1.0, 1.0, 1.0)},
)
# -- Table
table_usd_path = f"{ISAAC_NUCLEUS_DIR}/Props/Mounts/SeattleLabTable/table_instanceable.usd"
prim_utils.create_prim("/World/envs/env_0/Table", usd_path=table_usd_path)
# -- Robot
# resolve robot config from command-line arguments
if args_cli.robot == "franka_panda":
robot_cfg = FRANKA_PANDA_ARM_WITH_PANDA_HAND_CFG
elif args_cli.robot == "ur10":
robot_cfg = UR10_CFG
else:
raise ValueError(f"Robot {args_cli.robot} is not supported. Valid: franka_panda, ur10")
# configure robot settings to use IK controller
robot_cfg.data_info.enable_jacobian = True
robot_cfg.rigid_props.disable_gravity = True
# spawn robot
robot = SingleArmManipulator(cfg=robot_cfg)
robot.spawn("/World/envs/env_0/Robot", translation=(0.0, 0.0, 0.0))
# Clone the scene
num_envs = args_cli.num_envs
envs_prim_paths = cloner.generate_paths("/World/envs/env", num_paths=num_envs)
envs_positions = cloner.clone(source_prim_path="/World/envs/env_0", prim_paths=envs_prim_paths)
# convert environment positions to torch tensor
envs_positions = torch.tensor(envs_positions, dtype=torch.float, device=sim.device)
# filter collisions within each environment instance
physics_scene_path = sim.get_physics_context().prim_path
cloner.filter_collisions(
physics_scene_path, "/World/collisions", envs_prim_paths, global_paths=["/World/defaultGroundPlane"]
)
# Create controller
# the controller takes as command type: {position/pose}_{abs/rel}
ik_control_cfg = DifferentialInverseKinematicsCfg(
command_type="pose_abs",
ik_method="dls",
position_offset=robot.cfg.ee_info.pos_offset,
rotation_offset=robot.cfg.ee_info.rot_offset,
)
ik_controller = DifferentialInverseKinematics(ik_control_cfg, num_envs, sim.device)
# Play the simulator
sim.reset()
# Acquire handles
# Initialize handles
robot.initialize("/World/envs/env_.*/Robot")
ik_controller.initialize()
# Reset states
robot.reset_buffers()
ik_controller.reset_idx()
#### changes
# get the articulation view instance from robot
articulation_view = robot.articulations
####
# Now we are ready!
print("[INFO]: Setup complete...")
# Create buffers to store actions
ik_commands = torch.zeros(robot.count, ik_controller.num_actions, device=robot.device)
robot_actions = torch.ones(robot.count, robot.num_actions, device=robot.device)
# Set end effector goals
# Define goals for the arm
ee_goals = [
[0.5, 0.5, 0.7, 0.707, 0, 0.707, 0],
[0.5, -0.4, 0.6, 0.707, 0.707, 0.0, 0.0],
[0.5, 0, 0.5, 0.0, 1.0, 0.0, 0.0],
]
ee_goals = torch.tensor(ee_goals, device=sim.device)
# Track the given command
current_goal_idx = 0
ik_commands[:] = ee_goals[current_goal_idx]
# Define simulation stepping
sim_dt = sim.get_physics_dt()
# episode counter
sim_time = 0.0
count = 0
# Note: We need to update buffers before the first step for the controller.
robot.update_buffers(sim_dt)
# Simulate physics
while simulation_app.is_running():
# If simulation is stopped, then exit.
if sim.is_stopped():
break
# If simulation is paused, then skip.
if not sim.is_playing():
sim.step(render=not args_cli.headless)
continue
# reset
if count % 150 == 0:
# reset time
count = 0
sim_time = 0.0
# reset dof state
dof_pos, dof_vel = robot.get_default_dof_state()
robot.set_dof_state(dof_pos, dof_vel)
robot.reset_buffers()
# reset controller
ik_controller.reset_idx()
# reset actions
ik_commands[:] = ee_goals[current_goal_idx]
# change goal
current_goal_idx = (current_goal_idx + 1) % len(ee_goals)
# set the controller commands
ik_controller.set_command(ik_commands)
# compute the joint commands
robot_actions[:, : robot.arm_num_dof] = ik_controller.compute(
robot.data.ee_state_w[:, 0:3] - envs_positions,
robot.data.ee_state_w[:, 3:7],
robot.data.ee_jacobian,
robot.data.arm_dof_pos,
)
# in some cases the zero action correspond to offset in actuators
# so we need to subtract these over here so that they can be added later on
arm_command_offset = robot.data.actuator_pos_offset[:, : robot.arm_num_dof]
# offset actuator command with position offsets
# note: valid only when doing position control of the robot
robot_actions[:, : robot.arm_num_dof] -= arm_command_offset
# apply actions
robot.apply_action(robot_actions)
# perform step
sim.step(render=not args_cli.headless)
#### changes
print(f"Applied Joint Torques: {articulation_view.get_applied_joint_efforts()}") # giving the Torque tensors with zero values
print(f"Applied Joint Torques: {articulation_view._physics_view.get_force_sensor_forces()}") # throwing the error
####
# update sim-time
sim_time += sim_dt
count += 1
# note: to deal with timeline events such as stopping, we need to check if the simulation is playing
if sim.is_playing():
# update buffers
robot.update_buffers(sim_dt)
# update marker positions
ee_marker.set_world_poses(robot.data.ee_state_w[:, 0:3], robot.data.ee_state_w[:, 3:7])
goal_marker.set_world_poses(ik_commands[:, 0:3] + envs_positions, ik_commands[:, 3:7])
if __name__ == "__main__":
# Run IK example with Manipulator
main()
# Close the simulator
simulation_app.close()
```
-
Thanks for letting me know about `RigidContactView`. I tried the following:

```python
rigid_contact_view = RigidContactView(
    "/World/envs/env_0/Robot/panda_hand",
    filter_paths_expr=["/World/envs/env_0/Table"],
    apply_rigid_body_api=False,
)
rigid_contact_view.initialize()
```
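For completeness, reading the contact data back from that view should look roughly like this, assuming the `omni.isaac.core` `RigidContactView` API in Isaac Sim 2022.x:

```python
# minimal sketch, assuming the omni.isaac.core RigidContactView API in Isaac Sim 2022.x
# net contact force on each body in the view; expected shape (num_bodies, 3)
net_forces = rigid_contact_view.get_net_contact_forces(dt=sim_dt)  # sim_dt: physics step size (assumed variable)
# per-filter contact force matrix against the filter prims; expected shape (num_bodies, num_filters, 3)
force_matrix = rigid_contact_view.get_contact_force_matrix(dt=sim_dt)
```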
-
@okada-masashi I am checking internally if we can get contact forces between rigid bodies and static colliders on the GPU pipeline. Will let you know once I receive a reply on the possible issue above.
-
Have you figured it out? I'm facing the same problem.
-
Are there any updates on the original issue? I would like to get the applied (or measured) joint torques when using implicit actuators. Adding the sensors both in the articulation base class and in my robot class did not help; querying still did not give me the measured torques.
-
@Mayankm96 It seems several things have changed, and I'm not sure what the best way to get the measured force/torque at a joint is now. The original suggestion of using `get_applied_joint_efforts()` does not seem to work anymore.
-
@Mayankm96 Is it possible to retrieve the 6-DoF wrench sensing on the links now?
-
I am trying to control `SingleArmManipulator` using the task-space force controller. To do so, I need to get the force/torque that is applied to the end-effector, but `SingleArmManipulatorData` does not provide an API to get this information. Is there a way to get the force/torque?
I found the following comment in `legged_robot.py`. Is it impossible to get force/torque due to Isaac Sim limitations?
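For anyone landing here: even without a wrench sensor, task-space force control can be approximated open-loop by mapping a desired end-effector wrench to joint torques through the Jacobian transpose. A minimal sketch, assuming `robot.data.ee_jacobian` has shape `(num_envs, 6, arm_num_dof)` as used elsewhere in this thread:

```python
import torch

# desired end-effector wrench per environment: [fx, fy, fz, tx, ty, tz]
desired_wrench = torch.zeros(robot.count, 6, device=robot.device)
desired_wrench[:, 2] = -5.0  # e.g. press down with 5 N along z

# tau = J^T * F: map the task-space wrench to joint torques
jacobian = robot.data.ee_jacobian  # assumed shape (num_envs, 6, arm_num_dof)
tau = torch.bmm(jacobian.transpose(1, 2), desired_wrench.unsqueeze(-1)).squeeze(-1)
# tau can then be applied as effort commands on the arm joints
```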