Replies: 2 comments
-
Thank you for posting this. We have had some delay with the holidays. We should be able to reply to this soon.
-
Are you using a manager-based or direct environment? Can you share some of your code? For something like this, you might also want to join us on Discord. There's a group of people all trying to do interesting things with Isaac Lab, and you might find like-minded people who want to work with you on more experimental stuff like this:
-
Hello,
I am currently trying to use an asset that combines deformable and articulation components in reinforcement learning through Isaac Lab. I came across a post in the NVIDIA Developer Community stating that this case is currently unsupported, but I believe exploring this scenario could yield interesting results. While searching for ways to enable this in Isaac Lab, I observed different outcomes when spawning the complex asset in the Standalone code versus the RL code.
The complex asset used for testing consists of an articulation made up of two rigid rod links connected by a revolute joint, with a deformable component attached.
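For context, the asset configuration looks roughly like the sketch below. The USD path, prim names, and joint names are placeholders, and the imports assume the omni.isaac.lab module layout, so please read it as an illustration of the structure rather than my exact code:

```python
# Rough sketch of the combined asset configuration (placeholder names/paths).
import omni.isaac.lab.sim as sim_utils
from omni.isaac.lab.actuators import ImplicitActuatorCfg
from omni.isaac.lab.assets import ArticulationCfg, DeformableObjectCfg

# Articulation: two rigid rod links joined by a revolute joint, loaded from USD.
ROD_ARTICULATION_CFG = ArticulationCfg(
    prim_path="{ENV_REGEX_NS}/Robot",  # resolved per environment
    spawn=sim_utils.UsdFileCfg(usd_path="/path/to/rod_with_deformable.usd"),
    init_state=ArticulationCfg.InitialStateCfg(pos=(0.0, 0.0, 0.5)),
    actuators={
        "revolute": ImplicitActuatorCfg(
            joint_names_expr=["revolute_joint"],  # placeholder joint name
            stiffness=0.0,
            damping=10.0,
        ),
    },
)

# Deformable component attached to one of the links. The deformable prim and
# the attachment are authored inside the same USD file, so nothing extra is
# spawned here; this config only wraps the existing prim.
ROD_DEFORMABLE_CFG = DeformableObjectCfg(
    prim_path="{ENV_REGEX_NS}/Robot/deformable",  # placeholder sub-prim path
    spawn=None,
)
```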
In the RL code (train.py), when I created multiple environments (num_envs > 1) with this asset, the following error occurred, and the simulation functioned properly in only one environment while failing in the others. (Related video attachment below)
Replication of this type is not supported: ~, prim path: ~/deformable
Replication of this type is not supported: ~, prim path: ~/attachment
Video.1.mp4
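For reference, the RL-side scene is put together roughly as in the sketch below (it reuses the asset configs from the sketch above; names and numbers are placeholders). As far as I can tell, the InteractiveScene clones env_0 with physics replication enabled by default, and that replication step appears to be what rejects the deformable and attachment prims:

```python
# Rough sketch of the RL-side scene configuration (placeholder names).
from omni.isaac.lab.assets import ArticulationCfg, DeformableObjectCfg
from omni.isaac.lab.scene import InteractiveSceneCfg
from omni.isaac.lab.utils import configclass


@configclass
class RodSceneCfg(InteractiveSceneCfg):
    """Scene that places the rod articulation and its deformable part in every environment."""

    # Asset configs from the sketch above.
    robot: ArticulationCfg = ROD_ARTICULATION_CFG
    deformable: DeformableObjectCfg = ROD_DEFORMABLE_CFG


# The environment config instantiates the scene roughly like this.
# replicate_physics=True is the default; my suspicion is that this
# physics-replication step is what prints the errors above for the
# deformable and attachment prims.
scene_cfg = RodSceneCfg(num_envs=64, env_spacing=2.5, replicate_physics=True)
```

If that is the cause, I would expect setting replicate_physics=False (so environments are copied without the physics-replication fast path) to change the behavior, but I have not confirmed this.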
In contrast, when spawning an asset with the same structure in multiple environments using the Standalone code, the simulation ran without any issues under the same conditions. (Related video attachment below)
Video.2.mp4
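For comparison, the Standalone code follows roughly the usual tutorial pattern sketched below: each environment origin is a plain Xform and the combined asset is spawned under every origin through the spawner's regex prim path, so the cloner / physics-replication path from the RL workflow is never used. Paths are placeholders and the skeleton is simplified from what I actually run:

```python
# Rough skeleton of the Standalone script (placeholder paths, simplified).
from omni.isaac.lab.app import AppLauncher

# Launch the simulation app before importing anything else from Isaac Lab.
app_launcher = AppLauncher(headless=False)
simulation_app = app_launcher.app

import omni.isaac.core.utils.prims as prim_utils
import omni.isaac.lab.sim as sim_utils
from omni.isaac.lab.sim import SimulationContext


def main():
    sim = SimulationContext(sim_utils.SimulationCfg(dt=1.0 / 120.0))

    # One Xform per environment, laid out along the x-axis.
    for i in range(4):
        prim_utils.create_prim(f"/World/Origin{i}", "Xform", translation=(2.5 * i, 0.0, 0.0))

    # Spawn the articulation + deformable USD under every origin via the regex path.
    asset_cfg = sim_utils.UsdFileCfg(usd_path="/path/to/rod_with_deformable.usd")
    asset_cfg.func("/World/Origin.*/Robot", asset_cfg)

    sim.reset()
    while simulation_app.is_running():
        sim.step()


if __name__ == "__main__":
    main()
```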
Below are my questions regarding this phenomenon:
1. I understand that both code paths are based on the same APIs. Could the difference in outcomes be related to how the RL code creates state tensors for use in observations and rewards?
2. Would it be difficult to modify the RL code so that this simulation works? If it is feasible, which parts should I adjust?
3. Would it be feasible to integrate a reinforcement learning framework such as rl_games into the Standalone code?
Thank you for taking the time to read my inquiry.