# Fix typo in Anymal C DirectRL environment (#1683)
# Description

This PR fixes a small typo in the code, where "undesired" was accidentally
typed as "undersired".

## Type of change

- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing
functionality to not work as expected)
- [ ] This change requires a documentation update

## Screenshots

N/A

## Checklist

- [x] I have run the [`pre-commit` checks](https://pre-commit.com/) with
`./isaaclab.sh --format`
- [ ] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
- [ ] I have added tests that prove my fix is effective or that my
feature works
- [ ] I have updated the changelog and the corresponding version in the
extension's `config/extension.toml` file
- [ ] I have added my name to the `CONTRIBUTORS.md` or my name already
exists there
T-K-233 authored and hapatel-bdai committed Jan 21, 2025
1 parent 89e9ccc commit de2fe97
Showing 2 changed files with 3 additions and 3 deletions.
```diff
@@ -130,7 +130,7 @@ def _get_rewards(self) -> torch.Tensor:
         air_time = torch.sum((last_air_time - 0.5) * first_contact, dim=1) * (
             torch.norm(self._commands[:, :2], dim=1) > 0.1
         )
-        # undersired contacts
+        # undesired contacts
         net_contact_forces = self._contact_sensor.data.net_forces_w_history
         is_contact = (
             torch.max(torch.norm(net_contact_forces[:, :, self._undesired_contact_body_ids], dim=-1), dim=1)[0] > 1.0
@@ -148,7 +148,7 @@ def _get_rewards(self) -> torch.Tensor:
             "dof_acc_l2": joint_accel * self.cfg.joint_accel_reward_scale * self.step_dt,
             "action_rate_l2": action_rate * self.cfg.action_rate_reward_scale * self.step_dt,
             "feet_air_time": air_time * self.cfg.feet_air_time_reward_scale * self.step_dt,
-            "undesired_contacts": contacts * self.cfg.undersired_contact_reward_scale * self.step_dt,
+            "undesired_contacts": contacts * self.cfg.undesired_contact_reward_scale * self.step_dt,
             "flat_orientation_l2": flat_orientation * self.cfg.flat_orientation_reward_scale * self.step_dt,
         }
         reward = torch.sum(torch.stack(list(rewards.values())), dim=0)
```
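For context on what these hunks touch: the renamed scale feeds the undesired-contact penalty summed into the total reward in `_get_rewards`. Below is a minimal, standalone sketch of that term, assuming a contact-force history tensor of shape `(num_envs, history_len, num_bodies, 3)`; the counting step that produces `contacts` is not visible in the hunk above, so it is reconstructed here, and all concrete values are illustrative.

```python
import torch

# Minimal sketch of the undesired-contact penalty from the diff above.
# Assumed layout: (num_envs, history_len, num_bodies, 3) force history.
num_envs, history_len, num_bodies = 4, 3, 12
net_contact_forces = torch.randn(num_envs, history_len, num_bodies, 3)
undesired_contact_body_ids = [2, 5]      # hypothetical body indices (e.g. thighs)
undesired_contact_reward_scale = -1.0    # value set in the config hunk below
step_dt = 0.005                          # hypothetical physics step duration

# A body is "in contact" if its peak force norm over the history window
# exceeds 1.0 N -- the same threshold used in _get_rewards above.
force_norms = torch.norm(net_contact_forces[:, :, undesired_contact_body_ids], dim=-1)
is_contact = torch.max(force_norms, dim=1)[0] > 1.0  # (num_envs, num_ids)

# Assumed counting step (not shown in the hunk): penalize each environment
# by the number of undesired bodies in contact, times the negative scale.
contacts = torch.sum(is_contact, dim=1).float()
penalty = contacts * undesired_contact_reward_scale * step_dt
print(penalty.shape)  # torch.Size([4])
```

Because the scale is negative, more undesired contacts lower the reward; the fix is purely a rename and does not change this arithmetic.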
```diff
@@ -107,7 +107,7 @@ class AnymalCFlatEnvCfg(DirectRLEnvCfg):
     joint_accel_reward_scale = -2.5e-7
     action_rate_reward_scale = -0.01
     feet_air_time_reward_scale = 0.5
-    undersired_contact_reward_scale = -1.0
+    undesired_contact_reward_scale = -1.0
     flat_orientation_reward_scale = -5.0
 
 
```
