EgoMimic Changes #200

Open · wants to merge 55 commits into base: master

Changes from all commits (55 commits)
ffe6d3b
setup for LIBERO envs
j96w Sep 3, 2023
57fd17c
libero reset from xml
j96w Sep 12, 2023
784f6a0
crop rand, wandb utils, keep just best ckpts
SimarKareer Jan 13, 2024
c420dc7
added dinov2
Matnay Feb 21, 2024
dbd8453
Added DINOv2 to base nets
Matnay Mar 9, 2024
c199541
added dino with lora adaptation
Matnay Apr 1, 2024
643c68d
reverting change in env_robosuite
Matnay Apr 3, 2024
b9c8395
reverting change in env_robosuite
Matnay Apr 3, 2024
1c1977d
reverting change in env_robosuite
Matnay Apr 3, 2024
dd00ec5
Merge pull request #1 from SimarKareer/dev/Vit
Matnay Apr 3, 2024
aae4841
rm env type check
touristCheng Apr 4, 2024
6a65430
dev 2 dataloaders
Apr 14, 2024
943856d
increased rdcc nbytes
SimarKareer Apr 18, 2024
8e5e11a
cherry pick initial act commit
tonyzhaozh Sep 17, 2023
d2f27e6
merged second cherrypick
snasiriany Sep 17, 2023
f219b66
training ACT in Egoplay
SimarKareer May 2, 2024
09f4951
moved act into repo instead of submodule
SimarKareer May 2, 2024
14beb00
training act on robomimic
SimarKareer May 2, 2024
20e127c
merge
SimarKareer May 2, 2024
0ae9c4b
dev 2 dataloaders - single train script
Dhruv2012 May 7, 2024
ea7718c
Merge pull request #3 from SimarKareer/act
SimarKareer May 9, 2024
cb6d738
Merge pull request #2 from SimarKareer/dev/2_dataloaders
SimarKareer May 10, 2024
9c7533d
added ac_key to model object
SimarKareer May 10, 2024
f033b1d
Merge branch 'master' of https://github.com/SimarKareer/robomimic
SimarKareer May 10, 2024
b6d61f9
moved act from robomimic to eplay
SimarKareer May 16, 2024
2068768
Sequence dataset can now interpolate sequence length for low dim keys
SimarKareer May 23, 2024
20e374e
added CropResizeColorRandomizer for color jitter
SimarKareer May 31, 2024
1dde468
Merge pull request #4 from SimarKareer/master
SimarKareer Jun 6, 2024
e3a9163
merged and added ac_key support
SimarKareer Jun 6, 2024
6de48bd
Merge branch 'singlePolicyv2' of https://github.com/SimarKareer/robom…
SimarKareer Jun 6, 2024
638ba2b
Merge pull request #5 from SimarKareer/singlePolicyv2
SimarKareer Jun 11, 2024
498c995
added ranges for color jitter params
SimarKareer Jun 12, 2024
00807ca
Merge pull request #6 from SimarKareer/singlePolicyv2
SimarKareer Jun 12, 2024
4958d8c
black format
SimarKareer Jun 13, 2024
652de8d
integrated dual dataloader ee_pose normalization, tested
SimarKareer Jun 22, 2024
6b96158
Merge pull request #7 from SimarKareer/dualNorm
SimarKareer Jun 22, 2024
014e9b6
obs norm takes in ac_key to normalize correct key
SimarKareer Jul 1, 2024
616e551
GMM works with prestacked actions
SimarKareer Jul 12, 2024
691627e
return patch tokens for ViT instead of linear classifier
SimarKareer Jul 22, 2024
5783f01
removed unintended plt save
SimarKareer Jul 31, 2024
f0dcd7a
move norm stats to cuda if needed
rl2aloha Aug 12, 2024
63ac6ce
normalize actions option
SimarKareer Aug 19, 2024
1e01423
action norm stats
SimarKareer Aug 23, 2024
1dfd46a
unnorm case bug fixed
Dhruv2012 Sep 9, 2024
10cd231
Merge pull request #8 from SimarKareer/bimanual
SimarKareer Oct 18, 2024
a05e1d0
revert black formatting for PR
SimarKareer Oct 18, 2024
d6eed86
revert readme changes
SimarKareer Oct 18, 2024
ffca93b
cleanup unused changes
SimarKareer Oct 21, 2024
16a47f5
removed seq_length_to_load
SimarKareer Oct 22, 2024
a70b19a
formatting differences
SimarKareer Oct 22, 2024
0932f80
changes to add imagenet normalize
ryanthecreator Nov 26, 2024
1d16a96
imagenet normalization
ryanthecreator Nov 27, 2024
647e4fa
hpt fixes for robomimic
ryanthecreator Dec 1, 2024
53f9d69
added radio to vit class, cleaned up some things and added peft lora
ryanthecreator Dec 21, 2024
3d65f03
added diffusion policy support for our robomimic stuff
ryanthecreator Dec 23, 2024
2 changes: 2 additions & 0 deletions .gitignore
@@ -123,3 +123,5 @@ venv.bak/

# private macros
macros_private.py
*.pyc
act/detr/models/__pycache__
2 changes: 1 addition & 1 deletion requirements.txt
@@ -10,4 +10,4 @@ imageio-ffmpeg
matplotlib
egl_probe>=1.0.1
torch
torchvision
torchvision
23 changes: 16 additions & 7 deletions robomimic/algo/algo.py
@@ -118,6 +118,7 @@ def __init__(
self.global_config = global_config

self.ac_dim = ac_dim
self.ac_key = global_config.train.ac_key
self.device = device
self.obs_key_shapes = obs_key_shapes

@@ -201,7 +202,7 @@ def process_batch_for_training(self, batch):
"""
return batch

def postprocess_batch_for_training(self, batch, obs_normalization_stats):
def postprocess_batch_for_training(self, batch, normalization_stats, normalize_actions=True):
"""
Does some operations (like channel swap, uint8 to float conversion, normalization)
after @process_batch_for_training is called, in order to ensure these operations
@@ -222,7 +223,11 @@ def postprocess_batch_for_training(self, batch, obs_normalization_stats):
"""

# ensure normalization_stats are torch Tensors on proper device
obs_normalization_stats = TensorUtils.to_float(TensorUtils.to_device(TensorUtils.to_tensor(obs_normalization_stats), self.device))
normalization_stats = TensorUtils.to_float(
TensorUtils.to_device(
TensorUtils.to_tensor(normalization_stats), self.device
)
)

# we will search the nested batch dictionary for the following special batch dict keys
# and apply the processing function to their values (which correspond to observations)
@@ -236,14 +241,16 @@ def recurse_helper(d):
if k in obs_keys:
# found key - stop search and process observation
if d[k] is not None:
d[k] = ObsUtils.process_obs_dict(d[k])
if obs_normalization_stats is not None:
d[k] = ObsUtils.normalize_obs(d[k], obs_normalization_stats=obs_normalization_stats)
d[k] = ObsUtils.process_obs_dict(d[k], imagenet_normalize=self.global_config.train.imagenet_normalize_images)
elif isinstance(d[k], dict):
# search down into dictionary
recurse_helper(d[k])

recurse_helper(batch)
if normalization_stats is not None:
batch = ObsUtils.normalize_batch(
batch, normalization_stats=normalization_stats, normalize_actions=normalize_actions
)
return batch

def train_on_batch(self, batch, epoch, validate=False):
@@ -502,8 +509,10 @@ def _prepare_observation(self, ob):
# ensure obs_normalization_stats are torch Tensors on proper device
obs_normalization_stats = TensorUtils.to_float(TensorUtils.to_device(TensorUtils.to_tensor(self.obs_normalization_stats), self.policy.device))
# limit normalization to obs keys being used, in case environment includes extra keys
ob = { k : ob[k] for k in self.policy.global_config.all_obs_keys }
ob = ObsUtils.normalize_obs(ob, obs_normalization_stats=obs_normalization_stats)
ob = {k: ob[k] for k in self.policy.global_config.all_obs_keys}
ob = ObsUtils.normalize_batch(
ob, obs_normalization_stats=obs_normalization_stats
)
return ob

def __repr__(self):
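The `algo.py` hunks above replace the old per-key `ObsUtils.normalize_obs` call with a single `ObsUtils.normalize_batch` call that can also normalize actions, gated by a new `normalize_actions` flag. A simplified standalone sketch of that behavior (plain floats instead of torch tensors; this is an illustration of the flag's semantics, not the robomimic implementation):

```python
def normalize_batch(batch, normalization_stats, normalize_actions=True):
    """Sketch: every key with stats gets (x - mean) / std;
    the "actions" key is skipped when normalize_actions is False."""
    out = dict(batch)
    for key, stats in normalization_stats.items():
        if key == "actions" and not normalize_actions:
            continue  # leave raw actions in place
        out[key] = (batch[key] - stats["mean"]) / stats["std"]
    return out

stats = {
    "ee_pose": {"mean": 1.0, "std": 2.0},
    "actions": {"mean": 0.0, "std": 2.0},
}
batch = {"ee_pose": 3.0, "actions": 1.0}

norm = normalize_batch(batch, stats)                          # actions normalized
raw = normalize_batch(batch, stats, normalize_actions=False)  # actions untouched
print(norm["actions"], raw["actions"])  # 0.5 1.0
```

This mirrors why `postprocess_batch_for_training` grew the extra argument: the dual-dataloader path (see the "integrated dual dataloader ee_pose normalization" commit) needs observation stats applied while optionally leaving actions in their raw frame.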
3 changes: 2 additions & 1 deletion robomimic/algo/bc.py
@@ -107,7 +108,8 @@ def process_batch_for_training(self, batch):
will be used for training
"""
input_batch = dict()
input_batch["obs"] = {k: batch["obs"][k][:, 0, :] for k in batch["obs"]}
#input_batch["obs"] = {k: batch["obs"][k][:, 0, :] for k in batch["obs"]}
input_batch["obs"] = {k: v[:, 0, :] if v.ndim != 1 else v for k, v in batch['obs'].items()}
input_batch["goal_obs"] = batch.get("goal_obs", None) # goals may not be present
input_batch["actions"] = batch["actions"][:, 0, :]
# we move to device first before float conversion because image observation modalities will be uint8 -
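The new dict comprehension in `bc.py` adds an `ndim` guard: entries shaped `(batch, time, dim)` still have their first timestep sliced out, while 1-D per-sample entries pass through unchanged. A toy illustration of the guard, using numpy arrays in place of torch tensors (the key names here are made up for the example):

```python
import numpy as np

obs = {
    "ee_pose": np.zeros((4, 10, 7)),  # (batch, time, dim): sequence observation
    "flag": np.ones((4,)),            # 1-D per-sample entry with no time axis
}

# mirror of the PR's guard: slice timestep 0 unless the entry is 1-D
input_obs = {k: v[:, 0, :] if v.ndim != 1 else v for k, v in obs.items()}

print(input_obs["ee_pose"].shape)  # (4, 7)
print(input_obs["flag"].shape)     # (4,)
```

Without the guard, the old `batch["obs"][k][:, 0, :]` indexing would raise on any 1-D entry, which is presumably why the blanket slice was replaced.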