---
library_name: lerobot
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- robotics
- dot
license: apache-2.0
datasets:
- lerobot/aloha_sim_transfer_cube_human
pipeline_tag: robotics
---
# Model Card for "Decoder Only Transformer (DOT) Policy" for the ALOHA transfer cube task
Read more about the model and implementation details in the [DOT Policy repository](https://github.com/IliaLarchenko/dot_policy).

This model was trained with the [LeRobot library](https://huggingface.co/lerobot) and achieves state-of-the-art behavior cloning results on the ALOHA bimanual transfer cube task: a 92.6% success rate vs. 83% for the previous state-of-the-art model (ACT). (Note: it looks like the LeRobot implementation is not deterministic, or the environment makes the task easier than the original problem; I am comparing against https://huggingface.co/lerobot/act_aloha_sim_transfer_cube_human.)

You can use this model by installing LeRobot from [this branch](https://github.com/IliaLarchenko/lerobot/tree/dot_new_config).
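A typical source install of that branch looks like the following (the branch name `dot_new_config` comes from the link above; the editable-install step is a standard LeRobot setup, adjust to your environment):

```shell
# Clone the DOT branch of the LeRobot fork and install it in editable mode
git clone -b dot_new_config https://github.com/IliaLarchenko/lerobot.git
cd lerobot
pip install -e .
```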
To train the model:

```bash
python lerobot/scripts/train.py \
  --policy.type=dot \
  --dataset.repo_id=lerobot/aloha_sim_transfer_cube_human \
  --env.type=aloha \
  --env.task=AlohaTransferCube-v0 \
  --output_dir=outputs/train/aloha_transfer_cube \
| --batch_size=24 \ | |
| --log_freq=1000 \ | |
| --eval_freq=5000 \ | |
| --save_freq=5000 \ | |
| --offline.steps=100000 \ | |
| --seed=100000 \ | |
| --wandb.enable=true \ | |
| --num_workers=24 \ | |
| --use_amp=true \ | |
| --device=cuda \ | |
| --policy.optimizer_lr=0.0001 \ | |
| --policy.optimizer_min_lr=0.0001 \ | |
| --policy.optimizer_lr_cycle_steps=100000 \ | |
| --policy.train_horizon=75 \ | |
| --policy.inference_horizon=50 \ | |
| --policy.lookback_obs_steps=20 \ | |
| --policy.lookback_aug=5 \ | |
| --policy.rescale_shape="[480,640]" \ | |
| --policy.alpha=0.98 \ | |
| --policy.train_alpha=0.99 \ | |
| --wandb.project=transfer_cube | |
| ``` | |
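Note that `--policy.optimizer_lr` and `--policy.optimizer_min_lr` are both set to 0.0001, so even though `--policy.optimizer_lr_cycle_steps` configures a cosine cycle, the effective learning rate stays flat for the whole run. A minimal sketch of a cosine cycle (an illustration of the schedule shape, not the exact LeRobot scheduler code) shows why:

```python
import math

def cosine_lr(step: int, lr_max: float, lr_min: float, cycle_steps: int) -> float:
    # Standard cosine annealing: interpolate from lr_max down to lr_min
    # over one cycle. When lr_max == lr_min the result is constant.
    cos_factor = (1 + math.cos(math.pi * (step % cycle_steps) / cycle_steps)) / 2
    return lr_min + (lr_max - lr_min) * cos_factor

# With optimizer_lr == optimizer_min_lr the schedule is flat:
print(cosine_lr(0, 1e-4, 1e-4, 100_000))       # 0.0001
print(cosine_lr(50_000, 1e-4, 1e-4, 100_000))  # 0.0001

# With a lower min_lr it would actually decay:
print(cosine_lr(50_000, 1e-4, 1e-5, 100_000))  # midpoint of the cycle
```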
To evaluate the model:

```bash
python lerobot/scripts/eval.py \
  --policy.path=IliaLarchenko/dot_transfer_cube \
  --env.type=aloha \
  --env.task=AlohaTransferCube-v0 \
  --eval.n_episodes=1000 \
  --eval.batch_size=100 \
  --seed=1000000
```
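A side note on `--eval.n_episodes=1000`: with that many episodes the reported 92.6% success rate is statistically tight. A quick normal-approximation confidence interval (my own back-of-the-envelope calculation, not part of the evaluation script):

```python
import math

# Normal-approximation 95% confidence interval for a binomial proportion
p, n = 0.926, 1000          # observed success rate, number of eval episodes
se = math.sqrt(p * (1 - p) / n)
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"95% CI: {lo:.3f} .. {hi:.3f}")  # 95% CI: 0.910 .. 0.942
```

The interval comfortably excludes the 83% ACT baseline, so the gap is not an artifact of evaluation noise.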
Model size:

- Total parameters: 14.1M
- Trainable parameters: 2.9M