Installation, Setup & Codebase?

Codebase Structure


Make sure to check out this README: https://github.com/UWRobotLearning/WheeledLab/tree/main/source/wheeledlab_rl

How to Get Started

About configclass

The configclass is an important building block of Isaac Lab RL environments. If you know what a Python dataclass is, then you already know how a configclass works. This section introduces how they're used in Wheeled Lab (because they're used everywhere).

Convenient Class Definition

Through dataclass / configclass, these two definitions are equivalent:

# Plain class: the __init__ boilerplate is written out by hand.
class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

# Equivalent dataclass: __init__ (plus __repr__ and __eq__) is
# generated automatically from the annotated fields.
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    age: int
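
Either version is constructed the same way; the dataclass also gives you a readable repr for free:

p = Person("Ada", 36)
print(p)  # Person(name='Ada', age=36) -- generated by the dataclass version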

A dataclass is perfect for classes that mainly just hold data, which is the case for most configuration objects. Because RL environments have so many parameters, you'll see them everywhere in our config files. Imagine if we had to write the hand-rolled __init__ version for every possible setting we wanted to add!

Inheritance & Overrides

Take a look at these two RL environment configurations:

MushrDriftRLEnvCfg
MushrDriftPlayEnvCfg

The “Play” environment is meant to be a playback environment for the training environment. The training environment has things like aggressive terminations, robot pushes, and shaped rewards that are designed to help train the robot, but we don't need any of that when simply playing back a trained policy. So we can inherit everything from MushrDriftRLEnvCfg and override just those fields to define a separate MushrDriftPlayEnvCfg.
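
Here's a minimal sketch of that inherit-and-override pattern using plain dataclasses (configclass works the same way). The field names below are made up for illustration and are not the actual Wheeled Lab definitions:

from dataclasses import dataclass, field

@dataclass
class RewardsCfg:
    side_slip_weight: float = 10.0      # shaped reward used during training

@dataclass
class MushrDriftRLEnvCfg:
    episode_length_s: float = 10.0      # short episodes / aggressive terminations
    push_robots: bool = True            # random pushes to aid exploration
    rewards: RewardsCfg = field(default_factory=RewardsCfg)

@dataclass
class MushrDriftPlayEnvCfg(MushrDriftRLEnvCfg):
    # Inherit everything; override only what playback doesn't need.
    episode_length_s: float = 60.0
    push_robots: bool = False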

Hydra

These configclass objects lend themselves nicely to Hydra, a framework for composing hierarchical configs (like the ones in Isaac Lab) and overriding them through the command line interface (CLI).

This lets us do stuff like change the weight of a specific reward when we run training:

python scripts/train_rl.py --headless env.rewards.side_slip.weight=100.0 -r RSS_DRIFT_CONFIG
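
The dotted path in that override just walks the nested config's attributes. Here's a rough, runnable sketch of the kind of nesting it assumes, using plain dataclasses with hypothetical field names:

from dataclasses import dataclass, field

@dataclass
class RewTermCfg:
    weight: float = 1.0

@dataclass
class RewardsCfg:
    side_slip: RewTermCfg = field(default_factory=RewTermCfg)

@dataclass
class EnvCfg:
    rewards: RewardsCfg = field(default_factory=RewardsCfg)

env = EnvCfg()
env.rewards.side_slip.weight = 100.0  # what the CLI override does under the hood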

This becomes more important as you train more and more models and don't want to disturb your latest stable parameters.