A Minimal Example of Isaac Gym with DQN and PPO
This repository provides a minimal example of NVIDIA's Isaac Gym, intended to help researchers quickly understand the code structure so they can design fully customised large-scale reinforcement learning experiments.
The example is based on the official implementation from Isaac Gym's Benchmark Experiments; we follow a similar Cartpole implementation, but with a minimal number of lines of code for maximal readability, and without any third-party RL frameworks.
Note: The current implementation is based on Isaac Gym Preview Version 3 and supports two RL algorithms: DQN and PPO. PPO appears to be the default RL algorithm for Isaac Gym, as used in the recent works Learning to walk and Object Re-orientation.
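For readers unfamiliar with why PPO is the common choice here, the core of the algorithm is its clipped surrogate objective. The sketch below is a minimal, framework-free illustration in NumPy (not code from this repository; the function name and signature are our own):

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, eps=0.2):
    """Clipped surrogate objective from the PPO paper (to be maximised)."""
    # Probability ratio between the new and old policies.
    ratio = np.exp(logp_new - logp_old)
    # Pessimistic minimum of the unclipped and clipped terms,
    # averaged over the batch.
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return np.mean(np.minimum(unclipped, clipped))
```

The clipping keeps each policy update close to the data-collecting policy, which is what makes PPO stable enough to train on the thousands of parallel environments Isaac Gym simulates on the GPU.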