The Farama Foundation maintains a number of other projects, most of which use Gymnasium. Topics include: multi-agent RL (PettingZoo), offline RL (Minari), gridworlds (Minigrid), robotics (Gymnasium-Robotics), multi-objective RL (MO-Gymnasium), many-agent RL (MAgent2), 3D navigation (Miniworld), and many more.
Third-party environments with Gymnasium#
This page contains environments that are not maintained by the Farama Foundation and, as such, cannot be guaranteed to function as intended.
If you’d like to contribute an environment, please reach out on Discord.
Contextual extensions of popular reinforcement learning environments that enable training and test distributions for generalization, e.g. CartPole with variable pole lengths or Brax robots with different ground frictions.
A benchmark library for Dynamic Algorithm Configuration. Its focus is on reproducibility and comparability of different DAC methods as well as easy analysis of the optimization process.
Flappy Bird as a Farama Gymnasium environment.
A simple environment for single-agent reinforcement learning algorithms on a clone of Flappy Bird, the hugely popular arcade-style mobile game. Both state and pixel observation environments are available.
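Like most entries on this page, it follows the standard Gymnasium API. A minimal random-agent loop might look like the sketch below; the import name and the "FlappyBird-v0" id are assumptions based on the package's naming, so check the project's README for the exact id.

```python
import gymnasium
import flappy_bird_gymnasium  # noqa: F401 -- assumed to register "FlappyBird-v0" on import

env = gymnasium.make("FlappyBird-v0")
obs, info = env.reset(seed=42)
terminated = truncated = False
while not (terminated or truncated):
    action = env.action_space.sample()  # random policy; replace with a trained agent
    obs, reward, terminated, truncated, info = env.step(action)
env.close()
```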
Environments where the agent interacts with cellular automata by changing their cell states.
gym-jiminy presents an extension of the initial Gym for robotics using Jiminy, an extremely fast and lightweight simulator for poly-articulated systems that uses Pinocchio for physics evaluation and Meshcat for web-based 3D rendering.
An environment for guiding automated theorem provers based on saturation algorithms (e.g. Vampire).
Gym Trading Env simulates stock (or crypto) markets from historical data. It was designed to be fast and easily customizable.
An environment for behavioral planning in autonomous driving, with an emphasis on high-level perception and decision-making rather than low-level sensing and control.
An environment to easily implement discrete MDPs as gym environments. Turn a set of matrices (P(s'|s, a) and R(s', s, a)) into a gym environment that represents the discrete MDP ruled by these dynamics.
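To make the idea concrete, here is a minimal sketch of such an environment built directly from transition and reward matrices. The class name, matrix layout, and constructor parameters are illustrative assumptions, not the library's actual API.

```python
import gymnasium
from gymnasium import spaces


class MatrixMDPEnv(gymnasium.Env):
    """Hypothetical sketch: a discrete MDP defined by matrices
    P[s', s, a] = P(s'|s, a) and R[s', s, a] = R(s', s, a)."""

    def __init__(self, P, R, initial_state=0):
        self.P, self.R = P, R  # transition probabilities and rewards
        self.initial_state = initial_state
        n_states, _, n_actions = P.shape
        self.observation_space = spaces.Discrete(n_states)
        self.action_space = spaces.Discrete(n_actions)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.state = self.initial_state
        return self.state, {}

    def step(self, action):
        # Sample s' ~ P(.|s, a), then look up the reward R(s', s, a).
        next_state = self.np_random.choice(
            self.observation_space.n, p=self.P[:, self.state, action]
        )
        reward = float(self.R[next_state, self.state, action])
        self.state = next_state
        return next_state, reward, False, False, {}  # no terminal states in this sketch
```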
An open, minimalist Gymnasium environment for autonomous coordination in wireless mobile networks.
PyBullet based simulations of a robotic arm moving objects.
QWOP is a game about running extremely fast down a 100 meter track. With this Gymnasium environment you can train your own agents and try to beat the current world record (5.0 in-game seconds for humans and 4.7 for AI).
Highly scalable and customizable Safe Reinforcement Learning library.
SimpleGrid is a super simple and minimal grid environment for Gymnasium. It is easy to use and customise, and is intended to offer an environment for rapidly testing and prototyping different RL algorithms.
spark-sched-sim simulates Spark clusters for RL-based job scheduling algorithms. Spark jobs are encoded as directed acyclic graphs (DAGs), providing opportunities to experiment with graph neural networks (GNNs) in the RL context.
Supported fork of gym-retro: turn classic video games into Gymnasium environments.
Gymnasium wrapper for various environments in the SUMO traffic simulator. Supports both single-agent and multi-agent settings (using PettingZoo).
tmrl is a distributed framework for training Deep Reinforcement Learning AIs in real-time applications. It is demonstrated on the TrackMania 2020 video game.
Third-Party Environments using Gym#
There are a large number of third-party environments using various versions of Gym. Many of these can be adapted to work with Gymnasium (see Compatibility with Gym), but are not guaranteed to be fully functional.
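As a sketch of that compatibility path: the Shimmy package provides wrapper environments that expose an old Gym environment through the Gymnasium API. The "SomeGymEnv-v1" id below is a hypothetical placeholder for whatever environment the legacy package registers.

```python
import gymnasium

# Requires `pip install shimmy`; "SomeGymEnv-v1" is a placeholder for an
# environment id registered with the legacy `gym` package.
env = gymnasium.make("GymV21Environment-v0", env_id="SomeGymEnv-v1")
obs, info = env.reset()
obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
env.close()
```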
Video Game environments#
A 3v3 MOBA environment where you train creatures to fight each other.
A simple environment for benchmarking single and multi-agent reinforcement learning algorithms on a clone of the Slime Volleyball game.
Gym (and PettingZoo) wrappers for arbitrary and premade environments with the Unity game engine.
A library for testing reinforcement learning algorithms on various UAVs. It is built on the Bullet physics engine and offers flexible rendering options, time-discrete steppable physics, Python bindings, and support for custom drones of any configuration, whether biplanes, quadcopters, rockets, or anything else you can think of.
Robotics environments#
Mars Explorer is a Gym-compatible environment designed and developed as an initial endeavor to bridge the gap between powerful deep reinforcement learning methodologies and the problem of exploring and covering unknown terrain.
Robo-gym provides a collection of reinforcement learning environments involving robotic tasks applicable in both simulation and real-world robotics.
Gym environments that let you control real robots in a laboratory via the internet.
Evaluate safety, robustness, and generalization via PyBullet-based CartPole and Quadrotor environments, with CasADi (symbolic) a priori dynamics and constraints.
A large-scale benchmark for co-optimizing the design and control of soft robots.
A simulation environment with high-quality realistic scenes and interactive physics using PyBullet.
A library that provides dual dexterous-hand manipulation tasks through Isaac Gym.
Reinforcement learning environments for the Omniverse Isaac simulator.
Autonomous Driving environments#
A lane-following simulator built for the Duckietown project (small-scale self-driving car course).
An environment for simulating a wide variety of electric drives taking into account different types of electric motors and converters.
A Gym environment for solving motion-planning problems in various traffic scenarios, compatible with CommonRoad benchmarks and providing configurable rewards, action spaces, and observation spaces.
Train a model-based RL agent in simulation and, without finetuning, transfer it to small-scale race cars.
An open-source reinforcement learning environment for autonomous racing.
A gym environment for a miniature racecar using the PyBullet physics engine.
Other environments#
Connect-4-Gym is an environment for creating AIs that learn by playing against themselves, with each assigned an Elo rating. It can be used to train and evaluate reinforcement learning agents on the classic board game Connect Four.
Reinforcement learning environments for compiler optimization tasks, such as LLVM phase ordering, GCC flag tuning, and CUDA loop nest code generation.
The environment consists of transportation puzzles in which the player’s goal is to push all boxes to the warehouse’s storage locations.
NLPGym provides interactive environments for standard NLP tasks such as sequence tagging, question answering, and sequence classification.
ShinRL: A Library for Evaluating RL Algorithms from Theoretical and Practical Perspectives (Deep RL Workshop 2021)
RL environments in JAX, allowing for highly vectorised environments, with support for a number of suites including classic Gym environments, MinAtar, bsuite, and more.
AnyTrading is a collection of Gym environments for reinforcement learning-based trading algorithms with a great focus on simplicity, flexibility, and comprehensiveness.
MtSim is a simulator for the MetaTrader 5 trading platform for reinforcement learning-based trading algorithms.
The OpenModelica Microgrid Gym (OMG) package is a software toolbox for the simulation and control optimization of microgrids based on energy conversion by power electronic converters.
GymFC is a modular framework for synthesizing neuro-flight controllers. It has been used to generate policies for Neuroflight, the world's first open-source neural-network flight-control firmware.