Gym Release Notes#

0.26.2#

Released on 2022-10-04 - GitHub - PyPI

Release notes

This is another very minor bug fix release.

Bug Fixes

  • As reset now returns (obs, info), the final step's info in the vector environments was being overwritten. The final observation and info are now contained within the info as "final_observation" and "final_info" (see the sketch after this list) @pseudo-rnd-thoughts
  • Adds warnings when trying to render without specifying the render_mode @younik
  • Updates Atari Preprocessing such that the wrapper can be pickled @vermouth1992
  • GitHub CI was hardened such that the CI has only read permissions @sashashura
  • Clarify and fix typo in GraphInstance @ekalosak
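
A minimal sketch of where the final observation and info now live in a vector environment ("CartPole-v1" and three sub-environments are used as an example):

```python
import gym

envs = gym.vector.make("CartPole-v1", num_envs=3)
obs, info = envs.reset(seed=42)
for _ in range(200):
    obs, rewards, terminations, truncations, infos = envs.step(envs.action_space.sample())
    if "final_observation" in infos:
        # entries are None for sub-environments that did not finish on this step
        finished_obs = [o for o in infos["final_observation"] if o is not None]
        # ... store or log the finished episodes' last observations here
envs.close()
```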

0.26.1#

Released on 2022-09-16 - GitHub - PyPI

Release Notes

This is a very minor bug fix release for 0.26.0

Bug Fixes

  • #3072 - Previously, mujoco was a required module even if only mujoco-py was used. This has been fixed so that only mujoco-py needs to be installed and used. @YouJiacheng
  • #3076 - PixelObservationWrapper raises an exception if the env.render_mode is not specified. @vmoens
  • #3080 - Fixed a bug in CarRacing where the colour of the wheels was not correct @foxik
  • #3083 - Fixed a bug in BipedalWalker where, if the agent moved backwards, the rendered arrays would be a different size. @younik

Spelling

  • Fixed truncation typo in readme API example @rdnfn
  • Updated the pendulum observation space from angle to theta to make it more consistent @ikamensh

0.26.0#

Released on 2022-09-06 - GitHub - PyPI

Release notes for v0.26.0

This release aims to be the last of the major API changes to the core API. All of the previously "turned off" changes of the base API (step termination / truncation, reset info, no seed function, render mode determined by initialization) are now expected by default. We still plan to make breaking changes to Gym itself, but only to things that are very easy to upgrade (environments and wrappers), and things that aren't super commonly used (the vector API). Once those aspects are stabilized, we'll do a proper 1.0 release and follow semantic versioning. Additionally, unless something goes terribly wrong with this release and we have to release a patched version, this will be the last release of Gym for a while.

If you've been waiting for a "stable" release of Gym to upgrade your project given all the changes that have been going on, this is the one.

We also just wanted to say that we tremendously appreciate the community's patience with us as we've gone on this journey of taking over the maintenance of Gym and making all of these huge changes to the core API. We appreciate your patience and support, and hopefully all the changes from here on out will be much more minor.

Breaking backward compatibility

These changes apply to all of Gym's internal wrappers and environments. For environments that have not been updated, we provide the EnvCompatibility wrapper so users can convert old Gym v21 / v22 environments to the new core API. This wrapper can be easily applied through the apply_api_compatibility parameter of gym.make and gym.register.
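
A minimal sketch of applying the compatibility wrapper through gym.make; "OldV21Env-v0" is a hypothetical environment id registered with the old (four-value step, no reset info) API:

```python
import gym

# "OldV21Env-v0" is hypothetical; apply_api_compatibility wraps an old-style
# environment so that it follows the new (obs, info) reset and five-value step API.
env = gym.make("OldV21Env-v0", apply_api_compatibility=True)
```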

  • Step Termination / truncation - The Env.step function returns 5 values instead of the previous 4 (observation, reward, termination, truncation, info). A blog post with more details will be released soon to explain this decision. @arjun-kg
  • Reset info - The Env.reset function returns two values (obs and info) with no return_info parameter for gym wrappers and environments. This is important for environments that provide action-masking information for each action, which was not possible for resets. @balisujohn
  • No Seed function - While Env.seed was a helpful function, it was almost solely used at the beginning of an episode, and its functionality has been moved to Env.reset(seed=...). In addition, for several environments like Atari that utilise external random number generators, it was not possible to set the seed at any time other than reset. Therefore, seed is no longer expected to function within gym environments and has been removed from all gym environments. @balisujohn
  • Rendering - It is normal to only use a single render mode, so to help open and close the rendering window, Env.render no longer takes any arguments; all render arguments are now part of the environment's constructor, i.e., gym.make("CartPole-v1", render_mode="human"). For more detail on the new API, see the blog post (and the sketch below). @younik
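
A hedged sketch of the resulting core API (the "CartPole-v1" id is used as an example):

```python
import gym

# render_mode is fixed at construction; reset returns (obs, info);
# step returns (obs, reward, terminated, truncated, info).
env = gym.make("CartPole-v1", render_mode="human")
obs, info = env.reset(seed=123)
terminated = truncated = False
while not (terminated or truncated):
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
env.close()
```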

Major changes

  • Render modes - In v25, there was a change in the meaning of render modes, i.e. "rgb_array" returned a list of rendered frames while "single_rgb_array" returned a single frame. This has been reverted in this release: "rgb_array" has its previous meaning of returning a single frame, and a new mode "rgb_array_list" returns a list of RGB arrays. The capability to return a list of rendered frames is achieved through a wrapper applied during gym.make. #3040 @pseudo-rnd-thoughts @younik
  • Added save_video, which uses moviepy to render a list of RGB frames, and updated RecordVideo to use this function. This removes support for recording ansi outputs (see the sketch after this list). #3016 @younik
  • RandomNumberGenerator functions: rand, randn, randint, get_state, set_state, hash_seed, create_seed, _bigint_from_bytes and _int_list_from_bigint have been removed. @balisujohn
  • Bumped ale-py to 0.8.0, which is compatible with the new core API
  • Added EnvAPICompatibility wrapper @RedTachyon
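
A sketch combining the "rgb_array_list" render mode with save_video; the exact save_video keyword arguments beyond the frames list and the output folder are assumptions, and moviepy must be installed:

```python
import gym
from gym.utils.save_video import save_video

env = gym.make("CartPole-v1", render_mode="rgb_array_list")
env.reset(seed=0)
terminated = truncated = False
while not (terminated or truncated):
    _, _, terminated, truncated, _ = env.step(env.action_space.sample())

frames = env.render()  # list of RGB arrays collected since the last reset
save_video(frames, "videos", fps=env.metadata.get("render_fps", 30))
env.close()
```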

Minor changes

0.25.2#

Released on 2022-08-18 - GitHub - PyPI

Release notes for v0.25.2

This is a fairly minor bug fix release.

Bug Fixes

  • Removes the requirement for _TimeLimit.truncated in info for the step compatibility functions. This makes step compatible with Envpool @arjun-kg
  • As the ordering of Dict spaces matters when flattening spaces, updated the __eq__ to account for the .keys() ordering. @XuehaiPan
  • Allows CarRacing environment to be pickled. Updated all gym environments to be correctly pickled. @RedTachyon
  • Seeding Dict and Tuple spaces with integers could cause lower-specification computers to hang due to requiring 8 GB of memory. Updated the integer seeding to not require unique subseeds (subseed collisions are rare). For users that require unique subseeds for all subspaces, we recommend using a dictionary or tuple of subseeds. @olipinski
  • Fixed the metaclass implementation for the new render api to allow custom environments to use metaclasses as well. @YouJiacheng

Updates

  • Simplifies the step compatibility functions to make them easier to debug. With the old step API, the time limit wrapper favours terminated over truncated if both are true. This is because the old done-based step API can only encode 3 states (it cannot encode terminated=True and truncated=True), so we must encode either terminated=True or truncated=True (see the sketch after this list). @pseudo-rnd-thoughts
  • Add Swig as a dependency @kir0ul
  • Add type annotation for render_mode and metadata @bkrl
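
A hedged sketch of that encoding; old_style_step is a hypothetical helper, not the library's own compatibility function:

```python
def old_style_step(step_returns):
    """Collapse a five-value step return into the old (obs, reward, done, info) form."""
    obs, reward, terminated, truncated, info = step_returns
    # terminated takes precedence: only flag a pure time-limit truncation
    if truncated and not terminated:
        info["TimeLimit.truncated"] = True
    return obs, reward, terminated or truncated, info
```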

0.25.1#

Released on 2022-07-26 - GitHub - PyPI

Release notes

  • Added rendering for CliffWalking environment @younik
  • PixelObservationWrapper only supports the new render API due to difficulty in supporting both old and new APIs. A warning is raised if the user is using the old API @vmoens

Bug fix

  • Reverted an incorrect edit on wrapper.FrameStack @ZhiqingXiao
  • Fix reset bounds for mountain car @psc-g
  • Removed skipped tests causing bugs not to be caught @pseudo-rnd-thoughts
  • Added backward compatibility for environments without metadata @pseudo-rnd-thoughts
  • Fixed BipedalWalker rendering for RGB arrays @1b15
  • Fixed bug in PixelObsWrapper for using the new rendering @younik

Typos

  • Rephrase observations' definition in Lunar Lander Environment @EvanMath
  • Top-docstring in gym/spaces/dict.py @Ice1187
  • Several typos in humanoidstandup_v4.py, mujoco_env.py, and vector_list_info.py @timgates42
  • Typos in passive environment checker @pseudo-rnd-thoughts
  • Typos in Swimmer rotations @lin826

0.25.0#

Released on 2022-07-13 - GitHub - PyPI

Release notes

This release finally introduces all new API changes that have been planned for the past year or more, all of which will be turned on by default in a subsequent release. After this point, Gym development should get massively smoother. This release also fixes large bugs present in 0.24.0 and 0.24.1, and we highly discourage using those releases.

API Changes

  • Step - A majority of deep reinforcement learning algorithm implementations are incorrect due to an important difference between theory and practice, as done is not equivalent to termination. As a result, we have modified the step function to return five values: obs, reward, termination, truncation, info. The full theoretical and practical reasons (along with example code changes) for these changes will be explained in a soon-to-be-released blog post. The aim is for the change to be backward compatible (for now); for issues, please report them on GitHub or the Discord. @arjun-kg
  • Render - The render API is changed such that the mode has to be specified during gym.make with the keyword render_mode, after which the render mode is fixed. For further details see https://younis.dev/blog/2022/render-api/ and #2671. This has the following additional changes:
    • with render_mode="human" you don't need to call .render(), rendering will happen automatically on env.step()
    • with render_mode="rgb_array", .render() pops the list of frames rendered since the last .reset()
    • with render_mode="single_rgb_array", .render() returns a single frame, like before.
  • Space.sample(mask=...) allows a mask when sampling actions to enable/disable certain actions from being randomly sampled (see the sketch after this list). We recommend developers add this to the info parameter returned by reset(return_info=True) and step. See #2906 for example implementations of the masks or the individual spaces. We have added an example version of this in the taxi environment. @pseudo-rnd-thoughts
  • Add Graph for environments that use graph style observation or action spaces. Currently, the node and edge spaces can only be Box or Discrete spaces. @jjshoots
  • Add Text space for Reinforcement Learning that involves communication between agents and has dynamic-length messages (otherwise MultiDiscrete can be used). @ryanrudes @pseudo-rnd-thoughts
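
A minimal sketch of action masking on a Discrete space; the np.int8 mask convention follows the Discrete space, and the conventions for other space types are not shown:

```python
import numpy as np
from gym.spaces import Discrete

space = Discrete(4)
mask = np.array([1, 0, 1, 0], dtype=np.int8)  # only actions 0 and 2 may be sampled
action = space.sample(mask=mask)
assert action in (0, 2)
```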

Bug fixes

  • Fixed CarRacing termination such that if the agent finishes the final lap, the environment ends through truncation, not termination. This required a version bump of CarRacing to v2 and removed the discrete CarRacing in favour of gym.make("CarRacing-v2", continuous=False) @araffin
  • In v0.24.0, opencv-python was an accidental requirement for the project. This has been reverted. @KexianShen @pseudo-rnd-thoughts
  • Updated utils.play such that if the environment specifies keys_to_action, the function will automatically use that data. @Markus28
  • When rendering the blackjack environment, fixed a bug where rendering would change the dealer's top card. @balisujohn
  • Updated the mujoco docstring to reflect changes that were accidentally overwritten. @Markus28

Misc

  • The whole project is partially type hinted using pyright (none of the project files is ignored by the type hinter). @RedTachyon @pseudo-rnd-thoughts (Future work will add strict type hinting to the core API)
  • Action masking added to the taxi environment (no version bump due to being backwards compatible) @pseudo-rnd-thoughts
  • The Box space shape inference now allows high and low scalars to be automatically set to a (1,) shape. Minor changes to identifying scalars. @pseudo-rnd-thoughts
  • Added option support in classic control environment to modify the bounds on the initial random state of the environment @psc-g
  • The RecordVideo wrapper is becoming deprecated with no support for TextEncoder with the new render API. The plan is to replace RecordVideo with a single function that will receive a list of frames from an environment and automatically render them as a video using MoviePy. @johnMinelli
  • The gym py.Dockerfile is reduced from 2 GB to 1.5 GB through a number of optimisations @TheDen

0.24.1#

Released on 2022-06-07 - GitHub - PyPI

This is a bug fix release for version 0.24.0

Bugs fixed:

  • Replaced the environment checker introduced in v0.24 such that the environment checker will not call step and reset during make. The new version is a wrapper that observes the data that step and reset return on their first call and checks the data against the environment checker. @pseudo-rnd-thoughts
  • Fixed MuJoCo v4 arguments key callback, closing the environment in renderer and the mujoco_rendering close method. @rodrigodelazcano
  • Removed redundant warning in registration @RedTachyon
  • Removed maths operations from MuJoCo xml files @quagla
  • Added support for unpickling legacy spaces.Box @pseudo-rnd-thoughts
  • Fixed mujoco environment action and observation space docstring tables @pseudo-rnd-thoughts
  • Disabled wrappers from accessing the _np_random property; np_random is now forwarded to environments @pseudo-rnd-thoughts
  • Rewrite setup.py to add a "testing" meta dependency group @pseudo-rnd-thoughts
  • Fixed docstring in rescale_action wrapper @gianlucadecola

0.24.0#

Released on 2022-05-25 - GitHub - PyPI

Major changes

  • Added v4 mujoco environments that use the new deepmind mujoco 2.2.0 module.
    This can be installed through pip install gym[mujoco] with the old bindings still being
    available using the v3 environments and pip install gym[mujoco-py].
    These new v4 environments should have the same training curves as v3. For the Ant, we found that there was a
    contact parameter that was not applied in v3 and that can be enabled in v4; however, it was found to produce significantly
    worse performance, see the comment for more details. @rodrigodelazcano
  • The vector environment step info API has been changed to allow hardware acceleration in the future.
    See this PR for the modified info style that now uses dictionaries instead of a list of environment infos.
    If you still wish to use the list info style, then use the VectorListInfo wrapper (see the sketch after this list). @gianlucadecola
  • On gym.make, the gym env_checker is run that includes calling the environment reset and step to check if the
    environment is compliant to the gym API. To disable this feature, run gym.make(..., disable_env_checker=True). @RedTachyon
  • Re-added gym.make("MODULE:ENV") import style that was accidentally removed in v0.22 @arjun-kg
  • Env.render is now order enforced such that Env.reset is required before Env.render is called. If this is a required
    feature, then set the OrderEnforcer wrapper's disable_render_order_enforcing=True. @pseudo-rnd-thoughts
  • Added wind and turbulence to the Lunar Lander environment; this is turned off by default,
    use the wind_power and turbulence parameters. @virgilt
  • Improved the play function to allow multiple keyboard letters to be passed instead of ASCII values @Markus28
  • Added google-style pydoc strings for most of the repository @pseudo-rnd-thoughts @Markus28
  • Added discrete car racing environment version through gym.make("CarRacing-v1", continuous=False)
  • Pygame is now an optional module for box2d and classic control environments that is only necessary for rendering.
    Therefore, install pygame using pip install gym[box2d] or pip install gym[classic_control] @gianlucadecola @RedTachyon
  • Fixed bug in batch spaces (used in VectorEnv) such that the original space's seed was ignored @pseudo-rnd-thoughts
  • Added AutoResetWrapper that automatically calls Env.reset when Env.step returns done as True @balisujohn
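
A hedged sketch of the two vector-env info styles; the VectorListInfo import path follows the wrapper name in these notes and should be treated as an assumption:

```python
import gym
from gym.wrappers import VectorListInfo

envs = gym.vector.make("CartPole-v1", num_envs=4)  # step now returns infos as a dict of arrays
list_style_envs = VectorListInfo(envs)             # restores the old list-of-dicts info style
```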

Minor changes

  • BipedalWalker and LunarLander's observation spaces have non-infinite upper and lower bounds. @jjshoots
  • Bumped the ALE-py version to 0.7.5
  • Improved the performance of car racing through not rendering polygons off screen @andrewtanJS
  • Fixed turn indicators that were black not red/white in Car racing @jjshoots
  • Bug fixes for VecEnvWrapper to forward method calls to the environment @arjun-kg
  • Removed an unnecessary try/except on Box2d such that if Box2d is not installed correctly, a more helpful error is shown @pseudo-rnd-thoughts
  • Simplified the gym.registry backend @RedTachyon
  • Re-added python 3.6 support through backports of python 3.7+ modules. This is not tested or compatible with the mujoco environments. @pseudo-rnd-thoughts

0.23.1#

Released on 2022-03-11 - GitHub - PyPI

This release contains a few small bug fixes and no breaking changes.

0.23.0#

Released on 2022-03-04 - GitHub - PyPI

This release contains many bug fixes and a few small changes.

Breaking changes:

Many minor bug fixes (@vwxyzjn , @RedTachyon , @rusu24edward , @Markus28 , @dsctt , @andrewtanJS , @tristandeleu , @duburcqa)

0.22.0#

Released on 2022-02-17 - GitHub - PyPI

This release represents the largest set of changes ever to Gym, and represents a huge step towards the plans for 1.0 outlined here: #2524

Gym now has a new comprehensive documentation site: https://www.gymlibrary.ml/ !

API changes:

-env.reset now accepts three new arguments:

options - Usable for things like controlling curriculum learning without reinitializing the environment, which can be expensive (@RedTachyon)
seed - Environment seeds can now be passed to this reset argument. The old .seed() method is being deprecated in favor of this, though it will continue to function as before until the 1.0 release for backwards compatibility purposes (@RedTachyon)
return_info - When set to True, reset will return (obs, info). This currently defaults to False, but will become the default behavior in Gym 1.0 (@RedTachyon)
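
A hedged sketch of these arguments under the 0.22-era API; the options payload is hypothetical and environment-specific:

```python
import gym

env = gym.make("CartPole-v1")
# seed the episode, opt in to the (obs, info) return, and pass environment-specific options
obs, info = env.reset(seed=42, return_info=True, options={"difficulty": 1})
```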

-Environment names no longer require a version during registration and will suggest intelligent similar names (@kir0ul, @JesseFarebro)

-Vector environments now support terminal_observation in info and support batch action spaces (@vwxyzjn, @tristandeleu)

Environment changes:
-The blackjack and frozen lake toy_text environments now have nice graphical rendering using PyGame (@1b15)
-Moved robotics environments to gym-robotics package (@seungjaeryanlee, @Rohan138, @vwxyzjn) (per discussion in #2456 (comment))
-The bipedal walker and lunar lander environments were consolidated into one class (@andrewtanJS)
-Atari environments now use standard seeding API (@JesseFarebro)
-Fixed large bugs in the car_racing box2d environment, bumped version (@carlosluis, @araffin)
-Refactored all box2d and classic_control environments to use PyGame instead of Pyglet, as issues with pyglet have been one of the most frequent sources of GitHub issues over the life of the gym project (@andrewtanJS)

Other changes:
-Removed DiscreteEnv class, built in environments no longer use it (@carlosluis)
-Large numbers of type hints added (@ikamensh, @RedTachyon)
-Python 3.10 support
-Tons of additional code refactoring, cleanup, error message improvements and small bug fixes (@vwxyzjn, @Markus28, @RushivArora, @jjshoots, @XuehaiPan, @Rohan138, @JesseFarebro, @Ericonaldo, @AdilZouitine, @RedTachyon)
-All environment files now have dramatically improved readmes at the top (that the documentation website automatically pulls from)
-As part of the seeding changes, Gym's RNG has been modified to use np.random.Generator, as the RandomState API has been deprecated. The methods randint, rand, and randn are replaced by integers, random, and standard_normal respectively. As a consequence, the random number generator has changed from MT19937 to PCG64.
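
A minimal sketch of the renamed Generator methods (plain NumPy, matching what env.np_random now exposes):

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # PCG64-backed Generator
rng.integers(0, 10)      # replaces randint
rng.random()             # replaces rand
rng.standard_normal()    # replaces randn
```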

0.21.0#

Released on 2021-10-02 - GitHub - PyPI

-The old Atari entry point that was broken with the last release and the upgrade to ALE-Py is fixed (@JesseFarebro)
-Atari environments now give much clearer error messages and warnings (@JesseFarebro)
-A new plugin system to enable an easier inclusion of third party environments has been added (@JesseFarebro)
-Atari environments now use the new plugin system to prevent clobbered names and other issues (@JesseFarebro)
-pip install gym[atari] no longer distributes Atari ROMs that the ALE (the Atari emulator used) needs to run the various games. The easiest way to install ROMs into the ALE has been to use AutoROM. Gym now has a hook to AutoROM for easier CI automation so that using pip install gym[accept-rom-license] calls AutoROM to add ROMs to the ALE. You can install the entire suite with the shorthand gym[atari, accept-rom-license]. Note that, as described in the name, by installing gym[accept-rom-license] you are confirming that you have the relevant license to install the ROMs. (@JesseFarebro)
-An accidental breaking change when loading saved policies trained on old versions of Gym with environments using the box action space has been fixed. (@RedTachyon)
-Pendulum has had a minor fix made to its physics logic and the version has been bumped to v1 (@RedTachyon)
-Tests have been refactored into an orderly manner (@RedTachyon)
-Dict spaces now have standard dict helper methods (@Rohan138)
-Environment properties are now forwarded to the wrapper (@tristandeleu)
-Gym now properly enforces calling reset before stepping for the first time (@ahmedo42)
-Proper piping of error messages to stderr (@XuehaiPan)
-Fix video saving issues (@zlig)

Also, Gym is compiling a list of third party environments to include in the new documentation website we're working on. Please submit PRs for ones that are missing: https://github.com/openai/gym/blob/master/docs/third_party_environments.md

0.20.0#

Released on 2021-09-14 - GitHub - PyPI

Major Change:

  • Replaced Atari-Py dependency with ALE-Py and bumped all versions. This is a massive upgrade with many changes, please see the full explainer (@JesseFarebro)
  • Note that ALE-Py does not include ROMs. You can install ROMs in two lines of bash with AutoROM though (pip3 install autorom and then autorom), see https://github.com/PettingZoo-Team/AutoROM. This is the recommended approach for CI, etc.

Breaking changes and new features:

  • Add RecordVideo wrapper, deprecate monitor wrapper in favor of it and RecordEpisodeStatistics wrapper (@vwxyzjn)
  • Dependencies used outside of environments (e.g. for wrappers) are now in the 'other' extra (@jkterry1)
  • Moved algorithmic and unused toytext envs (guessing game, hotter colder, nchain, roulette, kellycoinflip) to third party repos (@jkterry1, @Rohan138)
  • Fixed flatten utility and flatdim in MultiDiscrete space (@tristandeleu)
  • Add __setitem__ to dict space (@jfpettit)
  • Large fixes to .contains method for box space (@FirefoxMetzger)
  • Made blackjack environment properly comply with Barto and Sutton book standard, bumped to v1 (@RedTachyon)
  • Added NormalizeObservation and NormalizeReward wrappers (@vwxyzjn)
  • Add __getitem__ and __len__ to MultiDiscrete space (@XuehaiPan)
  • Changed .shape to be a property of box space to prevent unexpected behaviors (@RedTachyon)

Bug fixes and upgrades:

  • Video recorder gracefully handles closing (@XuehaiPan)
  • Remaining unnecessary dependencies in setup.py are resolved (@jkterry1)
  • Minor acrobot performance improvements (@TuckerBMorgan)
  • Pendulum properly renders when 0 force is sent (@Olimoyo)
  • Make observations dtypes be consistent with observation space dtypes for all classic control envs and bipedalwalker (@RedTachyon)
  • Removed unused and long-deprecated features in registration (@Rohan138)
  • Framestack wrapper now inherits from obswrapper (@jfpettit)
  • Seed method for spaces.Tuple and spaces.Dict now properly function, are fully stochastic, are fully featured and behave in the expected manner (@XuehaiPan, @RaghuSpaceRajan)
  • Replace time() with perf_counter() for better measurements of short duration (@zuoxingdong)

0.19.0#

Released on 2021-08-13 - GitHub - PyPI

Gym 0.19.0 is a large maintenance release, and the first since @jkterry1 became the maintainer. There should be no breaking changes in this release.

New features:

  • Added custom datatype argument to multidiscrete space (@m-orsini)
  • API compliance test added based on SB3 and PettingZoo tests (@amtamasi)
  • RecordEpisodeStatistics works with VectorEnv (@vwxyzjn)

Bug fixes:

  • Removed unused dependencies, removed unnecessary dependency version requirements that caused installation issues on newer machines, added a full requirements.txt and moved general dependencies to extras. Notably, "toy_text" is not a used extra. atari-py is now pegged to a precise working version pending the switch to ale-py (@jkterry1)
  • Bug fixes to rewards in FrozenLake and FrozenLake8x8; versions bumped to v1 (@ZhiqingXiao)
  • Removed remaining numpy deprecation warnings (@super-pirata)
  • Fixes to video recording (@mahiuchun, @zlig)
  • EZ pickle argument fixes (@zzyunzhi, @Indoril007)
  • Other very minor (nonbreaking) fixes

Other:

  • Removed small bits of dead code (@jkterry1)
  • Numerous typo, CI and documentation fixes (mostly @cclauss)
  • New readme and updated third party env list (@jkterry1)
  • Code is now all flake8 compliant through black (@cclauss)

0.12.5: Fixed fetch/slide#

Released on 2019-05-29 - GitHub - PyPI

v0.9.6: Cleanup + Remove Unmaintained Code#

Released on 2018-02-01 - GitHub - PyPI

  • Now your Env and Wrapper subclasses should define step, reset, render, close, seed rather than underscored method names (see the sketch after this list).
  • Removed the board_game, debugging, safety, parameter_tuning environments since they're not being maintained by us at OpenAI. We encourage authors and users to create new repositories for these environments.
  • Changed MultiDiscrete action space to range from [0, ..., n-1] rather than [a, ..., b-1].
  • No more render(close=True), use env-specific methods to close the rendering.
  • Removed the scoreboard directory, since the site doesn't exist anymore.
  • Moved gym/monitoring to gym/wrappers/monitoring
  • Add dtype to Space.
  • Not using Python's built-in logging module anymore; using gym.logger
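
A hedged sketch of the non-underscored method names under this era's API; MyEnv is a hypothetical environment using the old four-value step and plain reset return:

```python
import gym
from gym import spaces

class MyEnv(gym.Env):
    def __init__(self):
        self.action_space = spaces.Discrete(2)
        self.observation_space = spaces.Discrete(1)

    def reset(self):                 # previously _reset
        return 0

    def step(self, action):          # previously _step
        return 0, 0.0, True, {}

    def render(self, mode="human"):  # previously _render
        pass
```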

v0.9.5#

Released on 2018-01-26 - GitHub - PyPI

v0.7.4#

Released on 2017-03-05 - GitHub - PyPI

v0.7.3#

Released on 2017-02-01 - GitHub - PyPI