Action Wrappers#
Action Wrapper#
- class gymnasium.ActionWrapper(env: Env)#
Superclass of wrappers that can modify the action before env.step().

If you would like to apply a function to the action before passing it to the base environment, you can simply inherit from ActionWrapper and overwrite the method action() to implement that transformation. The transformation defined in that method must take values in the base environment's action space. However, its domain might differ from the original action space. In that case, you need to specify the new action space of the wrapper by setting self.action_space in the __init__() method of your wrapper.

Let's say you have an environment with an action space of type gymnasium.spaces.Box, but you would only like to use a finite subset of actions. Then, you might want to implement the following wrapper:

import numpy as np

import gymnasium
from gymnasium.spaces import Discrete


class DiscreteActions(gymnasium.ActionWrapper):
    def __init__(self, env, disc_to_cont):
        super().__init__(env)
        self.disc_to_cont = disc_to_cont
        self.action_space = Discrete(len(disc_to_cont))

    def action(self, act):
        return self.disc_to_cont[act]


if __name__ == "__main__":
    env = gymnasium.make("LunarLanderContinuous-v2")
    wrapped_env = DiscreteActions(
        env,
        [np.array([1, 0]), np.array([-1, 0]), np.array([0, 1]), np.array([0, -1])],
    )
    print(wrapped_env.action_space)  # Discrete(4)
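As a rough usage sketch (assuming the DiscreteActions wrapper above and that the Box2D-based LunarLanderContinuous-v2 environment is installed), the wrapped environment can then be driven with plain integer actions:

import numpy as np

import gymnasium

env = gymnasium.make("LunarLanderContinuous-v2")
wrapped_env = DiscreteActions(
    env, [np.array([1, 0]), np.array([-1, 0]), np.array([0, 1]), np.array([0, -1])]
)

obs, info = wrapped_env.reset(seed=42)
for _ in range(100):
    act = wrapped_env.action_space.sample()  # an integer in {0, 1, 2, 3}
    # step() passes act through DiscreteActions.action() before the base env sees it
    obs, reward, terminated, truncated, info = wrapped_env.step(act)
    if terminated or truncated:
        obs, info = wrapped_env.reset()
wrapped_env.close()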
Among others, Gymnasium provides the action wrappers ClipAction and RescaleAction for clipping and rescaling actions.

Wraps an environment to allow a modular transformation of the step() and reset() methods.

- Parameters:
env – The environment to wrap
- action(self, action)#
Returns a modified action before env.step() is called (a minimal override sketch follows this entry).

- Parameters:
action – The original step() actions

- Returns:
The modified actions
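A minimal sketch of such an override (the wrapper name and noise scale here are hypothetical, not part of Gymnasium): because the action space is unchanged, only action() needs to be defined, and step() applies it automatically.

import numpy as np

import gymnasium


class GaussianNoiseActions(gymnasium.ActionWrapper):
    """Hypothetical wrapper that perturbs continuous actions with Gaussian noise."""

    def __init__(self, env, sigma=0.05):
        super().__init__(env)
        self.sigma = sigma  # the action space is unchanged, so self.action_space stays as-is

    def action(self, act):
        # Called by step() before the action reaches the base environment;
        # the result must stay inside the base environment's Box bounds.
        noisy = act + np.random.normal(0.0, self.sigma, size=np.shape(act))
        return np.clip(noisy, self.env.action_space.low, self.env.action_space.high)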
Clip Action#
- class gymnasium.wrappers.ClipAction(env: Env)#
Clip the continuous action within the valid Box action space bounds.

Example

>>> import gymnasium as gym
>>> import numpy as np
>>> from gymnasium.wrappers import ClipAction
>>> env = gym.make('BipedalWalker-v3')
>>> env = ClipAction(env)
>>> env.action_space
Box(-1.0, 1.0, (4,), float32)
>>> _ = env.reset(seed=42)
>>> env.step(np.array([5.0, 2.0, -10.0, 0.0]))  # Executes the action np.array([1.0, 1.0, -1.0, 0.0]) in the base environment
A wrapper for clipping continuous actions within the valid bounds.
- Parameters:
env – The environment to apply the wrapper
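Conceptually, ClipAction behaves like the following hand-rolled ActionWrapper (an illustrative sketch, not the library's actual implementation):

import numpy as np

import gymnasium


class ManualClipAction(gymnasium.ActionWrapper):
    """Illustrative stand-in for gymnasium.wrappers.ClipAction."""

    def action(self, act):
        # Clip each component to the base environment's Box bounds.
        return np.clip(act, self.env.action_space.low, self.env.action_space.high)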
Rescale Action#
- class gymnasium.wrappers.RescaleAction(env: Env, min_action: Union[float, int, ndarray], max_action: Union[float, int, ndarray])#
Affinely rescales the continuous action space of the environment to the range [min_action, max_action].
The base environment env must have an action space of type spaces.Box. If min_action or max_action are numpy arrays, the shape must match the shape of the environment's action space.

Example

>>> import gymnasium as gym
>>> import numpy as np
>>> from gymnasium.wrappers import RescaleAction
>>> env = gym.make('BipedalWalker-v3')
>>> env.action_space
Box(-1.0, 1.0, (4,), float32)
>>> min_action = -0.5
>>> max_action = np.array([0.0, 0.5, 1.0, 0.75])
>>> env = RescaleAction(env, min_action=min_action, max_action=max_action)
>>> env.action_space
Box(-0.5, [0.   0.5  1.   0.75], (4,), float32)
>>> RescaleAction(env, min_action, max_action).action_space == gym.spaces.Box(min_action, max_action)
True
Initializes the RescaleAction wrapper.

- Parameters:
env (Env) – The environment to apply the wrapper
min_action (float, int or np.ndarray) – The min values for each action. This may be a numpy array or a scalar.
max_action (float, int or np.ndarray) – The max values for each action. This may be a numpy array or a scalar.
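For intuition, the affine mapping applied by RescaleAction is roughly the following (a sketch of the rescaling formula; the helper name is made up, and the real wrapper also validates and clips its inputs):

import numpy as np


def rescale(action, min_action, max_action, low, high):
    """Map an action from [min_action, max_action] onto the base env's [low, high]."""
    fraction = (action - min_action) / (max_action - min_action)
    return low + fraction * (high - low)


# E.g. with the wrapper range [-0.5, 0.5] and a base Box(-1.0, 1.0):
print(rescale(np.array([0.0, 0.25]), -0.5, 0.5, -1.0, 1.0))  # [0.  0.5]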