seldonian.RL.Agents.Discrete_Random_Agent.Discrete_Random_Agent¶
- class Discrete_Random_Agent(env_description)¶
Bases:
Agent
- __init__(env_description)¶
An agent that acts on discrete observation and action spaces, picking actions according to a uniform random policy. It is not capable of learning.
- Parameters:
env_description (
Env_Description
) – an object for accessing attributes of the environment
- __repr__()¶
Return repr(self).
Methods
- choose_action(observation)¶
Choose action from discrete uniform random distribution. Does not use observation to inform action.
- Parameters:
observation – Environment-specific observation.
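A minimal sketch of this behavior (a standalone illustration, not the library's implementation; `num_actions` here stands in for the action-space size that `Env_Description` would supply):

```python
import random

def choose_action(observation, num_actions):
    # The observation is ignored: the action is drawn uniformly at
    # random from the discrete action set {0, ..., num_actions - 1}.
    return random.randrange(num_actions)

action = choose_action(observation=None, num_actions=4)
```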
- get_params()¶
Retrieve the parameters of the agent’s policy.
- get_policy()¶
Retrieve the agent’s policy object.
- get_prob_this_action(observation, action)¶
Get uniform random probability. Does not use observation or action.
- Parameters:
observation – The current observation of the agent.
action – The action taken by the agent from this observation.
- set_new_params(theta)¶
Update the parameters of the agent’s policy to theta.
- Parameters:
theta – policy parameters
- update(observation, next_observation, reward, terminated)¶
Updates the agent’s parameters according to the learning rule. Not used in this agent; to be overridden by learning agents.
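The methods above can be sketched together as a small stand-in class (an illustration under the assumption that the agent only needs the number of discrete actions; the real class reads this from an `Env_Description` object):

```python
import random

class UniformRandomAgent:
    """Illustrative stand-in for Discrete_Random_Agent (not the library class)."""

    def __init__(self, num_actions):
        # num_actions stands in for the action-space size that
        # Env_Description would provide in the real library.
        self.num_actions = num_actions

    def choose_action(self, observation):
        # Observation is ignored; actions are drawn uniformly at random.
        return random.randrange(self.num_actions)

    def get_prob_this_action(self, observation, action):
        # Under a uniform random policy every action has equal probability,
        # regardless of observation or action.
        return 1.0 / self.num_actions

    def update(self, observation, next_observation, reward, terminated):
        # The random agent does not learn; this is a no-op.
        pass

agent = UniformRandomAgent(num_actions=4)
a = agent.choose_action(observation=None)
p = agent.get_prob_this_action(None, a)
```

Because the policy is uniform, `get_prob_this_action` always returns `1 / num_actions`, which is what makes this agent useful as a known behavior policy in off-policy evaluation.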