Learn Challenge: Modify Exploration Rate | Classic RL Algorithms: Q-learning & SARSA
Hands-On Classic RL Algorithms with Python
Section 1. Chapter 4
Challenge: Modify Exploration Rate


Task


Modify the Q-learning implementation to use the exploration_rate parameter for controlling action selection during training. This challenge builds on your previous work with Q-learning by introducing the concept of exploration versus exploitation.

  • Use the exploration_rate argument to determine whether to select a random action or the best-known action at each step.
  • When a random value is less than exploration_rate, select a random action.
  • Otherwise, select the action with the highest value from the Q-table for the current state.
  • Ensure the rest of the Q-learning algorithm remains unchanged.
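The action-selection rule described above is the standard epsilon-greedy strategy. A minimal sketch of how it could look is shown below; the helper name `choose_action` and the 5-state chain environment are illustrative assumptions, not part of the course's starter code.

```python
import random

random.seed(0)  # for reproducibility of this sketch

def choose_action(q_table, state, exploration_rate):
    """Epsilon-greedy selection: explore with probability exploration_rate."""
    n_actions = len(q_table[state])
    if random.random() < exploration_rate:
        # Explore: pick a uniformly random action
        return random.randrange(n_actions)
    # Exploit: pick the highest-valued action, breaking ties randomly
    best = max(q_table[state])
    return random.choice([a for a in range(n_actions) if q_table[state][a] == best])

# Toy 5-state chain (an assumption for illustration): action 1 moves right,
# action 0 moves left; reaching state 4 yields reward 1 and ends the episode.
N_STATES, N_ACTIONS = 5, 2
q_table = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
alpha, gamma, exploration_rate = 0.1, 0.9, 0.1

def step(state, action):
    next_state = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        action = choose_action(q_table, state, exploration_rate)
        next_state, reward, done = step(state, action)
        # Standard Q-learning update; only action selection uses exploration_rate
        best_next = max(q_table[next_state])
        q_table[state][action] += alpha * (reward + gamma * best_next - q_table[state][action])
        state = next_state
```

After training, the greedy policy at every non-terminal state should prefer moving right toward the goal; only the `choose_action` function touches `exploration_rate`, so the rest of the update rule stays exactly as before.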

Solution

