Challenge: Q-table Update with SARSA | Classic RL Algorithms: Q-learning & SARSA
Hands-On Classic RL Algorithms with Python
Section 1. Chapter 7
Challenge: Q-table Update with SARSA


Task


Given a sequence of state-action pairs, update the Q-table using the SARSA rule.

You are provided with a Q-table, a sequence of (state, action) pairs, a learning rate (alpha), a discount factor (gamma), and a list of rewards received after each transition.

  • For each consecutive pair in the state-action sequence, update the Q-value for the current (state, action) using the SARSA update rule.
  • Use the corresponding reward for each state-action transition.
  • Do not update the final state-action pair, as there is no next state-action following it.
  • Apply the SARSA update: Q[state, action] = Q[state, action] + alpha * (reward + gamma * Q[next_state, next_action] - Q[state, action]).
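The steps above can be sketched as follows. This is a minimal illustration, not the course's starter code: the variable names (`Q`, `state_action_pairs`, `rewards`) and the example values are assumptions.

```python
import numpy as np

alpha = 0.1   # learning rate (assumed value for illustration)
gamma = 0.9   # discount factor (assumed value for illustration)

# Hypothetical inputs: a 3-state, 2-action Q-table, a trajectory of
# (state, action) pairs, and the reward received after each transition.
Q = np.zeros((3, 2))
state_action_pairs = [(0, 1), (1, 0), (2, 1)]
rewards = [1.0, 0.5]

# Update every consecutive pair; the final (state, action) is skipped
# because no next state-action follows it.
for i in range(len(state_action_pairs) - 1):
    state, action = state_action_pairs[i]
    next_state, next_action = state_action_pairs[i + 1]
    reward = rewards[i]
    # SARSA rule: move Q[s, a] toward the bootstrapped target
    # reward + gamma * Q[s', a'].
    Q[state, action] += alpha * (
        reward + gamma * Q[next_state, next_action] - Q[state, action]
    )
```

Because the pairs are processed in order, an earlier update can feed into a later target only if the later pair revisits an already-updated entry; here each update uses the Q-values as they stand at that step.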

Solution


