Learn Challenge: Q-table Update with SARSA | Classic RL Algorithms: Q-learning & SARSA
Hands-On Classic RL Algorithms with Python
Section 1. Chapter 7
Challenge: Q-table Update with SARSA

Task

Given a sequence of state-action pairs, update the Q-table using the SARSA rule.

You are provided with a Q-table, a sequence of (state, action) pairs, a learning rate (alpha), a discount factor (gamma), and a list of rewards, where each reward corresponds to one transition in the sequence.

  • For each consecutive pair in the state-action sequence, update the Q-value for the current (state, action) using the SARSA update rule.
  • Use the corresponding reward for each state-action transition.
  • Do not update the final state-action pair, as there is no next state-action following it.
  • Apply the SARSA update: Q[state, action] = Q[state, action] + alpha * (reward + gamma * Q[next_state, next_action] - Q[state, action]).
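The steps above can be sketched as a short Python function. This is a minimal illustration, not the challenge's reference solution; the function name `sarsa_update` and the argument names are assumptions, and the Q-table is assumed to be a 2-D NumPy array indexed as `Q[state, action]`.

```python
import numpy as np

def sarsa_update(Q, sa_pairs, rewards, alpha, gamma):
    """Apply the SARSA update along a recorded (state, action) trajectory.

    Q        : 2-D array of Q-values, indexed as Q[state, action]
    sa_pairs : list of (state, action) tuples, in visit order
    rewards  : rewards[i] is the reward received after sa_pairs[i]
    """
    # Stop one pair early: the final (state, action) has no successor,
    # so it is not updated.
    for i in range(len(sa_pairs) - 1):
        state, action = sa_pairs[i]
        next_state, next_action = sa_pairs[i + 1]
        # SARSA rule: move Q[s, a] toward r + gamma * Q[s', a']
        td_target = rewards[i] + gamma * Q[next_state, next_action]
        Q[state, action] += alpha * (td_target - Q[state, action])
    return Q
```

For example, with `Q = np.zeros((3, 2))`, the trajectory `[(0, 0), (1, 1), (2, 0)]`, rewards `[1.0, 0.5]`, `alpha = 0.5`, and `gamma = 0.9`: the first update sets `Q[0, 0]` to `0.5`, the second sets `Q[1, 1]` to `0.25`, and the final pair `(2, 0)` is left at `0.0`.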

Solution
