Policy Improvement
Policy improvement is the process of improving a policy based on current value function estimates.

As with policy evaluation, policy improvement can work with either the state value function or the action value function. For DP methods, however, the state value function is used.
Now that you can estimate the state value function for any policy, a natural next step is to explore whether any policies are better than the current one. One way of doing this is to consider taking a different action $a$ in a state $s$, and following the current policy afterwards. If this sounds familiar, it's because this is similar to how we define the action value function:
$$q_\pi(s, a) = \sum_{s', r} p(s', r \mid s, a)\bigl(r + \gamma v_\pi(s')\bigr)$$

If this new value is greater than the original state value $v_\pi(s)$, it indicates that taking action $a$ in state $s$ and then continuing with policy $\pi$ leads to better outcomes than strictly following policy $\pi$. Since states are independent, it's optimal to always select action $a$ whenever state $s$ is encountered. Therefore, we can construct an improved policy $\pi'$, identical to $\pi$ except that it selects action $a$ in state $s$, which would be superior to the original policy $\pi$.
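This one-step lookahead is straightforward to compute when the dynamics are known. As a minimal sketch, assume the model is given as a dictionary `P` mapping `(state, action)` pairs to lists of `(probability, next_state, reward)` tuples (this representation is an assumption for illustration, not part of the lesson):

```python
def action_value(P, v, s, a, gamma=0.9):
    """One-step lookahead: estimate q_pi(s, a) from the current v_pi.

    P[(s, a)] is assumed to be a list of (prob, next_state, reward)
    tuples describing the dynamics p(s', r | s, a).
    """
    return sum(prob * (reward + gamma * v[s_next])
               for prob, s_next, reward in P[(s, a)])
```

Comparing `action_value(P, v, s, a)` against `v[s]` is exactly the test described above: if the lookahead value exceeds the state value, switching to action $a$ in state $s$ improves the policy.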
Policy Improvement Theorem
The reasoning described above can be generalized as the policy improvement theorem:
$$q_\pi(s, \pi'(s)) \ge v_\pi(s) \quad \forall s \in S \quad \implies \quad v_{\pi'}(s) \ge v_\pi(s) \quad \forall s \in S$$

The proof of this theorem is relatively simple, and can be achieved by repeated substitution:
$$
\begin{aligned}
v_\pi(s) &\le q_\pi(s, \pi'(s)) \\
&= \mathbb{E}_{\pi'}\bigl[R_{t+1} + \gamma v_\pi(S_{t+1}) \mid S_t = s\bigr] \\
&\le \mathbb{E}_{\pi'}\bigl[R_{t+1} + \gamma q_\pi(S_{t+1}, \pi'(S_{t+1})) \mid S_t = s\bigr] \\
&= \mathbb{E}_{\pi'}\bigl[R_{t+1} + \gamma \mathbb{E}_{\pi'}[R_{t+2} + \gamma v_\pi(S_{t+2})] \mid S_t = s\bigr] \\
&= \mathbb{E}_{\pi'}\bigl[R_{t+1} + \gamma R_{t+2} + \gamma^2 v_\pi(S_{t+2}) \mid S_t = s\bigr] \\
&\;\;\vdots \\
&\le \mathbb{E}_{\pi'}\bigl[R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \dots \mid S_t = s\bigr] \\
&= v_{\pi'}(s)
\end{aligned}
$$

Improvement Strategy
While updating actions for certain states can lead to improvements, it's more effective to update actions for all states simultaneously. Specifically, for each state $s$, select the action $a$ that maximizes the action value $q_\pi(s, a)$:
$$\pi'(s) = \arg\max_a q_\pi(s, a) = \arg\max_a \sum_{s', r} p(s', r \mid s, a)\bigl(r + \gamma v_\pi(s')\bigr)$$

where $\arg\max$ (short for argument of the maximum) is an operator that returns the value of the variable that maximizes a given function.
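The greedy update above can be sketched in a few lines. Assuming the same hypothetical model representation as before (`P[(s, a)]` as a list of `(prob, next_state, reward)` tuples), one possible implementation is:

```python
def improve_policy(P, v, states, actions, gamma=0.9):
    """Greedy policy improvement: for every state, pick the action
    maximizing the one-step lookahead value (the argmax above)."""
    policy = {}
    for s in states:
        policy[s] = max(
            actions,
            key=lambda a: sum(prob * (r + gamma * v[s_next])
                              for prob, s_next, r in P[(s, a)]),
        )
    return policy
```

Note that ties between actions can be broken arbitrarily; any maximizing action yields a policy satisfying the policy improvement theorem.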
The resulting greedy policy, denoted by $\pi'$, satisfies the conditions of the policy improvement theorem by construction, guaranteeing that $\pi'$ is at least as good as the original policy $\pi$, and typically better.
If $\pi'$ is as good as, but not better than, $\pi$, then both $\pi'$ and $\pi$ are optimal policies, as their value functions are equal and satisfy the Bellman optimality equation:
$$v_\pi(s) = \max_a \sum_{s', r} p(s', r \mid s, a)\bigl(r + \gamma v_\pi(s')\bigr)$$
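This optimality condition can also serve as a stopping test: when greedy improvement no longer changes the value function, the policy is optimal. A sketch of such a check, under the same hypothetical `P` model format as the earlier snippets:

```python
def is_optimal(P, v, states, actions, gamma=0.9, tol=1e-8):
    """Check whether v satisfies the Bellman optimality equation:
    v(s) == max_a sum_{s', r} p(s', r | s, a) * (r + gamma * v(s'))."""
    for s in states:
        best = max(
            sum(prob * (r + gamma * v[s_next])
                for prob, s_next, r in P[(s, a)])
            for a in actions
        )
        if abs(v[s] - best) > tol:
            return False
    return True
```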