Modeling Human Preferences: Distributions and Noise | Foundations of Human Feedback and Preferences
Reinforcement Learning from Human Feedback Theory


When you seek to align machine learning systems with human values, you must formally represent human preferences. At the most basic level, a preference relation describes when a human prefers one outcome over another. Formally, if you have two options, A and B, the relation A ≻ B means "A is preferred to B." In practice, human choices are rarely deterministic; instead, they exhibit variability due to uncertainty, ambiguity, or other factors. This motivates the use of stochastic choice models, which assign probabilities to each possible choice rather than treating preferences as fixed. For example, you might model the probability that a human prefers A to B as P(A ≻ B), which can be estimated from observed choices.
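A minimal sketch of this idea uses the Bradley-Terry model, a standard stochastic choice model in which P(A ≻ B) is a logistic function of latent utilities. The utility values below are illustrative assumptions, not taken from the text; the sketch simulates noisy choices and recovers P(A ≻ B) empirically:

```python
import math
import random

def bt_prob(u_a, u_b):
    """Bradley-Terry choice probability P(A ≻ B) given latent utilities."""
    return math.exp(u_a) / (math.exp(u_a) + math.exp(u_b))

random.seed(0)
u_a, u_b = 1.0, 0.0          # illustrative latent utilities (assumed values)
true_p = bt_prob(u_a, u_b)   # model probability that A is preferred

# Simulate n noisy human choices between A and B, then estimate
# P(A ≻ B) as the empirical fraction of times A was chosen.
n = 10_000
wins_a = sum(random.random() < true_p for _ in range(n))
est_p = wins_a / n
```

With enough observed comparisons, `est_p` converges to the model probability `true_p`, which is how stochastic choice models are fit from pairwise-comparison data in practice.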

To capture the full range of possible human behaviors, you introduce the concept of a preference distribution. This distribution describes the likelihood of each possible ranking or selection among a set of options. Such distributions allow you to account for both consistent and inconsistent preferences across different individuals or even within the same individual over time.
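To make the notion of a preference distribution concrete, the sketch below assigns a probability to every full ranking of three hypothetical options and derives the marginal pairwise probability P(x ≻ y) from it. The specific weights are illustrative assumptions chosen only so they sum to one:

```python
from itertools import permutations

options = ["A", "B", "C"]

# Hypothetical preference distribution: one probability per full ranking.
# A ranking tuple lists options from most to least preferred.
ranking_probs = {
    ("A", "B", "C"): 0.40,
    ("A", "C", "B"): 0.20,
    ("B", "A", "C"): 0.15,
    ("B", "C", "A"): 0.10,
    ("C", "A", "B"): 0.10,
    ("C", "B", "A"): 0.05,
}
assert set(ranking_probs) == set(permutations(options))
assert abs(sum(ranking_probs.values()) - 1.0) < 1e-9

def marginal_pref(x, y):
    """Marginal P(x ≻ y): total mass of rankings that place x before y."""
    return sum(p for ranking, p in ranking_probs.items()
               if ranking.index(x) < ranking.index(y))
```

Here `marginal_pref("A", "B")` sums the mass of the three rankings placing A above B (0.40 + 0.20 + 0.10 = 0.70), showing how a single distribution over rankings induces consistent pairwise choice probabilities while still permitting variability across individual observations.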
