Transformer Architecture

Challenge: Implementing Multi-Head Attention


Task

You have all the building blocks: scaled dot-product attention from the previous challenge, and the intuition behind multiple heads from the last chapter. Now put them together.

Implement a MultiHeadAttention module as an nn.Module class. It should:

  1. Accept d_model and num_heads as constructor arguments, and assert that d_model % num_heads == 0;
  2. Define separate linear projections for Q, K, V, and a final output projection;
  3. In forward(x), split the projections into num_heads heads of dimension d_model // num_heads;
  4. Run scaled dot-product attention independently per head;
  5. Concatenate the head outputs and pass through the output projection.

Implement the module locally.
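
Below is a minimal sketch of one possible solution, assuming PyTorch and inputs shaped (batch, seq_len, d_model). The scaled dot-product attention step is inlined here rather than imported from the previous challenge, masking is omitted, and the variable names and the shape check at the bottom are illustrative only.

```python
# A hedged sketch, not a reference solution: PyTorch assumed,
# no masking or dropout, attention inlined per the steps above.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiHeadAttention(nn.Module):
    def __init__(self, d_model: int, num_heads: int):
        super().__init__()
        # Each head works on a slice of size d_model // num_heads,
        # so d_model must divide evenly by num_heads.
        assert d_model % num_heads == 0, "d_model must be divisible by num_heads"
        self.num_heads = num_heads
        self.d_head = d_model // num_heads

        # Separate projections for Q, K, V, plus the final output projection.
        self.w_q = nn.Linear(d_model, d_model)
        self.w_k = nn.Linear(d_model, d_model)
        self.w_v = nn.Linear(d_model, d_model)
        self.w_o = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        batch, seq_len, _ = x.shape

        # Project, then split the last dimension into heads:
        # (batch, seq_len, d_model) -> (batch, num_heads, seq_len, d_head)
        def split_heads(t: torch.Tensor) -> torch.Tensor:
            return t.view(batch, seq_len, self.num_heads, self.d_head).transpose(1, 2)

        q = split_heads(self.w_q(x))
        k = split_heads(self.w_k(x))
        v = split_heads(self.w_v(x))

        # Scaled dot-product attention, applied independently per head.
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_head)
        weights = F.softmax(scores, dim=-1)
        context = weights @ v  # (batch, num_heads, seq_len, d_head)

        # Concatenate heads back to (batch, seq_len, d_model) and project.
        context = context.transpose(1, 2).contiguous().view(batch, seq_len, -1)
        return self.w_o(context)


# Quick shape check with illustrative sizes.
if __name__ == "__main__":
    mha = MultiHeadAttention(d_model=64, num_heads=8)
    out = mha(torch.randn(2, 10, 64))
    print(out.shape)  # torch.Size([2, 10, 64])
```

Splitting heads with a view followed by a transpose keeps every head inside one batched matrix multiply, so the per-head attention runs without an explicit Python loop over heads.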
