Effects on Vector Magnitude and Direction | Normalization Techniques
Feature Scaling and Normalization Deep Dive

Effects on Vector Magnitude and Direction

When you normalize a feature vector, you change its magnitude — also called its length — without altering its direction in feature space. This means that while the vector will still point in the same orientation, its size will be scaled according to the normalization method you use. For many machine learning algorithms, especially those relying on distances or dot products, the magnitude of vectors can heavily influence results. By normalizing vectors, you ensure that each one contributes equally, regardless of its original scale.

Consider a two-dimensional feature vector. If you apply L2 normalization, you rescale the vector so its length becomes exactly one, but the angle it makes with the axes remains unchanged. This process projects all vectors onto the surface of a unit hypersphere, preserving their direction but standardizing their length. The result is that no single feature dominates due to its scale, and comparisons between vectors become more meaningful.
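As a minimal numeric check of this claim (standalone NumPy, separate from the plotting example below), you can verify that L2 normalization sets the length to one while leaving the angle with the x-axis untouched:

```python
import numpy as np

v = np.array([3.0, 4.0])

# L2 norm: sqrt(3^2 + 4^2) = 5
norm = np.linalg.norm(v)
unit = v / norm  # [0.6, 0.8]

# Length is now one (up to floating-point rounding)...
print(np.linalg.norm(unit))

# ...but the angle with the x-axis is unchanged
print(np.arctan2(v[1], v[0]))
print(np.arctan2(unit[1], unit[0]))
```

Both `arctan2` calls print the same angle, confirming that only the magnitude changed.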

```python
import numpy as np
import matplotlib.pyplot as plt

# Original vectors
vectors = np.array([
    [3, 4],
    [1, 7],
    [6, 2]
])

# L2 normalization
def l2_normalize(v):
    norm = np.linalg.norm(v)
    return v / norm if norm != 0 else v

normalized_vectors = np.array([l2_normalize(v) for v in vectors])

# Plotting
fig, ax = plt.subplots(figsize=(7, 7))
origin = np.zeros(2)

# Plot original vectors
for v in vectors:
    ax.arrow(*origin, *v, head_width=0.2, head_length=0.3,
             fc='blue', ec='blue', alpha=0.5, length_includes_head=True)
    ax.text(v[0]*1.05, v[1]*1.05, f"{v}", color='blue')

# Plot normalized vectors
for v in normalized_vectors:
    ax.arrow(*origin, *v, head_width=0.08, head_length=0.12,
             fc='red', ec='red', alpha=0.8, length_includes_head=True)
    ax.text(v[0]*1.1, v[1]*1.1, f"{np.round(v, 2)}", color='red')

ax.set_xlim(-1, 8)
ax.set_ylim(-1, 8)
ax.set_aspect('equal')
ax.grid(True)
ax.set_title("Vectors Before (blue) and After (red) L2 Normalization")
plt.xlabel("Feature 1")
plt.ylabel("Feature 2")
plt.show()
```
Definition

A vector has a unit norm when its length (or magnitude) is exactly one. In L2 normalization, every vector is rescaled to have a unit norm. This matters because it puts all vectors on the same scale, so comparisons between them depend on direction rather than magnitude.
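One practical consequence of unit norms (a small sketch, using the same `l2_normalize` helper as the example above): for unit-norm vectors, the plain dot product equals the cosine similarity, so direction alone drives the comparison.

```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([6.0, 2.0])

def l2_normalize(v):
    norm = np.linalg.norm(v)
    return v / norm if norm != 0 else v

a_u, b_u = l2_normalize(a), l2_normalize(b)

# Cosine similarity of the original vectors
cos_sim = a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))

# For unit-norm vectors, the dot product is the cosine similarity
print(cos_sim)
print(a_u.dot(b_u))  # same value
```

Both prints show the same number: normalizing removed the magnitudes without changing how similar the two directions are.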


What is the main advantage of normalizing feature vectors in machine learning algorithms?



Section 2. Chapter 2
