Effects on Vector Magnitude and Direction
When you normalize a feature vector, you change its magnitude — also called its length — without altering its direction in feature space. This means that while the vector will still point in the same orientation, its size will be scaled according to the normalization method you use. For many machine learning algorithms, especially those relying on distances or dot products, the magnitude of vectors can heavily influence results. By normalizing vectors, you ensure that each one contributes equally, regardless of its original scale.
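To make this concrete, here is a small sketch (the vectors here are illustrative, not from any dataset): scaling a vector changes its magnitude and its dot products, but leaves its direction untouched.

```python
import numpy as np

# A raw feature vector and a scaled copy of it point the same way,
# but their magnitudes differ.
v = np.array([3.0, 4.0])
w = 10 * v  # same direction, ten times the magnitude

print(np.linalg.norm(v))  # 5.0
print(np.linalg.norm(w))  # 50.0

# Direction can be measured as the angle with the x-axis; it is unchanged.
print(np.arctan2(v[1], v[0]) == np.arctan2(w[1], w[0]))  # True

# A dot product with a third vector scales with the magnitude,
# which is why unnormalized scale can dominate distance- and
# dot-product-based algorithms.
u = np.array([1.0, 0.0])
print(v @ u, w @ u)  # 3.0 30.0
```

This is the distortion normalization removes: after rescaling both vectors to the same length, only their shared direction remains.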
Consider a two-dimensional feature vector. If you apply L2 normalization, you rescale the vector so its length becomes exactly one, but the angle it makes with the axes remains unchanged. This process projects all vectors onto the unit circle (in higher dimensions, the surface of a unit hypersphere), preserving their direction but standardizing their length. The result is that no single feature dominates due to its scale, and comparisons between vectors become more meaningful.
import numpy as np
import matplotlib.pyplot as plt

# Original vectors
vectors = np.array([
    [3, 4],
    [1, 7],
    [6, 2]
])

# L2 normalization
def l2_normalize(v):
    norm = np.linalg.norm(v)
    return v / norm if norm != 0 else v

normalized_vectors = np.array([l2_normalize(v) for v in vectors])

# Plotting
fig, ax = plt.subplots(figsize=(7, 7))
origin = np.zeros(2)

# Plot original vectors
for v in vectors:
    ax.arrow(*origin, *v, head_width=0.2, head_length=0.3,
             fc='blue', ec='blue', alpha=0.5, length_includes_head=True)
    ax.text(v[0]*1.05, v[1]*1.05, f"{v}", color='blue')

# Plot normalized vectors
for v in normalized_vectors:
    ax.arrow(*origin, *v, head_width=0.08, head_length=0.12,
             fc='red', ec='red', alpha=0.8, length_includes_head=True)
    ax.text(v[0]*1.1, v[1]*1.1, f"{np.round(v, 2)}", color='red')

ax.set_xlim(-1, 8)
ax.set_ylim(-1, 8)
ax.set_aspect('equal')
ax.grid(True)
ax.set_title("Vectors Before (blue) and After (red) L2 Normalization")
plt.xlabel("Feature 1")
plt.ylabel("Feature 2")
plt.show()
A vector has a unit norm when its length (or magnitude) is exactly one. In L2 normalization, every vector is rescaled to have a unit norm. This is significant because it ensures all vectors are on the same scale, making comparisons based on direction rather than magnitude.
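As a quick check (again with illustrative vectors), you can verify the unit-norm property directly, and see why it matters: for unit vectors, the dot product equals the cosine of the angle between them, so comparisons depend only on direction.

```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([6.0, 8.0])  # same direction as a, larger magnitude

a_hat = a / np.linalg.norm(a)
b_hat = b / np.linalg.norm(b)

# Both normalized vectors have unit norm.
print(np.linalg.norm(a_hat))  # 1.0
print(np.linalg.norm(b_hat))  # 1.0

# For unit vectors, the dot product is the cosine of the angle
# between them; a and b point the same way, so it is 1.
print(np.isclose(a_hat @ b_hat, 1.0))  # True
```

Before normalization, `a @ b` would have been 50, a number driven mostly by magnitude; after normalization, the same comparison cleanly reports that the two vectors are identical in direction.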