Visualizing Normalized Vectors | Normalization Techniques
Feature Scaling and Normalization Deep Dive

Visualizing Normalized Vectors

Understanding how normalization transforms data in two-dimensional space is essential for grasping its impact on machine learning workflows. When you normalize a set of 2D points, you change not only their scale but often their orientation relative to the origin. Suppose you have several points scattered across a plane — each with its own magnitude (distance from the origin) and direction. Applying normalization, such as L2 normalization, adjusts each point so that it lies on a unit circle centered at the origin. This operation preserves the direction of each vector from the origin but forces all points to have the same magnitude, making them directly comparable regardless of their original scale. Such a transformation is particularly useful when the absolute scale of vectors is irrelevant, but their directions encode meaningful information.

import numpy as np
import matplotlib.pyplot as plt

# Original 2D points
points = np.array([
    [3, 4],
    [1, 7],
    [5, 2],
    [6, 9],
    [8, 1]
])

# Compute L2 norms for each point
norms = np.linalg.norm(points, axis=1, keepdims=True)

# L2-normalize each point
normalized_points = points / norms

# Plotting
plt.figure(figsize=(7, 7))
plt.scatter(points[:, 0], points[:, 1], color='blue', label='Original Points')
plt.scatter(normalized_points[:, 0], normalized_points[:, 1], color='red', label='Normalized Points')
plt.xlabel('X')
plt.ylabel('Y')
plt.title('Original vs. L2-Normalized 2D Points')
plt.legend()
plt.grid(True)
plt.axis('equal')
plt.show()
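To verify numerically what the plot shows, you can check that every normalized vector has magnitude 1 and that normalization leaves each vector's direction unchanged. This is a quick sanity check on the same points, not part of the lesson's plotting code:

```python
import numpy as np

points = np.array([[3, 4], [1, 7], [5, 2], [6, 9], [8, 1]])
norms = np.linalg.norm(points, axis=1, keepdims=True)
normalized = points / norms

# Every normalized vector should have unit magnitude
print(np.linalg.norm(normalized, axis=1))  # -> [1. 1. 1. 1. 1.]

# Direction is preserved: each normalized point is a positive scalar
# multiple of the original, so the angle from the origin is unchanged
angles_before = np.arctan2(points[:, 1], points[:, 0])
angles_after = np.arctan2(normalized[:, 1], normalized[:, 0])
print(np.allclose(angles_before, angles_after))  # -> True
```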
Note

Distance-based algorithms, such as k-nearest neighbors, rely heavily on how distances are measured between data points. If features are on different scales, the algorithm may be biased toward features with larger ranges, distorting similarity assessments. Normalizing vectors ensures that each point contributes equally to distance calculations, making the analysis fair and meaningful regardless of the original feature scales.
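As a small illustration of this scale effect (a sketch with made-up vectors, not taken from the lesson), consider two vectors that point in the same direction but differ only in magnitude. Before normalization, raw Euclidean distance treats them as far apart; after L2 normalization, only direction matters:

```python
import numpy as np

def l2_normalize(v):
    """Scale a vector to unit length (assumes a nonzero vector)."""
    return v / np.linalg.norm(v)

a = np.array([3.0, 4.0])
b = np.array([30.0, 40.0])  # same direction as a, 10x the magnitude
c = np.array([4.0, 3.0])    # similar magnitude to a, different direction

# Raw Euclidean distances: b looks far from a purely because of scale
print(np.linalg.norm(a - b))  # -> 45.0
print(np.linalg.norm(a - c))  # -> ~1.414

# After normalization, a and b coincide; only direction matters
na, nb, nc = map(l2_normalize, (a, b, c))
print(np.linalg.norm(na - nb))  # -> 0.0
print(np.linalg.norm(na - nc))  # -> ~0.283
```

A distance-based method like k-nearest neighbors would therefore rank `b` as the closest neighbor of `a` after normalization, even though it looked like the farthest one on the raw scale.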


Which of the following statements best describes the effect of L2 normalization as visualized in the plot above?

Select the correct answer


Section 2. Chapter 3

