Principal Component Analysis Fundamentals

Comparing Model Performance Before and After PCA

PCA can be used as a preprocessing step before training machine learning models. In this chapter, you will compare the performance of a LogisticRegression classifier on the original standardized data and on data reduced to two principal components. This practical approach highlights how dimensionality reduction can impact both the effectiveness and efficiency of your models.

from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Load and standardize the iris dataset
data = load_iris()
X = data.data
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X_scaled, data.target, test_size=0.3, random_state=42
)

# Train logistic regression on the original standardized features
clf_orig = LogisticRegression(max_iter=200)
clf_orig.fit(X_train, y_train)
y_pred_orig = clf_orig.predict(X_test)
acc_orig = accuracy_score(y_test, y_pred_orig)

# Reduce to two principal components (fit PCA on the training data only)
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train)
X_test_pca = pca.transform(X_test)

# Train logistic regression on the PCA-reduced features
clf_pca = LogisticRegression(max_iter=200)
clf_pca.fit(X_train_pca, y_train)
y_pred_pca = clf_pca.predict(X_test_pca)
acc_pca = accuracy_score(y_test, y_pred_pca)

print(f"Accuracy on original data: {acc_orig:.2f}")
print(f"Accuracy after PCA (2 components): {acc_pca:.2f}")

The code above splits the data, trains a logistic regression model on both the original standardized features and the PCA-reduced features, and compares their test accuracies. A perfect accuracy of 1.0 on the original data can be a sign of overfitting: the model may fit the training data too closely and generalize poorly to new samples. Reducing dimensionality with PCA can help mitigate this. After PCA, accuracy drops slightly to 0.91, trading a small amount of accuracy for faster training and a more interpretable, lower-dimensional feature space.
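To see why two components preserve most of the predictive signal, you can inspect the explained variance ratio of a fitted PCA. Below is a minimal sketch that, for simplicity, refits PCA on the full standardized dataset rather than on the training split used above:

```python
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Standardize the full iris dataset and fit a 2-component PCA
X = StandardScaler().fit_transform(load_iris().data)
pca = PCA(n_components=2).fit(X)

# Fraction of total variance captured by each component,
# and their sum (about 96% for standardized iris)
print(pca.explained_variance_ratio_)
print(pca.explained_variance_ratio_.sum())
```

A high cumulative ratio like this explains why the classifier loses so little accuracy: most of the variation in the four original features is already captured by the first two components.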


What is a likely outcome of applying PCA to reduce features before training a classifier, as seen in the example above?



Section 1. Chapter 12
