Data Compression | Analyzing Results
Principal Component Analysis

Course Contents

1. What Is Principal Component Analysis
2. Basic Concepts of PCA
3. Building the Model
4. Analyzing the Results

Data Compression

Before tackling data compression with PCA, it is important to understand the difference between data compression and dimensionality reduction. Dimensionality reduction is one type of data compression. Data compression methods fall into two main classes: those where the processed data can later be restored, and those where it cannot. Dimensionality reduction belongs to the second class: after processing the dataset, we cannot restore it to its original form. More precisely, we can reconstruct something, but it will not be the same data, only an approximation of the original dataset. PCA is therefore generally seen not as a way to save storage space, but as a way to make expensive operations cheaper while achieving a similar result.

Let's get back to the code. We can choose how much of the original dataset's variance we want to keep. The value of the n_components argument must be a number between 0 and 1; if we specify 0.85, 85% of the variance will be preserved.
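
As a minimal sketch of this idea (assuming scikit-learn's PCA, which accepts a float between 0 and 1 as n_components, and a made-up toy dataset), the snippet below keeps at least 85% of the variance and then shows that the "restored" data is only an approximation of the original:

import numpy as np
from sklearn.decomposition import PCA

# Hypothetical toy data: 200 samples, 10 correlated features plus some noise
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 10)) + 0.3 * rng.normal(size=(200, 10))

# Keep as many principal components as needed to preserve at least 85% of the variance
pca = PCA(n_components=0.85)
X_reduced = pca.fit_transform(X)

print(pca.n_components_)                    # number of components actually kept
print(pca.explained_variance_ratio_.sum())  # preserved share of variance, >= 0.85

# Lossy "restoration": only an approximation of the original X
X_restored = pca.inverse_transform(X_reduced)
print(np.abs(X - X_restored).max())         # nonzero reconstruction error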

Task

Create a PCA model for the iris dataset that preserves 90% of the variance:
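
One possible sketch of this task, assuming scikit-learn's PCA and the built-in iris dataset from sklearn.datasets:

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

# Load the four numeric iris features
X, _ = load_iris(return_X_y=True)

# Keep enough principal components to preserve at least 90% of the variance
pca = PCA(n_components=0.9)
X_reduced = pca.fit_transform(X)

print(pca.n_components_)                    # components kept to reach 90% variance
print(pca.explained_variance_ratio_.sum())  # total preserved variance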
