Principal Component Analysis

Feature Vector and Principal Components

After we have obtained our principal components, we need to create a feature vector. Why do we need this new variable? At this stage we decide whether to keep all the components or to discard those that carry the least information. The feature vector is simply a matrix whose columns are the vectors of the most significant components we keep.

Thus, creating the feature vector is exactly the stage at which dimensionality reduction of the dataset occurs: if we decide to keep only p principal components out of n, the final dataset will have only p dimensions.

For example, we can reduce a feature vector with 2 components to a single component, as in the sketch below:
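Here is a minimal sketch of this step in NumPy. The data and the variable names (`X_std`, `feature_vector`, `p`) are made up for illustration and are not part of the course code: we standardize a small two-feature dataset, take the eigenvectors of its covariance matrix as the principal components, and keep only the most significant one as the feature vector.

```python
import numpy as np

rng = np.random.default_rng(0)
raw = rng.standard_normal((100, 2))
raw[:, 1] = 0.8 * raw[:, 0] + 0.2 * raw[:, 1]        # make the two features correlated
X_std = (raw - raw.mean(axis=0)) / raw.std(axis=0)   # standardized data

# Principal components = eigenvectors of the covariance matrix of the standardized data
eigenvalues, eigenvectors = np.linalg.eigh(np.cov(X_std, rowvar=False))

# Sort the components from most to least significant (largest eigenvalue first)
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# Feature vector: a matrix whose columns are the eigenvectors we decide to keep.
# Here we keep only p = 1 of the n = 2 components.
p = 1
feature_vector = eigenvectors[:, :p]                  # shape (2, 1)
print(feature_vector)
```

The resulting feature vector has shape (2, 1): two original features, one kept component.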

Finally, once we have the principal components, we can transform our data, i.e. reorient it from the original axes to the axes defined by the principal components. This is done simply by multiplying the transposed feature vector by the transposed standardized data:
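Continuing the sketch above (it reuses the illustrative `X_std` and `feature_vector` arrays defined there), this is one way the transformation can look in NumPy. The classic formulation multiplies the transposed feature vector by the transposed standardized data, which is equivalent to projecting each sample onto the kept components.

```python
# Reorient the data from the original axes to the principal component axes.
# Classic formulation: transposed feature vector times transposed standardized data,
# transposed back so that rows are samples again.
final_data = (feature_vector.T @ X_std.T).T           # shape (100, 1)

# Equivalent and more direct: project each sample onto the kept components
final_data_alt = X_std @ feature_vector
print(np.allclose(final_data, final_data_alt))        # True
print(final_data.shape)                               # (100, 1)
```

The result has one column instead of two, which is exactly the reduction from 2 dimensions to 1 described above.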

Quiz

From which dimensionality to which was the dataset in the image reduced?

Choose the correct option.

