Principal Component Analysis

Feature Vector and Principal Components

Once we have computed the principal components, the next step is to create a feature vector. Why do we need this new object? At this stage, we decide whether to keep all components or discard those that carry the least information. The feature vector is simply a matrix whose columns are the eigenvectors of the retained, most significant components.

Thus, constructing the feature vector is exactly the stage at which dimensionality reduction occurs: if we decide to keep only p of the n principal components, the final dataset will have only p dimensions.
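A minimal sketch of this step with NumPy, using a small illustrative dataset (the values and variable names are assumptions for demonstration): we compute the eigenvectors of the covariance matrix, sort them by eigenvalue, and keep only the p most significant ones as the feature vector.

```python
import numpy as np

# Hypothetical 2D dataset (illustrative values only)
X = np.array([[2.5, 2.4],
              [0.5, 0.7],
              [2.2, 2.9],
              [1.9, 2.2],
              [3.1, 3.0]])

# Standardize the data (zero mean, unit variance)
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# Eigen-decomposition of the covariance matrix
cov = np.cov(X_std.T)
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# Sort components by explained variance, descending
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# Keep only p of the n components: the feature vector is the matrix
# whose columns are the p most significant eigenvectors
p = 1
feature_vector = eigenvectors[:, :p]
print(feature_vector.shape)  # (2, 1): from n = 2 dimensions down to p = 1
```

Discarding a column of `eigenvectors` here is the dimensionality reduction itself: whatever variance that component explained is lost in the final dataset.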

We can reduce a matrix with 2 components to 1 component:

Finally, once we have the principal components, we can transform the data, i.e., reorient it from the original axes to the axes defined by the principal components. This is done by multiplying the transposed feature vector by the transposed standardized data.
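The multiplication above can be sketched as follows. The inputs (a standardized data matrix and a feature vector with one retained component) use assumed illustrative values; in practice they would come from the previous steps.

```python
import numpy as np

# Assumed inputs: standardized data (samples in rows) and a feature
# vector holding the p retained eigenvectors as columns
X_std = np.array([[0.92, 0.61],
                  [-1.76, -1.52],
                  [0.52, 1.25],
                  [0.12, 0.37],
                  [0.20, -0.71]])
feature_vector = np.array([[0.71],
                           [0.71]])

# Reorient the data: multiply the transposed feature vector by the
# transposed standardized data, then transpose back to samples-in-rows
final_data = (feature_vector.T @ X_std.T).T
print(final_data.shape)  # (5, 1): each sample now has a single coordinate
```

Each row of `final_data` is the original sample expressed in the new coordinate system spanned by the retained principal components.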

Quiz

From which dimensionality to which was the dataset in the image transformed?



Section 2. Chapter 4

