Conducting Exploratory Data Analysis of Nobel Prizes
Motivation Analysis
In this section, we will examine the text to identify the most prevalent words in our dataset. First, we will remove all stopwords from the "motivation" column and update our data accordingly.
Take, for instance, the sentence: "I like reading, so I read." After stopword removal it becomes: "Like Reading Read." Following this transformation, we will visualize these words in a word cloud, where the size of each word reflects its frequency in our dataset.
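To make the transformation concrete, here is a minimal sketch of stopword removal applied to the example sentence. It assumes NLTK's English stopword list (the course may use a different list or library); run `nltk.download('stopwords')` once before using it.

```python
from nltk.corpus import stopwords  # requires nltk.download('stopwords') once

stop_words = set(stopwords.words('english'))  # includes "i", "so", "the", ...

sentence = "I like reading, so I read."

# Keep only the words whose lowercase, punctuation-stripped form
# is not in the stopword list.
filtered = ' '.join(
    word for word in sentence.split()
    if word.strip('.,').lower() not in stop_words
)
print(filtered)  # -> "like reading, read."
```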
Task
- Apply a lambda function to remove stopwords from the 'motivation' column and store the processed text in the 'Filtered motivation' column.
- Concatenate all entries in the 'Filtered motivation' column to form a single text string.
- Split the concatenated text into individual words and create a pandas DataFrame from the list of words.
- Calculate word frequency by counting occurrences of each word.
- Plot the 20 most common words using seaborn's barplot.
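The steps above map to a handful of pandas and seaborn calls. Below is a sketch of the whole pipeline, assuming the Nobel Prize data is already loaded into a DataFrame named `df` and that the column names match those in the task; adapt the names to your dataset.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from nltk.corpus import stopwords  # requires nltk.download('stopwords') once

stop_words = set(stopwords.words('english'))

# 1. Remove stopwords from the 'motivation' column with a lambda.
df['Filtered motivation'] = df['motivation'].fillna('').apply(
    lambda text: ' '.join(
        word for word in str(text).split() if word.lower() not in stop_words
    )
)

# 2. Concatenate all filtered motivations into one string.
all_text = ' '.join(df['Filtered motivation'])

# 3. Split the text into individual words and build a DataFrame.
words_df = pd.DataFrame({'word': all_text.split()})

# 4. Count how often each word occurs and keep the 20 most common.
word_counts = (
    words_df['word']
    .value_counts()
    .head(20)
    .reset_index()
)
word_counts.columns = ['word', 'count']

# 5. Plot the 20 most common words with seaborn's barplot.
sns.barplot(data=word_counts, x='count', y='word')
plt.title('20 most common words in prize motivations')
plt.show()
```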