Course Content
Introduction to NLP
Challenge: Creating Word Embeddings
Now it's time for you to train a Word2Vec model to generate word embeddings for the given corpus:

- Import the class for creating a Word2Vec model.
- Tokenize each sentence in the `'Document'` column of the `corpus` by splitting each sentence into words separated by whitespace. Store the result in the `sentences` variable.
- Initialize the Word2Vec model by passing `sentences` as the first argument and setting the following values as keyword arguments, in this order:
  - embedding size: 50;
  - context window size: 2;
  - minimum frequency of words to include in the model: 1;
  - model: skip-gram.
- Print the top 3 most similar words to the word 'bowl'.