Introduction to NLP
Lemmatization
Lemmatization vs Stemming
First, let's define what lemmatization is and how it is different from stemming.
Unlike stemming, which crudely chops off word endings, lemmatization considers the context and converts the word to its meaningful base form. For example, 'am', 'are', and 'is' are all lemmatized into 'be'. This approach can significantly reduce the size of the vocabulary (the number of unique words) in large text corpora, thereby increasing efficiency when training models.
On the other hand, while lemmatization is more accurate than stemming, it is also more computationally expensive and can be time-consuming on large datasets. Furthermore, for even better accuracy, it is recommended to perform morphological analysis and part-of-speech tagging before lemmatization.
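To make the contrast concrete, here is a minimal sketch comparing NLTK's PorterStemmer with the WordNetLemmatizer on the same words (the word list is just an illustrative choice):

from nltk.stem import PorterStemmer, WordNetLemmatizer
import nltk

# Download the WordNet corpus required by the lemmatizer
nltk.download('wordnet')

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

words = ["studies", "running", "is"]

# Stemming chops suffixes and can produce non-words like 'studi'
print([stemmer.stem(word) for word in words])

# Lemmatization maps each word to a real base form ('v' treats each word as a verb)
print([lemmatizer.lemmatize(word, 'v') for word in words])

Note how the stemmer leaves 'is' untouched, while the lemmatizer resolves it to 'be'.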
Lemmatization with NLTK
The WordNet Lemmatizer, provided by the NLTK library, leverages the WordNet corpus to perform lemmatization.
When you use the WordNet Lemmatizer, it looks up the target word in the WordNet database to find the most appropriate lemma (base form) of the word.
As mentioned above, because words can have different meanings in different contexts (e.g., "running" as a verb vs. "running" as a noun), the lemmatizer often needs you to specify the part of speech (e.g., verb, noun, adjective) to select the correct lemma based on the word's role in a sentence. If no part of speech is given, it defaults to treating the word as a noun.
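For instance, here is a quick sketch showing how the POS argument changes the lemma for "running":

from nltk.stem import WordNetLemmatizer
import nltk

nltk.download('wordnet')

lemmatizer = WordNetLemmatizer()

# Treated as a verb, 'running' reduces to its base form
print(lemmatizer.lemmatize("running", 'v'))  # run

# Treated as a noun, 'running' is already a lemma and stays unchanged
print(lemmatizer.lemmatize("running", 'n'))  # running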
Let's now take a look at an example:
from nltk.stem import WordNetLemmatizer
import nltk

# Download the WordNet corpus
nltk.download('wordnet')

# Initialize the WordNet lemmatizer
lemmatizer = WordNetLemmatizer()

words = ["running", "bested"]

# Lemmatize words
lemmatized_words = [lemmatizer.lemmatize(word, 'v') for word in words]  # 'v' for verb

print("Lemmatized words:", lemmatized_words)
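Assuming both words are found in WordNet, this should print Lemmatized words: ['run', 'best'], since 'running' and 'bested' are both treated as verbs here.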
As you can see, from the coding perspective, the approach is rather straightforward. Once again, to lemmatize words accurately across an entire corpus, it is best to first perform part-of-speech (POS) tagging, which we will cover in the following chapter.
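As a brief preview of that workflow, the sketch below combines nltk.pos_tag with a small hypothetical helper, to_wordnet_pos, that maps Penn Treebank tags to the tag set the lemmatizer expects (depending on your NLTK version, the tagger resource may instead be named 'averaged_perceptron_tagger_eng'):

import nltk
from nltk.stem import WordNetLemmatizer
from nltk.corpus import wordnet

nltk.download('wordnet')
nltk.download('averaged_perceptron_tagger')

# Hypothetical helper: map Penn Treebank tags to WordNet POS tags
def to_wordnet_pos(treebank_tag):
    if treebank_tag.startswith('J'):
        return wordnet.ADJ
    if treebank_tag.startswith('V'):
        return wordnet.VERB
    if treebank_tag.startswith('R'):
        return wordnet.ADV
    return wordnet.NOUN  # default, matching the lemmatizer's own default

lemmatizer = WordNetLemmatizer()

tokens = ["children", "were", "running", "quickly"]
tagged_tokens = nltk.pos_tag(tokens)

lemmas = [lemmatizer.lemmatize(token, to_wordnet_pos(tag)) for token, tag in tagged_tokens]
print(lemmas)  # e.g. ['child', 'be', 'run', 'quickly']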
Your task is to lemmatize the tokens from the given text string. Tokenization has already been applied, with stop words filtered out. A possible solution sketch follows the checklist below.

- Import the WordNet lemmatizer.
- Download the WordNet corpus.
- Initialize the WordNet lemmatizer.
- Lemmatize the tokens using list comprehension.
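For reference, here is one possible solution sketch. The variable name tokens and its contents are assumptions for illustration, and the exercise may also expect an explicit POS argument:

from nltk.stem import WordNetLemmatizer
import nltk

# Download the WordNet corpus
nltk.download('wordnet')

# Initialize the WordNet lemmatizer
lemmatizer = WordNetLemmatizer()

# Assumed placeholder: tokenized text with stop words already removed
tokens = ["children", "playing", "games", "outside"]

# Lemmatize the tokens using a list comprehension (default POS is noun)
lemmatized_tokens = [lemmatizer.lemmatize(token) for token in tokens]
print(lemmatized_tokens)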