Stemming and lemmatizing are NLP preprocessing techniques that reduce words to their root or base form, as discussed in NVIDIA’s Generative AI and LLMs course. In the context of computing TF-IDF (Term Frequency-Inverse Document Frequency) vectors, these techniques are beneficial because they collapse variant forms of a word (e.g., mapping “running” and “ran” to “run”) into a single token, reducing the number of unique tokens in the corpus. This lowers noise and dimensionality, improving the efficiency and effectiveness of TF-IDF representations for tasks such as document classification or clustering. Option B is incorrect: stemming and lemmatizing are about data preprocessing, not aesthetics. Option C is wrong because these techniques reduce, not increase, the number of unique tokens. Option D is inaccurate because they do not guarantee accuracy improvements; they reduce noise. The course states: “Stemming and lemmatizing reduce the number of unique tokens in a corpus by normalizing word forms, improving the quality of TF-IDF vectors by minimizing noise and dimensionality.”
[References: NVIDIA Building Transformer-Based Natural Language Processing Applications course; NVIDIA Introduction to Transformer-Based Natural Language Processing]
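For illustration, here is a minimal sketch (assuming NLTK and scikit-learn are installed; the two-document corpus is made up) that compares the TF-IDF vocabulary size with and without stemming, showing how normalizing word forms shrinks the number of unique tokens:

```python
# Minimal sketch: stemming before TF-IDF to shrink the vocabulary.
# Assumes nltk and scikit-learn are available; the corpus is illustrative only.
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "The runner was running while the other runners ran",
    "Runners run races; she ran a race too",
]

stemmer = PorterStemmer()

def stem_text(text: str) -> str:
    # Merge inflected forms such as "running"/"run" and "races"/"race"
    # into shared stems before vectorization.
    return " ".join(stemmer.stem(tok) for tok in text.lower().split())

raw_vec = TfidfVectorizer()
stem_vec = TfidfVectorizer()

raw_matrix = raw_vec.fit_transform(corpus)
stem_matrix = stem_vec.fit_transform([stem_text(doc) for doc in corpus])

# Fewer unique tokens after stemming -> lower-dimensional TF-IDF vectors.
print("raw vocabulary size:    ", len(raw_vec.vocabulary_))
print("stemmed vocabulary size:", len(stem_vec.vocabulary_))
```

Note that a rule-based stemmer will not catch irregular forms (e.g., “ran” is not mapped to “run”); a lemmatizer with part-of-speech information handles those cases, at the cost of more processing.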