removeStopWords

Remove stop words from documents

Words like "a", "and", "to", and "the" (known as stop words) can add noise to data. Use this function to remove stop words before analysis.

The function supports English, Japanese, and German text. To learn how to use removeStopWords for other languages, see Language Considerations.

Syntax

newDocuments = removeStopWords(documents)

Description


newDocuments = removeStopWords(documents) removes the stop words from the tokenizedDocument array documents.

Tip

Use removeStopWords before the normalizeWords function, because removeStopWords relies on word information that normalizeWords removes.
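For example, a minimal sketch of the recommended order (the input sentence is illustrative):

```matlab
documents = tokenizedDocument("The dogs were running quickly");

% Remove stop words first, while the original word forms are intact.
newDocuments = removeStopWords(documents);

% Then normalize the remaining words (for example, lemmatize or stem them).
newDocuments = normalizeWords(newDocuments);
```

Reversing the order can cause removeStopWords to miss words whose surface forms normalizeWords has already altered.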

Examples


Remove Stop Words from Documents

Remove the stop words from an array of documents using removeStopWords. The tokenizedDocument function detects that the documents are in English, so removeStopWords removes English stop words.

documents = tokenizedDocument([
    "an example of a short sentence" 
    "a second short sentence"]);
newDocuments = removeStopWords(documents)
newDocuments = 
  2x1 tokenizedDocument:

    3 tokens: example short sentence
    3 tokens: second short sentence

Remove Japanese Stop Words

Tokenize Japanese text using tokenizedDocument. The function automatically detects Japanese text.

str = [
    "ここは静かなので、とても穏やかです"
    "企業内の顧客データを利用し、今年の売り上げを調べることが出来た。"
    "私は先生です。私は英語を教えています。"];
documents = tokenizedDocument(str);

Remove stop words using removeStopWords. The function uses the language details from documents to determine which language stop words to remove.

documents = removeStopWords(documents)
documents = 
  3x1 tokenizedDocument:

     4 tokens: 静か 、 とても 穏やか
    10 tokens: 企業 顧客 データ 利用 、 今年 売り上げ 調べる 出来 。
     5 tokens: 先生 。 英語 教え 。

Remove German Stop Words

Tokenize German text using tokenizedDocument. The function automatically detects German text.

str = [
    "Guten Morgen. Wie geht es dir?"
    "Heute wird ein guter Tag."];
documents = tokenizedDocument(str)
documents = 
  2x1 tokenizedDocument:

    8 tokens: Guten Morgen . Wie geht es dir ?
    6 tokens: Heute wird ein guter Tag .

Remove stop words using the removeStopWords function. The function uses the language details from documents to determine which language stop words to remove.

documents = removeStopWords(documents)
documents = 
  2x1 tokenizedDocument:

    5 tokens: Guten Morgen . geht ?
    5 tokens: Heute wird guter Tag .

Input Arguments


documents — Input documents

Input documents, specified as a tokenizedDocument array.

Output Arguments


newDocuments — Output documents

Output documents, returned as a tokenizedDocument array.

More About


Language Considerations

The stopWords and removeStopWords functions support English, Japanese, and German stop words only.

To remove stop words from other languages, use removeWords and specify your own stop words to remove.
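For example, a sketch of this approach for French text; the word list below is illustrative and deliberately incomplete, not an official stop word list:

```matlab
documents = tokenizedDocument("il y a un exemple d'une phrase courte");

% Hypothetical, incomplete list of French stop words for illustration only.
frenchStopWords = ["il" "y" "a" "un" "une" "d'" "le" "la"];

% removeWords removes the specified words from each document.
newDocuments = removeWords(documents, frenchStopWords);
```

For a real application, supply a curated stop word list for the target language.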

Algorithms


Language Details

tokenizedDocument objects contain details about the tokens including language details. The language details of the input documents determine the behavior of removeStopWords. The tokenizedDocument function, by default, automatically detects the language of the input text. To specify the language details manually, use the 'Language' name-value pair argument of tokenizedDocument. To view the token details, use the tokenDetails function.
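For example, a sketch of overriding automatic detection and inspecting the stored token details (the input string is illustrative):

```matlab
% Specify the language manually instead of relying on automatic detection.
documents = tokenizedDocument("Guten Morgen", 'Language', 'de');

% View per-token details; removeStopWords uses the stored language details.
tdetails = tokenDetails(documents);
```

With 'Language' set to 'de', removeStopWords removes German stop words regardless of what automatic detection would have chosen.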

Introduced in R2018b