Text Processing, Tokenization, & Characteristics
By Elmurod Kuriyozov
Previously
Text
Why the focus on text?
Text Documents
A digital text document consists of a sequence of words and other symbols, e.g., punctuation.
The individual words and other symbols are known as tokens or terms.
A textual document can be:
• Free text, also known as unstructured text, which is a
continuous sequence of tokens.
• Fielded text, also known as structured text, in which the text
is broken into sections that are distinguished by tags or other
markup.
Examples?
Text Based Information Retrieval
Most matching methods are based on Boolean operators.
Most ranking methods are based on the vector space model.
Web search methods combine vector space model with ranking based on importance of documents.
Many practical systems combine features of several approaches.
In the basic form, all approaches treat words as separate tokens with minimal attempt to interpret them linguistically.
A Typical Web Search Engine
[Diagram: Users ↔ Interface ↔ Query Engine ↔ Index ← Indexer ← Crawler ← Web]
Text processing
(preprocessing)
Pre-indexing
Focus on documents
Decide what is an individual document
Can vary depending on problem
Building an index
What is a Document?
What is Text?
Collection of text
Written corpora
                      Brown                        LOB
Time of compilation   1960s                        1970s
Compiled at           Brown University (US)        Lancaster, Oslo, Bergen
Language variety      Written American English     Written British English
Size                  1 million words (500 texts of 2000 words each)
Design                Balanced corpora; 15 genres of text, incl. press reportage,
                      editorials, reviews, religion, government documents, reports,
                      biographies, scientific writing, fiction
Text Processing – Lexical Analysis
Basic indexing pipeline
Documents to be indexed: "Friends, Romans, countrymen."
        ↓
Tokenizer → token stream: Friends · Romans · Countrymen
        ↓
Linguistic modules → modified tokens (terms): friend · roman · countryman
        ↓
Indexer → inverted index:
    friend     → 2 → 4
    roman      → 1 → 2
    countryman → 13 → 16
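The pipeline ends in an inverted index: a mapping from each term to the postings list of documents that contain it. A minimal sketch in Python (the document IDs and tokens are illustrative):

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the sorted list of document IDs containing it.

    `docs` maps a document ID to its list of already-normalized tokens.
    """
    index = defaultdict(set)
    for doc_id, tokens in docs.items():
        for token in tokens:
            index[token].add(doc_id)
    # Sorted postings lists support efficient Boolean merge operations.
    return {term: sorted(ids) for term, ids in index.items()}

docs = {
    1: ["friend", "roman", "countryman"],
    2: ["friend", "roman"],
    4: ["friend"],
}
index = build_inverted_index(docs)
print(index["friend"])      # [1, 2, 4]
print(index["countryman"])  # [1]
```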
Parsing a document (lexical analysis)
Each of these is a classification problem which can be solved using heuristics or machine learning methods.
But there are complications …
Format/language stripping
Tokenization is the basic part of document preprocessing
Tokenization
Tokenization
Tokenization
Sometimes called “parsers”
What tokenization did you use?
Natural Language Toolkit (NLTK)
Lots of tokenizers out there
Tokenization example
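A minimal tokenizer can be sketched with a single regular expression. This is only a sketch: a real tokenizer such as NLTK's `word_tokenize` handles many more cases (contractions, punctuation tokens, multi-word units).

```python
import re

def tokenize(text):
    """Extract word tokens, keeping internal apostrophes (e.g. Finland's)."""
    return re.findall(r"[A-Za-z0-9]+(?:'[A-Za-z]+)?", text)

print(tokenize("Friends, Romans, countrymen."))
# ['Friends', 'Romans', 'countrymen']
print(tokenize("Finland's economy"))
# ["Finland's", 'economy']
```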
Tokenization
Finland? Finlands? Finland’s?
Numbers
Tokenization: Language issues
Tokenization: language issues
フォーチュン500社は情報不足のため時間あた$500K(約6,000万円)
Katakana
Hiragana
Kanji
“Romaji”
End-user can express query entirely in hiragana!
Tokenization: language issues
Normalization
Case folding
Normalizing Punctuation
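Case folding and diacritic stripping can be sketched with the standard library; the punctuation policy shown (deleting periods so U.S.A. matches USA) is one illustrative choice among several:

```python
import unicodedata

def normalize(token):
    """Case-fold and strip diacritics, so 'Résumé' and 'resume' match."""
    folded = token.casefold()
    # NFKD separates base characters from combining accent marks.
    decomposed = unicodedata.normalize("NFKD", folded)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(normalize("Résumé"))                 # 'resume'
print(normalize("U.S.A.".replace(".", "")))  # 'usa'
```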
Thesauri and soundex
Stemming and Morphological Analysis
Lemmatization
Stemming
Morphological variants of a word (morphemes). Similar terms derived from a common stem:
engineer, engineered, engineering
use, user, users, used, using
Stemming in Information Retrieval. Grouping words with a common stem together.
For example, a search on reads also finds read, reading, and readable.
Stemming consists of removing suffixes and conflating the resulting morphemes. Occasionally, prefixes are also removed.
Stemming
for example compressed
and compression are both
accepted as equivalent to
compress.
for exampl compress and
compress ar both accept
as equival to compress
Porter’s algorithm
Typical rules in Porter
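The step 1a rules of Porter's algorithm (SSES→SS, IES→I, SS→SS, S→∅) can be sketched as ordered longest-match rewrites. This is only a fragment of the full algorithm, which has several further steps and measure conditions:

```python
def step1a(word):
    """Apply Porter's step 1a plural rules; the first matching suffix wins."""
    for suffix, replacement in (("sses", "ss"), ("ies", "i"), ("ss", "ss"), ("s", "")):
        if word.endswith(suffix):
            return word[: len(word) - len(suffix)] + replacement
    return word

print(step1a("caresses"))  # 'caress'
print(step1a("ponies"))    # 'poni'
print(step1a("caress"))    # 'caress'
print(step1a("cats"))      # 'cat'
```

Note the rule ordering matters: without the SS→SS rule, "caress" would lose its final s.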
Other stemmers
Automated Methods are the norm
Porter’s algorithm
Categories of Stemmer
The following diagram illustrates the various categories of stemmer. Porter's algorithm is shown by the red path.

Conflation methods
• Manual
• Automatic (stemmers)
  – Affix removal: longest match, simple removal
  – Successor variety
  – Table lookup
  – n-gram
Comparison of stemmers
Stemming in Practice
Evaluation studies have found that stemming can affect retrieval
performance, usually for the better, but the results are mixed.
• Effectiveness is dependent on the vocabulary. Fine distinctions may be lost through stemming.
• Automatic stemming is as effective as manual conflation.
• Performance of various algorithms is similar.
Porter's Algorithm is entirely empirical, but has proved to be an
effective algorithm for stemming English text with trained users.
Language-specificity
Normalization: other languages
Normalization: other languages
7月30日 vs. 7/30
Morgen will ich in MIT …
Is this
German “mit”?
Dictionary entries – first cut
ensemble.french |
時間.japanese |
MIT.english |
mit.german |
guaranteed.english |
entries.english |
sometimes.english |
tokenization.english |
These may be grouped by language. More on this in ranking/query processing.
A Typical Web Search Engine
[Diagram: Users ↔ Interface ↔ Query Engine ↔ Index ← Indexer ← Crawler ← Web]
Content Analysis of Text
Techniques for Content Analysis
Stop Lists
Suggestions for Including Words in a Stop List
• Include the most common words in the English language (perhaps 50 to 250 words).
• Do not include words that might be important for retrieval (Among the 200 most frequently occurring words in general literature in English are time, war, home, life, water, and world).
• In addition, include words that are very common in context (e.g., computer, information, system in a set of computing documents).
Example: the WAIS stop list (first 84 of 363 multi-letter words)
about above according across actually adj after afterwards again against all almost alone along already also although always among amongst an another any anyhow anyone anything anywhere are aren't around at be became because become becomes becoming been before beforehand begin beginning behind being below beside besides between beyond billion both but by can can't cannot caption co could couldn't
did didn't do does doesn't don't down during each eg eight eighty
either else elsewhere end ending enough
etc even ever every everyone everything
Stop list policies
How many words should be in the stop list?
• Long list lowers recall
Which words should be in list?
• Some common words may have retrieval importance:
-- war, home, life, water, world
• In certain domains, some words are very common:
-- computer, program, source, machine, language
There is very little systematic evidence to use in selecting a stop list.
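Stop-list filtering itself is a simple set lookup. A sketch with a tiny illustrative list (drawn from the most frequent English words above; a real list would be far longer):

```python
STOP_WORDS = {"the", "of", "to", "a", "and", "in", "is", "that", "by", "on"}

def remove_stop_words(tokens):
    """Drop stop-list tokens; comparison is on the lowercased form."""
    return [t for t in tokens if t.lower() not in STOP_WORDS]

print(remove_stop_words(["The", "war", "of", "the", "worlds"]))
# ['war', 'worlds']
```

Note how a content word like "war" survives even though function words around it are removed.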
Stop Lists in Practice
The modern tendency is:
Token generation - stemming
Stemming
for example compressed
and compression are both
accepted as equivalent to
compress.
for exampl compres and
compres are both accept as
equival to compres.
Analyzer: Lucene Tokenization
Analysis example
Another analysis example
Example of tokenizing a document with different Lucene Analyzers
What’s inside an Analyzer?
TokenStream
Tokenizer
TokenFilter
Reader
Tokenizer
TokenFilter
TokenFilter
lemmatization
Lucene Tokenizers and TokenFilters
Other Analyzers
Analyzer for Lucene
Summary of Text
Selection of tokens, weights, stop lists and stemming
Special purpose collections (e.g., law, medicine, monographs)
Best results are obtained by tuning the search engine for the characteristics of the collections and the expected queries.
It is valuable to use a training set of queries, with lists of relevant documents, to tune the system for each application.
General purpose collections (e.g., web search)
The modern practice is to use a basic weighting scheme (e.g., tf.idf), a simple definition of token, a short stop list and no stemming except for plurals, with minimal conflation.
Web searching combines similarity ranking with ranking based on document importance.
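The tf.idf weighting mentioned above has many variants; one common form multiplies relative term frequency by log inverse document frequency. A sketch (the variant and the example numbers are illustrative):

```python
import math

def tf_idf(term_count, doc_length, num_docs, doc_freq):
    """One common tf.idf variant: relative term frequency times log(N/df)."""
    tf = term_count / doc_length
    idf = math.log(num_docs / doc_freq)
    return tf * idf

# A term occurring 3 times in a 100-token document, in 10 of 1000 documents:
weight = tf_idf(term_count=3, doc_length=100, num_docs=1000, doc_freq=10)
print(round(weight, 4))  # 0.1382
```

A term appearing in every document gets idf = log(1) = 0, which is exactly the stop-word intuition expressed as a weight.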
Indexing Subsystem
documents → [text] → break into tokens → [tokens] → stop list* → [non-stoplist tokens] → stemming* → [stemmed terms] → term weighting* → [terms with weights] → Index database
documents → assign document IDs → [document numbers and *field numbers] → Index database
*Indicates optional operation.
Search Subsystem
query → parse query → [query tokens] → stop list* → [non-stoplist tokens] → stemming* → [stemmed terms] → Boolean operations* against Index database → [retrieved / relevant document set] → ranking* → [ranked document set]
*Indicates optional operation.
Try out queries on Google
Google Translate Example - problems
Statistical Properties of Text
Zipf Distribution
A More Standard Collection
8164 the
4771 of
4005 to
2834 a
2827 and
2802 in
1592 The
1370 for
1326 is
1324 s
1194 that
973 by
969 on
915 FT
883 Mr
860 was
855 be
849 Pounds
798 TEXT
798 PUB
798 PROFILE
798 PAGE
798 HEADLINE
798 DOCNO
1 ABC
1 ABFT
1 ABOUT
1 ACFT
1 ACI
1 ACQUI
1 ACQUISITIONS
1 ACSIS
1 ADFT
1 ADVISERS
1 AE
Government documents, 157734 tokens, 32259 unique
Plotting Word Frequency by Rank
Rank Freq Term
1    37   system
2    32   knowledg
3    24   base
4    20   problem
5    18   abstract
6    15   model
7    15   languag
8    15   implem
9    13   reason
10   13   inform
11   11   expert
12   11   analysi
13   10   rule
14   10   program
15   10   oper
16   10   evalu
17   10   comput
18   10   case
19   9    gener
20   9    form
150 2 enhanc
151 2 energi
152 2 emphasi
153 2 detect
154 2 desir
155 2 date
156 2 critic
157 2 content
158 2 consider
159 2 concern
160 2 compon
161 2 compar
162 2 commerci
163 2 clause
164 2 aspect
165 2 area
166 2 aim
167 2 affect
Most and Least Frequent Terms
Rank Freq Term
1    37   system
2    32   knowledg
3    24   base
4    20   problem
5    18   abstract
6    15   model
7    15   languag
8    15   implem
9    13   reason
10   13   inform
11   11   expert
12   11   analysi
13   10   rule
14   10   program
15   10   oper
16   10   evalu
17   10   comput
18   10   case
19   9    gener
20   9    form
The Corresponding Zipf Curve
43   6    approach
44   5    work
45   5    variabl
46   5    theori
47   5    specif
48   5    softwar
49   5    requir
50   5    potenti
51   5    method
52   5    mean
53   5    inher
54   5    data
55   5    commit
56   5    applic
57   4    tool
58   4    technolog
59   4    techniqu
Zoom in on the Knee of the Curve
Zipf Distribution
Zipf Distribution
Zipf Distribution (linear and log scale)
What Kinds of Data Exhibit a Zipf Distribution?
Prevalence of Zipfian Laws
Power Laws
Power Law Statistics - problems with means
Power-law distributions
p(k) = C k^(-α)
Power-law signature
[Plots: frequency vs. degree on linear axes, and log frequency vs. log degree; on the log-log plot the points fall on a straight line of slope -α]
log p(k) = -α log k + log C
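Since log p(k) is linear in log k, the exponent α can be recovered as minus the slope of an ordinary least-squares fit in log-log space. A sketch on exact synthetic data (real degree data is noisy and needs more careful estimators):

```python
import math

def fit_power_law_exponent(ks, ps):
    """Least-squares slope of log p(k) vs log k; returns the estimated alpha."""
    xs = [math.log(k) for k in ks]
    ys = [math.log(p) for p in ps]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    # log p(k) = -alpha * log k + log C, so alpha = -slope.
    return -slope

# Synthetic data drawn exactly from p(k) = 3.0 * k^(-2.5):
ks = list(range(1, 50))
ps = [3.0 * k ** -2.5 for k in ks]
print(round(fit_power_law_exponent(ks, ps), 6))  # 2.5
```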
Examples of degree distribution for power laws
Taken from [Newman 2003]
Power Law Statistics - long tails
Power of the long tail:
The phrase The Long Tail, as a proper noun, was first coined by Chris Anderson. The concept drew in part from an influential February 2003 essay by Clay Shirky, "Power Laws, Weblogs and Inequality" that noted that a relative handful of weblogs have many links going into them but "the long tail" of millions of weblogs may have only a handful of links going into them. Beginning in a series of speeches in early 2004 and culminating with the publication of a Wired magazine article in October 2004, Anderson described the effects of the long tail on current and future business models. Anderson later extended it into the book The Long Tail: Why the Future of Business is Selling Less of More (2006).
Anderson argued that products that are in low demand or have low sales volume can collectively make up a market share that rivals or exceeds the relatively few current bestsellers and blockbusters, if the store or distribution channel is large enough. Examples of such mega-stores include the online retailer Amazon.com and the online video rental service Netflix. The Long Tail is a potential market and, as the examples illustrate, the distribution and sales channel opportunities created by the Internet often enable businesses to tap into that market successfully.
Word Frequency vs. Resolving Power
The most frequent words are not the most descriptive.
van Rijsbergen 79
Fit to Zipf for Brown Corpus
k = 100,000
Does Real Data Fit Zipf’s Law?
Mandelbrot (1954) Correction
Mandelbrot Fit
P = 10^5.4, B = 1.15, ρ = 100
Explanations for Zipf’s Law
Zipf’s Law Impact on IR
Vocabulary Growth
Heaps’ Law
H.S. Heaps, G. Herdan
Heaps’ Law Data
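Heaps' law predicts vocabulary size as V(n) = K n^β for a corpus of n tokens. A sketch with illustrative constants (for English text, β is typically reported around 0.4–0.6; K here is arbitrary):

```python
def heaps_vocabulary(n_tokens, K=30.0, beta=0.5):
    """Heaps' law prediction V(n) = K * n^beta (K and beta are illustrative)."""
    return K * n_tokens ** beta

for n in (10_000, 100_000, 1_000_000):
    print(n, int(heaps_vocabulary(n)))
```

Because β < 1, growth is sublinear: doubling the corpus adds ever fewer new terms, yet the vocabulary never stops growing.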
Explanation for Heaps’ Law
Consequences for Heaps’ Law
Consequences of Zipf for IR
Text Metadata
Metadata (cont)
Web Metadata
Markup Languages
GML(1969)
SGML (1985)
HTML (1993)
XML (1998)
Standard
HyperText
eXtensible
Markup Languages
Text as queries
Lemmatization vs stemming
Text - Summary