
Word Meaning and Similarity

Word Senses and Word Relations

Dan Jurafsky


Reminder: lemma and wordform

  • A lemma or citation form
    • Same stem, part of speech, rough semantics
  • A wordform
    • The “inflected” word as it appears in text

Wordform    Lemma
banks       bank
sung        sing
duermes     dormir


Lemmas have senses

  • One lemma “bank” can have many meanings:
    • Sense 1: “…a bank can hold the investments in a custodial account…”
    • Sense 2: “…as agriculture burgeons on the east bank the river will shrink even more…”
  • Sense (or word sense)
    • A discrete representation of an aspect of a word’s meaning.
  • The lemma bank here has two senses


Homonymy

Homonyms: words that share a form but have unrelated, distinct meanings:

    • bank1: financial institution, bank2: sloping land
    • bat1: club for hitting a ball, bat2: nocturnal flying mammal
  • Homographs (bank/bank, bat/bat)
  • Homophones (write/right, piece/peace)


Homonymy causes problems for NLP applications

  • Information retrieval
    • “bat care”
  • Machine Translation
    • bat: murciélago (animal) or bate (for baseball)
  • Text-to-Speech
    • bass (stringed instrument) vs. bass (fish)


Polysemy

  • 1. The bank was constructed in 1875 out of local red brick.
  • 2. I withdrew the money from the bank.
  • Are those the same sense?
    • Sense 2: “A financial institution”
    • Sense 1: “The building belonging to a financial institution”
  • A polysemous word has related meanings
    • Most non-rare words have multiple meanings


Metonymy or Systematic Polysemy: a systematic relationship between senses

  • Lots of types of polysemy are systematic
    • School, university, hospital
    • All can mean the institution or the building.
  • A systematic relationship:
    • Building ↔ Organization
  • Other such kinds of systematic polysemy:
    • Author (Jane Austen wrote Emma) ↔ Works of Author (I love Jane Austen)
    • Tree (Plums have beautiful blossoms) ↔ Fruit (I ate a preserved plum)


How do we know when a word has more than one sense?

  • The “zeugma” test: Two senses of serve?
    • Which flights serve breakfast?
    • Does Lufthansa serve Philadelphia?
    • ?Does Lufthansa serve breakfast and San Jose?
  • Since this conjunction sounds weird,
    • we say that these are two different senses of “serve”


Synonyms

  • Words that have the same meaning in some or all contexts.
    • filbert / hazelnut
    • couch / sofa
    • big / large
    • automobile / car
    • vomit / throw up
    • water / H₂O
  • Two lexemes are synonyms
    • if they can be substituted for each other in all situations
    • If so they have the same propositional meaning


Synonyms

  • But there are few (or no) examples of perfect synonymy.
    • Even if many aspects of meaning are identical
    • Still may not preserve the acceptability based on notions of politeness, slang, register, genre, etc.
  • Example:
    • water/H₂O
    • Big/large
    • Brave/courageous


Synonymy is a relation between senses rather than words

  • Consider the words big and large
  • Are they synonyms?
    • How big is that plane?
    • Would I be flying on a large or small plane?
  • How about here:
    • Miss Nelson became a kind of big sister to Benjamin.
    • ?Miss Nelson became a kind of large sister to Benjamin.
  • Why?
    • big has a sense that means being older, or grown up
    • large lacks this sense


Antonyms

  • Senses that are opposites with respect to one feature of meaning
  • Otherwise, they are very similar!

dark/light short/long fast/slow rise/fall

hot/cold up/down in/out

  • More formally: antonyms can
    • define a binary opposition or be at opposite ends of a scale
      • long/short, fast/slow
    • be reversives:
      • rise/fall, up/down


Hyponymy and Hypernymy

  • One sense is a hyponym of another if the first sense is more specific, denoting a subclass of the other
    • car is a hyponym of vehicle
    • mango is a hyponym of fruit
  • Conversely, hypernym/superordinate (“hyper is super”)
    • vehicle is a hypernym of car
    • fruit is a hypernym of mango

Superordinate/hypernym    vehicle    fruit    furniture
Subordinate/hyponym       car        mango    chair


Hyponymy more formally

  • Extensional:
    • The class denoted by the superordinate extensionally includes the class denoted by the hyponym
  • Entailment:
    • A sense A is a hyponym of sense B if being an A entails being a B
  • Hyponymy is usually transitive
    • (A hypo B and B hypo C entails A hypo C)
  • Another name: the IS-A hierarchy
    • A IS-A B (or A ISA B)
    • B subsumes A
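A minimal sketch of walking this IS-A hierarchy with NLTK’s WordNet interface (NLTK comes up later in these slides); the sense choice car.n.01 is an assumption for illustration:

```python
# Sketch: the IS-A hierarchy via NLTK (assumes nltk is installed and
# the WordNet data has been fetched with nltk.download('wordnet')).
from nltk.corpus import wordnet as wn

car = wn.synset('car.n.01')

# hypernym_paths() returns root-to-sense chains; each step is one IS-A link.
for synset in car.hypernym_paths()[0]:
    print(synset.name())   # entity.n.01 ... motor_vehicle.n.01, car.n.01
```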


Hyponyms and Instances

  • WordNet has both classes and instances.
  • An instance is an individual, a proper noun that is a unique entity
    • San Francisco is an instance of city
  • But city is a class
    • city is a hyponym of municipality...location...


Word Meaning and Similarity

WordNet and other Online Thesauri


Applications of Thesauri and Ontologies

  • Information Extraction
  • Information Retrieval
  • Question Answering
  • Bioinformatics and Medical Informatics
  • Machine Translation


WordNet 3.0

  • A hierarchically organized lexical database
  • On-line thesaurus + aspects of a dictionary
      • Some other languages available or under development
        • (Arabic, Finnish, German, Portuguese…)

Category     Unique Strings
Noun         117,798
Verb         11,529
Adjective    22,479
Adverb       4,481


Senses of “bass” in WordNet


How is “sense” defined in WordNet?

  • The synset (synonym set), the set of near-synonyms, instantiates a sense or concept, with a gloss
  • Example: chump as a noun with the gloss:

“a person who is gullible and easy to take advantage of”

  • This sense of “chump” is shared by 9 words:

chump1, fool2, gull1, mark9, patsy1, fall guy1, sucker1, soft touch1, mug2

  • Each of these senses has this same gloss (see the lookup sketch below)
    • (not every sense of these words; sense 2 of gull is the aquatic bird)
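A minimal sketch of inspecting this synset with NLTK (assumes the WordNet data is installed):

```python
# Sketch: synsets and glosses for "chump" via NLTK.
from nltk.corpus import wordnet as wn

for synset in wn.synsets('chump', pos=wn.NOUN):
    print(synset.name(), '-', synset.definition())        # the gloss
    print('  lemmas:', [l.name() for l in synset.lemmas()])  # the near-synonyms
```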


WordNet Hypernym Hierarchy for “bass”


WordNet Noun Relations


WordNet 3.0

  • Where it is:
  • Libraries
    • Python: WordNet from NLTK
      • http://www.nltk.org/Home
    • Java:
      • JWNL, extJWNL on sourceforge


MeSH (Medical Subject Headings): a thesaurus from the National Library of Medicine

  • MeSH (Medical Subject Headings)
    • 177,000 entry terms that correspond to 26,142 biomedical “headings”

  • Hemoglobins
    • Entry Terms (the synset): Eryhem, Ferrous Hemoglobin, Hemoglobin
    • Definition: The oxygen-carrying proteins of ERYTHROCYTES. They are found in all vertebrates and some invertebrates. The number of globin subunits in the hemoglobin quaternary structure differs between species. Structures range from monomeric to a variety of multimeric arrangements.


The MeSH Hierarchy


Uses of the MeSH Ontology

  • Provide synonyms (“entry terms”)
    • E.g., glucose and dextrose
  • Provide hypernyms (from the hierarchy)
    • E.g., glucose ISA monosaccharide
  • Indexing in MEDLINE/PubMED database
    • NLM’s bibliographic database:
      • 20 million journal articles
      • Each article hand-assigned 10-20 MeSH terms



Word Meaning and Similarity

Word Similarity: Thesaurus Methods


Word Similarity

  • Synonymy: a binary relation
    • Two words are either synonymous or not
  • Similarity (or distance): a looser metric
    • Two words are more similar if they share more features of meaning
  • Similarity is properly a relation between senses
    • The word “bank” is not similar to the word “slope”
    • Bank1 is similar to fund3
    • Bank2 is similar to slope5
  • But we’ll compute similarity over both words and senses


Why word similarity

  • Information retrieval
  • Question answering
  • Machine translation
  • Natural language generation
  • Language modeling
  • Automatic essay grading
  • Plagiarism detection
  • Document clustering


Word similarity and word relatedness

  • We often distinguish word similarity from word relatedness
    • Similar words: near-synonyms
    • Related words: can be related any way
      • car, bicycle: similar
      • car, gasoline: related, not similar


Two classes of similarity algorithms

  • Thesaurus-based algorithms
    • Are words “nearby” in hypernym hierarchy?
    • Do words have similar glosses (definitions)?
  • Distributional algorithms
    • Do words have similar distributional contexts?


Path based similarity

  • Two concepts (senses/synsets) are similar if they are near each other in the thesaurus hierarchy
    • =have a short path between them
    • concepts have path 1 to themselves


Refinements to path-based similarity

  • pathlen(c1,c2) = 1 + number of edges in the shortest path in the hypernym graph between sense nodes c1 and c2
  • simpath(c1,c2) = 1 / pathlen(c1,c2)
    • ranges from 0 to 1 (1 = identity)
  • wordsim(w1,w2) = max over c1∈senses(w1), c2∈senses(w2) of simpath(c1,c2) (sketched below)
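A minimal sketch of these definitions using NLTK, whose path_similarity implements 1/pathlen:

```python
# Sketch: path-based word similarity over WordNet with NLTK.
from nltk.corpus import wordnet as wn

def wordsim(w1, w2):
    """Max of simpath(c1,c2) over all sense pairs of the two words."""
    return max(
        (s1.path_similarity(s2) or 0)   # None when no path exists
        for s1 in wn.synsets(w1)
        for s2 in wn.synsets(w2)
    )

print(wordsim('nickel', 'coin'))
```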


Example: path-based similarity, simpath(c1,c2) = 1/pathlen(c1,c2)

simpath(nickel,coin) = 1/2 = .5

simpath(fund,budget) = 1/2 = .5

simpath(nickel,currency) = 1/4 = .25

simpath(nickel,money) = 1/6 = .17

simpath(coinage,Richter scale) = 1/6 = .17


Problem with basic path-based similarity

  • Assumes each link represents a uniform distance
    • But nickel to money seems to us to be closer than nickel to standard
    • Nodes high in the hierarchy are very abstract
  • We instead want a metric that
    • represents the cost of each edge independently
    • treats words connected only through abstract nodes as less similar


Information content similarity metrics

  • Let’s define P(c) as:
    • The probability that a randomly selected word in a corpus is an instance of concept c
    • Formally: there is a distinct random variable, ranging over words, associated with each concept in the hierarchy
      • for a given concept, each observed noun is either
        • a member of that concept with probability P(c)
        • not a member of that concept with probability 1-P(c)
    • All words are members of the root node (Entity)
      • P(root)=1
    • The lower a node in hierarchy, the lower its probability

Resnik 1995. Using information content to evaluate semantic similarity in a taxonomy. IJCAI


Information content similarity

  • Train by counting in a corpus
    • Each instance of hill counts toward the frequency of natural elevation, geological formation, entity, etc.
    • Let words(c) be the set of all words that are children of node c
      • words(“geo-formation”) = {hill, ridge, grotto, coast, cave, shore, natural elevation}
      • words(“natural elevation”) = {hill, ridge}

[Figure: WordNet fragment rooted at entity; geological-formation dominates natural elevation (hill, ridge), shore (coast), and cave (grotto)]


Information content similarity

  • WordNet hierarchy augmented with probabilities P(c)

D. Lin. 1998. An Information-Theoretic Definition of Similarity. ICML 1998


Information content: definitions

  • Information content:
    • IC(c) = -log P(c)
  • Most informative subsumer (lowest common subsumer):
    • LCS(c1,c2) = the most informative (lowest) node in the hierarchy subsuming both c1 and c2


Using information content for similarity: the Resnik method

  • The similarity between two words is related to their common information
  • The more two words have in common, the more similar they are
  • Resnik: measure common information as:
    • the information content of the most informative (lowest) subsumer (MIS/LCS) of the two nodes
    • simresnik(c1,c2) = -log P( LCS(c1,c2) )

Philip Resnik. 1995. Using Information Content to Evaluate Semantic Similarity in a Taxonomy. IJCAI 1995.

Philip Resnik. 1999. Semantic Similarity in a Taxonomy: An Information-Based Measure and its Application to Problems of Ambiguity in Natural Language. JAIR 11, 95-130.
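A minimal sketch of the Resnik measure with NLTK; the nickel/dime sense choices are assumptions for illustration, and the information-content counts come from the Brown corpus (requires nltk.download('wordnet_ic')):

```python
# Sketch: Resnik similarity in NLTK with Brown-corpus IC counts.
from nltk.corpus import wordnet as wn, wordnet_ic

brown_ic = wordnet_ic.ic('ic-brown.dat')
c1 = wn.synset('nickel.n.02')   # assumed: the coin sense
c2 = wn.synset('dime.n.01')
# res_similarity returns -log P(LCS(c1,c2)) under the chosen IC counts.
print(c1.res_similarity(c2, brown_ic))
```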


Dekang Lin method

  • Intuition: Similarity between A and B is not just what they have in common
  • The more differences between A and B, the less similar they are:
    • Commonality: the more A and B have in common, the more similar they are
    • Difference: the more differences between A and B, the less similar
  • Commonality: IC(common(A,B))
  • Difference: IC(description(A,B)) − IC(common(A,B))

Dekang Lin. 1998. An Information-Theoretic Definition of Similarity. ICML


Dekang Lin similarity theorem

  • The similarity between A and B is measured by the ratio between the amount of information needed to state the commonality of A and B and the information needed to fully describe what A and B are

  • Lin (altering Resnik) defines IC(common(A,B)) as 2 × the information content of the LCS


Lin similarity function
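Written out, Lin’s measure takes the standard form implied by the theorem above:

$$\mathrm{sim}_{\mathrm{Lin}}(c_1,c_2) = \frac{2\,\log P(\mathrm{LCS}(c_1,c_2))}{\log P(c_1) + \log P(c_2)}$$

A minimal sketch with NLTK; the hill/coast pair is an illustrative choice, not from the slide:

```python
# Sketch: Lin similarity via NLTK, with information content estimated
# from the Brown corpus (assumes nltk.download('wordnet_ic')).
from nltk.corpus import wordnet as wn, wordnet_ic

brown_ic = wordnet_ic.ic('ic-brown.dat')
c1 = wn.synset('hill.n.01')
c2 = wn.synset('coast.n.01')
# lin_similarity computes 2*IC(LCS) / (IC(c1) + IC(c2)).
print(c1.lin_similarity(c2, brown_ic))
```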


The (extended) Lesk Algorithm

  • A thesaurus-based measure that looks at glosses
  • Two concepts are similar if their glosses contain similar words
    • Drawing paper: paper that is specially prepared for use in drafting
    • Decal: the art of transferring designs from specially prepared paper to a wood or glass or metal surface
  • For each n-word phrase that occurs in both glosses
    • add a score of n² (see the sketch after this list)
    • here, “paper” and “specially prepared” give 1² + 2² = 5
    • Compute overlap also for other relations
      • glosses of hypernyms and hyponyms
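A toy sketch of the overlap scoring just described, simplified to raw whitespace tokens and the two main glosses only (no stop-word handling, no hypernym/hyponym glosses):

```python
# Sketch: extended-Lesk style overlap score between two glosses.
def lesk_overlap(gloss1, gloss2):
    """Sum of n**2 over maximal shared n-word phrases."""
    w1, w2 = gloss1.lower().split(), gloss2.lower().split()
    used1, used2 = [False] * len(w1), [False] * len(w2)
    score = 0
    # Greedily match the longest shared phrases first, so a shared
    # bigram is not double-counted as two shared unigrams.
    for size in range(min(len(w1), len(w2)), 0, -1):
        for i in range(len(w1) - size + 1):
            if any(used1[i:i+size]):
                continue
            for j in range(len(w2) - size + 1):
                if not any(used2[j:j+size]) and w1[i:i+size] == w2[j:j+size]:
                    used1[i:i+size] = [True] * size
                    used2[j:j+size] = [True] * size
                    score += size ** 2
                    break
    return score

print(lesk_overlap(
    "paper that is specially prepared for use in drafting",
    "the art of transferring designs from specially prepared paper "
    "to a wood or glass or metal surface"))   # "specially prepared" + "paper" = 4 + 1 = 5
```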


Summary: thesaurus-based similarity
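Collecting the measures defined in this section:

  • simpath(c1,c2) = 1 / pathlen(c1,c2)
  • simresnik(c1,c2) = -log P(LCS(c1,c2))
  • simlin(c1,c2) = 2 log P(LCS(c1,c2)) / (log P(c1) + log P(c2))
  • simeLesk(c1,c2) = sum of squared phrasal-overlap lengths across the glosses of c1, c2, and their related synsets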


Libraries for computing thesaurus-based similarity

  • WordNet::Similarity


Evaluating similarity

  • Intrinsic Evaluation:
    • Correlation between algorithm and human word similarity ratings
  • Extrinsic (task-based, end-to-end) Evaluation:
    • Malapropism (spelling error) detection
    • WSD
    • Essay grading
    • Taking TOEFL multiple-choice vocabulary tests

Levied is closest in meaning to:

imposed, believed, requested, correlated



Word Meaning and Similarity

Word Similarity: Distributional Similarity (I)


Problems with thesaurus-based meaning

  • We don’t have a thesaurus for every language
  • Even if we do, they have problems with recall
    • Many words are missing
    • Most (if not all) phrases are missing
    • Some connections between senses are missing
    • Thesauri work less well for verbs, adjectives
      • Adjectives and verbs have less structured hyponymy relations


Distributional models of meaning

  • Also called vector-space models of meaning
  • Offer much higher recall than hand-built thesauri
    • Although they tend to have lower precision
  • Zellig Harris (1954): “oculist and eye-doctor … occur in almost the same environments…. If A and B have almost identical environments we say that they are synonyms.”

  • Firth (1957): “You shall know a word by the company it keeps!”


Intuition of distributional word similarity

  • Nida example:

A bottle of tesgüino is on the table

Everybody likes tesgüino

Tesgüino makes you drunk

We make tesgüino out of corn.

  • From context words humans can guess that tesgüino means
    • an alcoholic beverage like beer
  • Intuition for algorithm:
    • Two words are similar if they have similar word contexts.


Reminder: Term-document matrix

  • Each cell: the count of term t in document d: tf_t,d
    • Each document is a count vector: a column below


  • Two documents are similar if their vectors are similar


The words in a term-document matrix

  • Each word is a count vector over documents: a row below


  • Two words are similar if their vectors are similar


The Term-Context matrix

  • Instead of using entire documents, use smaller contexts
    • Paragraph
    • Window of 10 words
  • A word is now defined by a vector over counts of context words


Sample contexts: 20 words (Brown corpus)

  • equal amount of sugar, a sliced lemon, a tablespoonful of apricot preserve or jam, a pinch each of clove and nutmeg,
  • on board for their enjoyment. Cautiously she sampled her first pineapple and another fruit whose taste she likened to that of


  • of a recursive type well suited to programming on the digital computer. In finding the optimal R-stage policy from that of
  • substantially affect commerce, for the purpose of gathering data and information necessary for the study authorized in the first section of this


Term-context matrix for word similarity

  • Two words are similar in meaning if their context vectors are similar


Should we use raw counts?

  • For the term-document matrix
    • We used tf-idf instead of raw term counts
  • For the term-context matrix
    • Positive Pointwise Mutual Information (PPMI) is common


Pointwise Mutual Information

  • Pointwise mutual information:
    • Do events x and y co-occur more than if they were independent?

    • PMI between two words: (Church & Hanks 1989)
    • Do words x and y co-occur more than if they were independent?

    • Positive PMI between two words (Niwa & Nitta 1994)
    • Replace all PMI values less than 0 with zero
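In symbols, the standard definitions these bullets describe:

$$\mathrm{PMI}(x, y) = \log_2 \frac{P(x, y)}{P(x)\,P(y)}$$

$$\mathrm{PPMI}(w, c) = \max\left(\log_2 \frac{P(w, c)}{P(w)\,P(c)},\; 0\right)$$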


Computing PPMI on a term-context matrix

  • Matrix F with W rows (words) and C columns (contexts)
  • fij is # of times wi occurs in context cj
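The probabilities are the maximum-likelihood estimates implied by these counts:

$$p_{ij} = \frac{f_{ij}}{\sum_{i=1}^{W} \sum_{j=1}^{C} f_{ij}}, \qquad p_{i*} = \sum_{j=1}^{C} p_{ij}, \qquad p_{*j} = \sum_{i=1}^{W} p_{ij}$$

$$\mathrm{pmi}_{ij} = \log_2 \frac{p_{ij}}{p_{i*}\, p_{*j}}, \qquad \mathrm{ppmi}_{ij} = \max(\mathrm{pmi}_{ij},\, 0)$$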


p(w=information, c=data) = 6/19 = .32

p(w=information) = 11/19 = .58

p(c=data) = 7/19 = .37


  • pmi(information, data) = log2( .32 / (.37 × .58) ) = .58 (.57 using full precision)
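A sketch reproducing this computation with NumPy. The count matrix is assumed to be the running example behind these numbers (rows apricot, pineapple, digital, information; columns computer, data, pinch, result, sugar), chosen so that p(information, data) = 6/19:

```python
# Sketch: PPMI from a term-context count matrix with NumPy.
import numpy as np

F = np.array([
    [0, 0, 1, 0, 1],   # apricot      (assumed counts)
    [0, 0, 1, 0, 1],   # pineapple
    [2, 1, 0, 1, 0],   # digital
    [1, 6, 0, 4, 0],   # information
])

P = F / F.sum()                    # joint probabilities p_ij
pw = P.sum(axis=1, keepdims=True)  # row marginals p_i*
pc = P.sum(axis=0, keepdims=True)  # column marginals p_*j

with np.errstate(divide='ignore'):  # log2(0) -> -inf for zero cells
    pmi = np.log2(P / (pw * pc))
ppmi = np.maximum(pmi, 0)

print(pmi[3, 1])   # pmi(information, data) ≈ 0.57
```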


Weighting PMI

  • PMI is biased toward infrequent events
  • Various weighting schemes help alleviate this
    • See Turney and Pantel (2010)
    • Add-one smoothing can also help



Word Meaning and Similarity

Word Similarity: Distributional Similarity (II)


Using syntax to define a word’s context

  • Zellig Harris (1968)
    • “The meaning of entities, and the meaning of grammatical relations among them, is related to the restriction of combinations of these entities relative to other entities”
  • Two words are similar if they have similar parse contexts
  • Duty and responsibility (Chris Callison-Burch’s example)

Modified by adjectives: additional, administrative, assumed, collective, congressional, constitutional …

Objects of verbs: assert, assign, assume, attend to, avoid, become, breach …


Co-occurrence vectors based on syntactic dependencies

  • The contexts C are different dependency relations
    • Subject-of- “absorb”
    • Prepositional-object of “inside”
  • Counts for the word cell:

Dekang Lin, 1998 “Automatic Retrieval and Clustering of Similar Words”


PMI applied to dependency relations

  • “Drink it” is more common than “drink wine”
  • But “wine” is a better “drinkable” thing than “it”

Object of “drink”    Count    PMI
it                   3        1.3
anything             3        5.2
wine                 2        9.3
tea                  2        11.8
liquid               2        10.5

Hindle, Don. 1990. Noun Classification from Predicate-Argument Structure. ACL

The same objects, ranked by PMI instead of raw count:

Object of “drink”    Count    PMI
tea                  2        11.8
liquid               2        10.5
wine                 2        9.3
anything             3        5.2
it                   3        1.3


Reminder: cosine for computing similarity

  • cos(v,w) is the cosine similarity of v and w: the dot product of the two vectors, normalized to unit length
  • v_i is the PPMI value for word v in context i
  • w_i is the PPMI value for word w in context i
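Written out, with the dot-product and unit-vector forms the slide annotated:

$$\cos(\vec v, \vec w) = \frac{\vec v \cdot \vec w}{|\vec v|\,|\vec w|} = \frac{\vec v}{|\vec v|} \cdot \frac{\vec w}{|\vec w|} = \frac{\sum_{i=1}^{N} v_i w_i}{\sqrt{\sum_{i=1}^{N} v_i^2}\; \sqrt{\sum_{i=1}^{N} w_i^2}}$$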


Cosine as a similarity metric

  • -1: vectors point in opposite directions
  • +1: vectors point in same directions
  • 0: vectors are orthogonal

  • But raw frequency and PPMI values are non-negative, so the cosine ranges from 0 to 1


             large    data    computer
apricot        1       0        0
digital        0       1        2
information    1       6        1

Which pair of words is more similar?

cosine(apricot, information) = 1 / (√1 × √38) ≈ .16

cosine(digital, information) = 8 / (√5 × √38) ≈ .58

cosine(apricot, digital) = 0 / (√1 × √5) = 0
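A quick NumPy check of these values, using the three row vectors from the table:

```python
# Sketch: cosine similarity over the toy count vectors above.
import numpy as np

vecs = {
    'apricot':     np.array([1, 0, 0]),
    'digital':     np.array([0, 1, 2]),
    'information': np.array([1, 6, 1]),
}

def cosine(v, w):
    """Dot product divided by the product of vector lengths."""
    return v @ w / (np.linalg.norm(v) * np.linalg.norm(w))

print(cosine(vecs['apricot'], vecs['information']))  # ≈ 0.16
print(cosine(vecs['digital'], vecs['information']))  # ≈ 0.58
print(cosine(vecs['apricot'], vecs['digital']))      # 0.0
```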


Other possible similarity measures


Evaluating similarity (the same as for thesaurus-based)

  • Intrinsic Evaluation:
    • Correlation between algorithm and human word similarity ratings
  • Extrinsic (task-based, end-to-end) Evaluation:
    • Spelling error detection, WSD, essay grading
    • Taking TOEFL multiple-choice vocabulary tests

Levied is closest in meaning to which of these:

imposed, believed, requested, correlated
