<<TableOfContents()>>
== Introduction ==
Most natural language processing systems rely heavily on information about words and their meanings as provided by a lexicon.
However, a lexicon is never complete. Language evolves constantly, due among other things to morphological productivity, sense extensions, loans from other languages, and the constant introduction of new technological and scientific terminology.
Since the manual maintenance of lexicons is not only slow, but also susceptible to inconsistencies, automatic acquisition of lexical information has become an important research area and a practical necessity for large systems working with real data.

The goal of the DFG-sponsored research project ''!WordGraph'' is to develop new approaches for the acquisition of lexical information from text corpora. These approaches are based on graph theory.

Relationships between words in a text can be naturally represented by a graph which has words as nodes and relationships between them as edges. The nodes and edges in such a textual graph are of various types. Node types correspond to word classes (e.g. nouns, verbs, adjectives), and edge types represent different kinds of dependencies between them (e.g. syntactic dependencies, joint occurrence in a coordination, co-occurrence). The meaning of a word is characterized by its relationships (links) to the other words (nodes) in the word graph. The connectivity structure of the word graph thus contains valuable information about words and their meanings.

In particular, we are investigating node similarity algorithms such as !SimRank for the induction and extension of bilingual lexicons.
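
For illustration, the following is a minimal sketch of the basic !SimRank recurrence (Jeh & Widom 2002) on a toy undirected word graph. The graph, the decay factor ''C'', and the iteration count are made up for the example and are not the project's actual configuration.
{{{
from itertools import product

def simrank(neighbors, C=0.8, iterations=10):
    # neighbors: dict mapping each node to a list of its neighbors
    nodes = list(neighbors)
    # start from the identity: every node is maximally similar to itself
    sim = {a: {b: 1.0 if a == b else 0.0 for b in nodes} for a in nodes}
    for _ in range(iterations):
        new = {a: {b: 0.0 for b in nodes} for a in nodes}
        for a, b in product(nodes, repeat=2):
            if a == b:
                new[a][b] = 1.0
            elif neighbors[a] and neighbors[b]:
                total = sum(sim[i][j] for i in neighbors[a] for j in neighbors[b])
                new[a][b] = C * total / (len(neighbors[a]) * len(neighbors[b]))
        sim = new
    return sim

# Toy graph: nouns linked if they occurred together in a coordination
graph = {
    "lion":    ["tiger", "leopard"],
    "tiger":   ["lion", "leopard"],
    "leopard": ["lion", "tiger"],
}
print(simrank(graph)["lion"]["tiger"])  # high score: shared coordination partners
}}}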


== Resources ==

As part of this ongoing project, we have created resources that we believe will be useful both for other researchers working on lexical acquisition and for the general NLP research community. We provide these resources as a service to the community.

=== Noun Coordination Data ===

A large dataset of nouns that occur together in a coordination (such as "X and Y"), extracted from Wikipedia.

 * English noun coordinations (approx. 5.8M coordinations): [[http://www.ims.uni-stuttgart.de/tcl/RESOURCES/WordGraph/en-noun-coordinations.txt.gz|Download EN data (gzipped, 114MB)]]
 * German noun coordinations (approx. 2.2M coordinations): [[http://www.ims.uni-stuttgart.de/tcl/RESOURCES/WordGraph/de-noun-coordinations.txt.gz|Download DE data (gzipped, 50MB)]]

Each line contains a single coordination.
Each word is annotated with its part-of-speech tag and lemma, separated by slashes: Word/Tag/Lemma.

Examples:
{{{
complexity/NN/complexity and/CC/and length/NN/length
history/NN/history and/CC/and cultural/JJ/cultural heritage/NN/heritage

Luft/NN/Luft und/KON/und Wasser/NN/Wasser
der/ART/d Starbesetzung/NN/Starbesetzung und/KON/und der/ART/d technischen/ADJA/technisch Raffinessen/NN/Raffinesse
}}}
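
A minimal reading sketch, assuming UTF-8 encoded files in exactly the format shown above:
{{{
import gzip

def read_coordinations(path):
    # Yield each coordination as a list of (word, tag, lemma) triples.
    # Splitting on the last two slashes keeps words containing "/" intact.
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            triples = []
            for token in line.split():
                if token.count("/") >= 2:
                    triples.append(tuple(token.rsplit("/", 2)))
            if triples:
                yield triples

# e.g. collect the noun lemmas of each coordination:
# for toks in read_coordinations("en-noun-coordinations.txt.gz"):
#     nouns = [lemma for word, tag, lemma in toks if tag.startswith("NN")]
}}}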

=== Adjective-Noun Modification Data ===

A list of adjectives modifying nouns, extracted from Wikipedia.

 * [[http://www.ims.uni-stuttgart.de/tcl/RESOURCES/WordGraph/en-adj-n.gz|Download EN data (gzipped, 157MB)]] (32M relationships)
 * [[http://www.ims.uni-stuttgart.de/tcl/RESOURCES/WordGraph/de-adj-n.gz|Download DE data (gzipped, 71MB)]] (12M relationships)

Each line contains a single adjective-noun pair.

Examples:
{{{
left-wing ideology
political party
religious leader

chemisch Element
deutsch Film
grell Lampe
}}}
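
A small usage sketch, assuming UTF-8 encoding and two whitespace-separated tokens per line, that builds an index from nouns to their modifying adjectives:
{{{
import gzip
from collections import defaultdict

adjectives_of = defaultdict(set)
with gzip.open("en-adj-n.gz", "rt", encoding="utf-8") as f:
    for line in f:
        fields = line.split()
        if len(fields) == 2:          # skip blank or malformed lines
            adjective, noun = fields
            adjectives_of[noun].add(adjective)

# adjectives_of["party"] would then contain e.g. "political"
}}}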
=== Verb-Object Data ===

A list of verbs and their direct objects, extracted from the parsed Wikipedia data described below.

 * [[http://www.ims.uni-stuttgart.de/tcl/RESOURCES/WordGraph/en-v-obj.gz|Download EN data (gzipped, 5.3MB)]] (11.7M relationships)
 * [[http://www.ims.uni-stuttgart.de/tcl/RESOURCES/WordGraph/de-v-obj.gz|Download DE data (gzipped, 1.6MB)]] (1.6M relationships)

Each line contains a single verb-object pair.

Examples:
{{{
turn#off brain
outwit enemy
rouse suspicion

abfahren Strecke
weiterentwickeln Technik
annehmen Ruf
}}}
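
In the examples above, particle verbs appear to be written with "#" joining verb and particle (e.g. ''turn#off''). A minimal reading sketch under that assumption, also assuming UTF-8 encoding and two tokens per line:
{{{
import gzip

def read_verb_object(path):
    # Yield (verb, particle, object) triples; particle is None when absent.
    # Treating "#" as the verb-particle separator is an assumption based
    # on the examples above.
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            fields = line.split()
            if len(fields) != 2:
                continue
            verb_field, obj = fields
            verb, _, particle = verb_field.partition("#")
            yield verb, particle or None, obj

# e.g. ("turn", "off", "brain") or ("rouse", None, "suspicion")
}}}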
=== Cross-lingual Relatedness Thesaurus ===

We used graph similarity algorithms to create a bilingual semantic relatedness thesaurus. For every English word, it lists ten German words deemed related by the algorithm, and vice versa. The method used to create this resource will be described in a forthcoming publication (accepted at LREC 2010).

 * English->German relatedness data (approx. 9000 entries): [[http://www.ims.uni-stuttgart.de/tcl/RESOURCES/WordGraph/relatedness_en_de.txt.gz|Download EN->DE data]]
 * German->English relatedness data (approx. 6000 entries): [[http://www.ims.uni-stuttgart.de/tcl/RESOURCES/WordGraph/relatedness_de_en.txt.gz|Download DE->EN data]]

The data comes in gzipped text files containing one block per entry: a word on its own line, followed by its ten related words, one per line and indented with a TAB character. Blocks are separated by an empty line.

Example:
{{{
(lion,n)
        (Panther,n)
        (Nashorn,n)
        (Löwe,n)
        (Büffel,n)
        (Jaguar,n)
        (Leopard,n)
        (Tiger,n)
        (Puma,n)
        (Elefant,n)
        (Antilope,n)

(Möwe,n)
        (gull,n)
        (swan,n)
        (goose,n)
        (duck,n)
        (teal,n)
        (flamingo,n)
        (loon,n)
        (grebe,n)
        (cormorant,n)
        (tern,n)
}}}
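
A minimal sketch for reading these files, assuming UTF-8 encoding and exactly the block layout described above:
{{{
import gzip
import re

ENTRY = re.compile(r"\((.+),(\w+)\)")   # matches "(word,pos)" lines

def read_thesaurus(path):
    # Yield (head, related) pairs: head is a (word, pos) tuple, related is
    # the list of its related words from one block.
    head, related = None, []
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            if not line.strip():            # blank line closes a block
                if head is not None:
                    yield head, related
                head, related = None, []
                continue
            match = ENTRY.search(line)
            if match is None:
                continue
            entry = (match.group(1), match.group(2))
            if line.startswith("\t"):       # indented line: a related word
                related.append(entry)
            else:                           # unindented line: the head word
                head, related = entry, []
    if head is not None:                    # file may not end in a blank line
        yield head, related
}}}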

=== Parsed Wikipedia Data ===

We have parsed the text of English and German Wikipedia articles using [[http://www.ims.uni-stuttgart.de/tcl/SOFTWARE/BitPar.html|BitPar]].
This is one of the few large collections of comparable text parsed with the same parser.

 * English parses (3.4 GB, approx. 30M sentences): [[http://www.ims.uni-stuttgart.de/tcl/RESOURCES/WordGraph/parsed-english-wp.tar.gz|Download English parses]]
 * German parses (1.6 GB, approx. 12.7M sentences): [[http://www.ims.uni-stuttgart.de/tcl/RESOURCES/WordGraph/parsed-german-wp.tar.gz|Download German parses]]

The data comes as an archive that bundles gzipped files, each containing about 500 parsed sentences.
Each line contains the parse tree of one sentence, encoded as a structure of nested brackets.

Example sentence ''"It is one of 58 counties of Gansu."'':
{{{
(TOP
 (S/fin/.
  (NP-SBJ/3s/base
   (PRP/3s It))
  (VP/3s
   (VBZ/n is)
   (NP-PRD/pp
    (NP/base
     (QP
      (<QP[CD]IN/of|CD>
       (CD one)
       (IN/of of))
      (CD 58))
     (NNS counties))
    (PP/of/NP
     (IN/of of)
     (NP/base
      (NNP Gansu)))))
  (. .)))
}}}
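
A minimal sketch for reading one such line into a nested list structure. It assumes that category labels and words never contain literal parentheses, which should be checked against the actual data:
{{{
def parse_tree(line):
    # Parse one bracketed tree into nested [label, child, ...] lists;
    # leaves are plain strings.
    tokens = line.replace("(", " ( ").replace(")", " ) ").split()
    pos = 0
    def node():
        nonlocal pos
        pos += 1                            # consume "("
        label = tokens[pos]
        pos += 1
        children = []
        while tokens[pos] != ")":
            if tokens[pos] == "(":
                children.append(node())
            else:
                children.append(tokens[pos])
                pos += 1
        pos += 1                            # consume ")"
        return [label] + children
    return node()

tree = parse_tree("(TOP (S (NP (PRP It)) (VP (VBZ is))))")
# tree == ['TOP', ['S', ['NP', ['PRP', 'It']], ['VP', ['VBZ', 'is']]]]
}}}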

=== Co-occurrence Data ===

A list of co-occurring word pairs extracted from Wikipedia.
The word-word co-occurrences were extracted using a co-occurrence window of 3 consecutive words. The files contain tables in the format of Stefan Evert's [[http://www.collocations.de/software.html|UCS toolkit]].
The ''l1'' column contains the left word, the ''l2'' column the right word. The right word is appended with "_x", where x is its position in the context window. For example, "Aachen amtlich_3" means that the word "amtlich" occurred to the right of "Aachen" with two words in between. Also included are frequency counts of the left word (''f1''), the right word (''f2''; positions are distinguished), counts of the pair (''f''), and log-likelihood statistics (''am.log.likelihood'').
For space reasons the pairs are filtered: only pairs that occurred more than once, with individual word frequencies > 100 and a log-likelihood ratio > 3.87, are included.
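
Once the tables are available, they could be read roughly as follows. This is only a sketch that assumes a TAB-separated layout with a header row naming the columns above; consult the UCS toolkit documentation for the authoritative format.
{{{
import csv
import gzip

def read_cooccurrence_table(path):
    # Read a UCS-style co-occurrence table, yielding one dict per word pair.
    # Assumes TAB-separated columns named l1, l2, f, f1, f2,
    # am.log.likelihood, with the "_x" position suffix on l2.
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            word, _, position = row["l2"].rpartition("_")
            row["l2"], row["position"] = word, int(position)  # strip "_x"
            yield row
}}}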

(Tables coming soon)


=== Lexicon Induction Test Dataset ===

Comparative evaluation of methods for bilingual lexicon induction is hampered by the lack of a common evaluation methodology and a common test dataset. Together with [[http://www.fask.uni-mainz.de/user/rapp/|Reinhard Rapp]] (Johannes Gutenberg University Mainz), we propose a common test dataset for the evaluation of lexicon induction experiments. We hope that this data will serve as a basis for a standard evaluation.
For more information, please visit the project page: http://www.ims.uni-stuttgart.de/forschung/projekte/WordGraph.en.html
