With the advent of massive online encyclopedic corpora such as Wikipedia, it has become possible to apply a systematic analysis
to a wide range of documents covering a significant part of human knowledge. Using semantic parsers, this knowledge can be extracted
in the form of propositions (predicate–argument structures) and assembled into large proposition databases built from
these documents. This paper describes the creation of multilingual proposition databases using generic semantic dependency parsing.
Using Wikipedia, we extracted, processed, clustered, and evaluated a large number of propositions. We built an architecture
providing a complete pipeline covering text input, knowledge extraction, storage, and presentation of the resulting propositions.
Exner, P., and P. Nugues. "Constructing large proposition databases." Proc. of LREC. Vol. 12. 2012.