Word Embeddings for Lexical Substitution

We propose a simple method for context-sensitive lexical substitution using skip-gram word and context embeddings, described in our paper: Oren Melamud, Omer Levy, Ido Dagan. "A Simple Word Embedding Model for Lexical Substitution". In Proceedings of the Workshop on Vector Space Modeling for NLP, 2015.
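As a rough illustration of how word and context embeddings can be combined into a substitutability score, here is a minimal sketch of an additive measure: a substitute is scored by averaging its cosine similarity to the target word and to each element of the context. The function name `add_score` and the exact combination are illustrative assumptions; see the paper for the measures actually evaluated.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def add_score(sub_vec, target_vec, context_vecs):
    """Sketch of an additive substitutability measure (hypothetical helper):
    average the substitute's similarity to the target word embedding and to
    each context embedding."""
    sims = [cosine(sub_vec, target_vec)]
    sims += [cosine(sub_vec, c) for c in context_vecs]
    return sum(sims) / len(sims)
```

With an empty context this reduces to plain word similarity; each context vector added pulls the score toward substitutes that also fit the surrounding dependency contexts.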

The embeddings used in this paper are available for download:
[word embeddings]
[context embeddings]

This Python code was used to convert the parsed text into the dependency pairs from which the above word and context embeddings were learned with word2vecf:
[extract_deps.py]
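The core idea behind such an extraction step can be sketched as follows: for every dependency arc, the head receives the dependent (marked with the relation) as a context, and the dependent receives the head (with an inverse-marked relation), in the style of dependency-based embeddings. This is a simplified sketch with an assumed input format, not the actual `extract_deps.py` (which, among other things, handles preposition collapsing):

```python
def extract_dep_pairs(sentence):
    """sentence: list of (word, head_index, relation) tuples, where
    head_index is 1-based and 0 marks the root (assumed format).
    Returns (word, context) pairs for training with word2vecf."""
    pairs = []
    for word, head, rel in sentence:
        if head == 0:
            continue  # root has no head context
        head_word = sentence[head - 1][0]
        pairs.append((head_word, f"{word}_{rel}"))    # head sees its dependent
        pairs.append((word, f"{head_word}_{rel}I"))   # dependent sees its head (inverse relation)
    return pairs
```

Each output line of such a script is a word/context pair, which is exactly the input format word2vecf trains on.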

Note that these word embeddings are not lemmatized.

To evaluate the model against lemmatized lexical substitution gold standards, the substitutability score of a lemma was taken to be the maximum score over all possible word forms of that lemma.
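This lemma-level scoring can be sketched in a few lines, assuming a hypothetical mapping from each lemma to its inflected forms and a per-word-form scoring function:

```python
def lemma_score(lemma, lemma_to_forms, score_fn):
    """Score a lemma as the maximum score over its word forms.
    lemma_to_forms: assumed dict mapping a lemma to its inflected forms;
    score_fn: scores a single (unlemmatized) word form."""
    forms = lemma_to_forms.get(lemma, [lemma])  # fall back to the lemma itself
    return max(score_fn(form) for form in forms)
```

Taking the maximum rather than, say, the average avoids penalizing a lemma whose rarer inflections have weaker embeddings.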

The code used for evaluating this model is available [here].