context2vec

context2vec is a toolkit that represents the sentential contexts of target words, as well as the target words themselves, as low-dimensional continuous vectors, commonly called embeddings. It is described in the following paper:

context2vec: Learning Generic Context Embedding with Bidirectional LSTM. Oren Melamud, Jacob Goldberger, Ido Dagan. CoNLL, 2016 [pdf].

The source code is available [here].
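To make the model concrete, below is a minimal, illustrative sketch of a context2vec-style context encoder in PyTorch. This is not the toolkit's actual code or API: the class name, dimensions, and token ids are made up, and the training objective (negative sampling against target-word embeddings) is omitted. As in the paper, one LSTM reads the words to the left of the target, a second reads the words to its right in reverse, and a small MLP merges their final states into a single context vector.

```python
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    """Illustrative context2vec-style encoder (names and sizes are made up)."""
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.left_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)   # left-to-right
        self.right_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)  # right-to-left
        # MLP merges the two directional states into one context vector
        self.mlp = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, left_ids, right_ids):
        # left_ids: words before the target, in sentence order
        # right_ids: words after the target, in reversed order
        _, (h_left, _) = self.left_lstm(self.embed(left_ids))
        _, (h_right, _) = self.right_lstm(self.embed(right_ids))
        joint = torch.cat([h_left[-1], h_right[-1]], dim=-1)
        return self.mlp(joint)

# Toy usage: embed the context of the blank in "the [___] sat on the mat"
enc = ContextEncoder(vocab_size=1000)
left = torch.tensor([[1]])            # ids for "the"
right = torch.tensor([[5, 4, 3, 2]])  # ids for "mat on the sat" (reversed)
context_vec = enc(left, right)        # shape: (1, 300)
```

Because context and target embeddings live in the same space, a trained model can score how well any target word fits a given context, e.g. via the cosine similarity between the two vectors.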

The following pre-trained context2vec models are available for download:
* context2vec model learned from UkWac (a large web-crawled corpus) [here]
* context2vec model learned from MSCC (the Microsoft Sentence Completion Challenge corpus) [here]

The following baseline Average of Word Embeddings (AWE) models, which represent a context simply as the average of its word embeddings, are also available (a minimal sketch follows the list):
* AWE model learned from UkWac [here]
* AWE model learned from MSCC [here]
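The AWE baseline needs no recurrent network at all: it represents the context as the plain average of the embeddings of the surrounding words. A minimal sketch, using random stand-in vectors instead of the released model files:

```python
import numpy as np

# Random stand-in embeddings; a real AWE model would load trained vectors.
rng = np.random.default_rng(0)
embeddings = {w: rng.standard_normal(300)
              for w in ["the", "cat", "sat", "on", "mat"]}

def awe_context(words, target_index):
    """Average the embeddings of every word except the target."""
    context = [w for i, w in enumerate(words) if i != target_index]
    return np.mean([embeddings[w] for w in context], axis=0)

vec = awe_context(["the", "cat", "sat", "on", "the", "mat"], target_index=1)
print(vec.shape)  # (300,)
```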