The word embeddings used in our paper (cited below) are available for download here:
These embeddings were learned from a 10-billion-word corpus comprising English Wikipedia 2015, UMBC (web), and Gigaword (news). The downloads include embeddings learned with context window sizes of 1, 5, and 10, as well as with dependency-based and substitute-based contexts.
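Assuming the files are distributed in the standard textual word2vec format (one header line, then one word and its vector per line; check the download's own documentation to confirm), they can be loaded with gensim. The filename below is hypothetical; substitute the actual file from the archive.

```python
# Minimal sketch for loading one of the embedding files, assuming the
# standard textual word2vec format. The filename "win5.words.txt" is a
# hypothetical placeholder for one of the window-size-5 embedding files.
from gensim.models import KeyedVectors

# binary=False because we assume a plain-text (not binary) word2vec file.
vectors = KeyedVectors.load_word2vec_format("win5.words.txt", binary=False)

# Quick sanity check: nearest neighbors under cosine similarity.
print(vectors.most_similar("science", topn=5))
```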
Oren Melamud, David McClosky, Siddharth Patwardhan, Mohit Bansal. The Role of Context Types and Dimensionality in Learning Word Embeddings. In Proceedings of NAACL, 2016.