Before we describe our architecture for learning bilingual word embeddings (BWEs) from comparable data, we provide a short overview of the underlying skip-gram word representation learning model in monolingual settings. Our new bilingual representation learning model, called BWESG, extends skip-gram (SG) to multilingual settings with comparable training data, and serves as the basis of our novel IR/CLIR framework introduced later in Sect. 3.2.
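To make the monolingual starting point concrete, the sketch below illustrates the core skip-gram objective with negative sampling (SGNS) on a toy corpus. It is a minimal illustration only: the toy corpus, the hyperparameters (`dim`, `window`, `neg`, `lr`, `epochs`), and the uniform negative-sampling scheme are assumptions made for brevity, not the configuration used in this work.

```python
# Minimal sketch of monolingual skip-gram with negative sampling (SGNS).
# All names and hyperparameters are illustrative assumptions, not the
# paper's setup.
import numpy as np

corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["the", "dog", "sat", "on", "the", "rug"]]

vocab = sorted({w for sent in corpus for w in sent})
w2i = {w: i for i, w in enumerate(vocab)}

dim, window, neg, lr, epochs = 16, 2, 3, 0.05, 50
rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(len(vocab), dim))  # target ("input") vectors
W_out = np.zeros((len(vocab), dim))                   # context ("output") vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(epochs):
    for sent in corpus:
        for pos, word in enumerate(sent):
            t = w2i[word]
            lo, hi = max(0, pos - window), min(len(sent), pos + window + 1)
            for ctx_pos in range(lo, hi):
                if ctx_pos == pos:
                    continue
                c = w2i[sent[ctx_pos]]
                # One positive (observed) pair plus `neg` negative samples;
                # uniform sampling here simplifies word2vec's unigram^(3/4)
                # noise distribution.
                samples = [(c, 1.0)] + [(int(rng.integers(len(vocab))), 0.0)
                                        for _ in range(neg)]
                grad_in = np.zeros(dim)
                for s, label in samples:
                    score = sigmoid(W_in[t] @ W_out[s])
                    g = lr * (label - score)  # gradient of the log-likelihood
                    grad_in += g * W_out[s]
                    W_out[s] += g * W_in[t]
                W_in[t] += grad_in

# After training, the rows of W_in serve as the word embeddings.
print(vocab[0], W_in[w2i[vocab[0]]][:4])
```

A full word2vec-style implementation would additionally draw negatives from the unigram distribution raised to the 3/4 power and subsample frequent words; the simplification above only serves to show the objective that BWESG extends to the bilingual case.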