Convolutional auto-encoders for sentence representation generation

Authors: ALİ MERT CEYLAN, VECDİ AYTAÇ

Abstract: In this study, we propose an alternative approach to the sentence modeling problem. Answer selection is difficult because candidate answers may be only semantically related to the question and often lack syntactic closeness to it. Deep learning has recently achieved pivotal successes in semantic analysis, machine translation, and text summarization. The essence of this work, inspired by the human orthographic processing mechanism, is to learn the basic features of the language, without concern for input or output size, by applying multiple convolution filters to pre-rendered 2-dimensional (2D) representations of sentences. To this end, convolutional variational auto-encoders first learn the semantic relations within sentence structure, and the question and answer spaces learned by the auto-encoders are then linked by the proposed intermediate models. We benchmark five variations of our proposed model, which is based on a variational auto-encoder with multiple latent spaces and achieves lower error rates than the baseline model, a convolutional LSTM.
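The pipeline the abstract describes — encode a pre-rendered 2D sentence representation with a convolutional variational auto-encoder, then map the question latent space to the answer latent space through an intermediate model — can be sketched as follows. This is a minimal NumPy illustration with hypothetical dimensions; the convolution, single-layer decoder, and plain linear map standing in for the paper's intermediate models are all simplifying assumptions, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, kernels):
    """Valid 2D convolution of a single-channel 'sentence image' with a filter bank."""
    kh, kw = kernels.shape[1:]
    H, W = x.shape
    out = np.empty((kernels.shape[0], H - kh + 1, W - kw + 1))
    for f, k in enumerate(kernels):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[f, i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

class ConvVAE:
    """Toy convolutional VAE over a 2D sentence rendering (hypothetical sizes)."""
    def __init__(self, img=(16, 16), n_filters=4, ksize=3, latent=8):
        self.kernels = rng.normal(0, 0.1, (n_filters, ksize, ksize))
        feat = n_filters * (img[0] - ksize + 1) * (img[1] - ksize + 1)
        self.W_mu = rng.normal(0, 0.1, (latent, feat))       # mean head
        self.W_logvar = rng.normal(0, 0.1, (latent, feat))   # log-variance head
        self.W_dec = rng.normal(0, 0.1, (img[0] * img[1], latent))
        self.img = img

    def encode(self, x):
        h = np.maximum(conv2d(x, self.kernels), 0).ravel()   # ReLU conv features
        return self.W_mu @ h, self.W_logvar @ h

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps, the standard VAE reparameterization trick
        return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

    def decode(self, z):
        return (self.W_dec @ z).reshape(self.img)

# Separate auto-encoders learn the question and answer spaces; a hypothetical
# linear map plays the role of the intermediate model linking their latents.
q_vae, a_vae = ConvVAE(), ConvVAE()
W_link = rng.normal(0, 0.1, (8, 8))

question_img = rng.standard_normal((16, 16))   # stand-in for a rendered sentence
mu, logvar = q_vae.encode(question_img)
z_q = q_vae.reparameterize(mu, logvar)
z_a = W_link @ z_q                             # question latent -> answer latent
answer_img = a_vae.decode(z_a)
print(answer_img.shape)  # (16, 16)
```

In this sketch the two VAEs would be trained independently on questions and answers, after which only the small intermediate map needs to learn the question-to-answer correspondence.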

Keywords: Convolutional networks, bi-gram, n-gram, question answering problem, deep learning, variational autoencoder, sentence modeling