2019-04-28 12:10 PM
I have made a model for the IMDB dataset in Keras that looks like this:
from keras.models import Sequential
from keras.layers import Embedding, GRU, Dense, Activation

model = Sequential()
model.add(Embedding(known_words_num, word_vector_dim))  # maps word indices to dense vectors
model.add(GRU(word_vector_dim, dropout=0.2))
model.add(Dense(1))
model.add(Activation('sigmoid'))  # binary sentiment output
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
Here the Embedding layer is used for making the word vectors.
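For intuition, the Embedding layer is essentially a trainable lookup table of shape (known_words_num, word_vector_dim): each integer word index selects one row, which is that word's vector. A minimal NumPy sketch of that forward pass (the sample vocabulary size, embedding dimension, and review indices are assumed values for illustration):

```python
import numpy as np

known_words_num = 10000   # vocabulary size (assumed value)
word_vector_dim = 32      # embedding dimension (assumed value)

# The Embedding layer's weights: one trainable row per word index.
embedding_matrix = np.random.uniform(-0.05, 0.05,
                                     size=(known_words_num, word_vector_dim))

# An IMDB review, encoded as a sequence of integer word indices.
review = np.array([14, 22, 16, 43, 530])

# The layer's forward pass is just a row lookup:
# shape (seq_len,) -> (seq_len, word_vector_dim)
word_vectors = embedding_matrix[review]
print(word_vectors.shape)  # (5, 32)
```

In Keras these rows start random and are updated by backpropagation along with the rest of the model, so the vectors are learned for the sentiment task rather than fixed in advance.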