Source code of stack_lstm_attn_model2.py


Project: kaggle-quora-solution-8th · Author: qqgeogor
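The baseline() function below builds a siamese model for question-pair classification: both input sequences share a frozen pre-trained embedding layer and a single LSTM, each LSTM output sequence is pooled into a fixed-size vector by a custom Attention layer, and the two vectors are concatenated and passed through a dropout / batch-normalization / dense stack to a sigmoid output that scores the pair.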
# Imports needed to make this snippet self-contained (Keras 2.x).
from keras.layers import (Input, Embedding, LSTM, Dense, Dropout,
                          BatchNormalization, concatenate)
from keras.models import Model

# nb_words, EMBEDDING_DIM, embedding_matrix, MAX_SEQUENCE_LENGTH, num_lstm,
# rate_drop_lstm, num_dense, rate_drop_dense, act and the custom Attention
# layer are module-level names defined elsewhere in the original file.

def baseline():
    # Shared embedding layer, frozen to the pre-trained embedding matrix.
    embedding_layer = Embedding(nb_words,
                                EMBEDDING_DIM,
                                weights=[embedding_matrix],
                                input_length=MAX_SEQUENCE_LENGTH,
                                trainable=False)
    # One LSTM shared by both questions (siamese weights); it returns the
    # full output sequence so the attention layer can pool over time steps.
    lstm_layer = LSTM(num_lstm,
                      dropout=rate_drop_lstm,
                      recurrent_dropout=rate_drop_lstm,
                      return_sequences=True)

    # Question 1 branch: embed -> LSTM -> attention pooling.
    sequence_1_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
    embedded_sequences_1 = embedding_layer(sequence_1_input)
    x1 = lstm_layer(embedded_sequences_1)
    x1 = Attention(MAX_SEQUENCE_LENGTH)(x1)

    # Question 2 branch reuses the same embedding and LSTM layers.
    sequence_2_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
    embedded_sequences_2 = embedding_layer(sequence_2_input)
    y1 = lstm_layer(embedded_sequences_2)
    y1 = Attention(MAX_SEQUENCE_LENGTH)(y1)

    # Concatenate the two attention vectors, then regularize and classify.
    merged = concatenate([x1, y1])
    merged = Dropout(rate_drop_dense)(merged)
    merged = BatchNormalization()(merged)

    merged = Dense(num_dense, activation=act)(merged)
    merged = Dropout(rate_drop_dense)(merged)
    merged = BatchNormalization()(merged)

    # Sigmoid output: probability that the two questions are duplicates.
    preds = Dense(1, activation='sigmoid')(merged)

    model = Model(inputs=[sequence_1_input, sequence_2_input],
                  outputs=preds)
    model.compile(loss='binary_crossentropy',
                  optimizer='nadam',
                  metrics=['acc'])

    return model
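The Attention class used above is defined elsewhere in the project and is not shown on this page. As a rough sketch only, assuming the step-dim attention-pooling pattern common in Quora-competition Keras code (the project's actual implementation may differ), such a layer scores every time step, softmax-normalizes the scores over the time axis, and returns the weighted sum of the LSTM outputs:

from keras import backend as K
from keras.engine.topology import Layer

class Attention(Layer):
    """Sketch of an attention-pooling layer over the time axis.

    Hypothetical stand-in for the project's Attention class, not its
    actual implementation.
    """

    def __init__(self, step_dim, **kwargs):
        self.step_dim = step_dim  # number of time steps (MAX_SEQUENCE_LENGTH)
        super(Attention, self).__init__(**kwargs)

    def build(self, input_shape):
        # input_shape: (batch, step_dim, features)
        self.features_dim = input_shape[-1]
        self.W = self.add_weight(shape=(self.features_dim,),
                                 initializer='glorot_uniform',
                                 name='att_W')
        self.b = self.add_weight(shape=(self.step_dim,),
                                 initializer='zeros',
                                 name='att_b')
        super(Attention, self).build(input_shape)

    def call(self, x, mask=None):
        # Score each time step: e_t = tanh(x_t . W + b_t).
        eij = K.reshape(K.dot(K.reshape(x, (-1, self.features_dim)),
                              K.reshape(self.W, (self.features_dim, 1))),
                        (-1, self.step_dim))
        eij = K.tanh(eij + self.b)
        # Softmax over time, then return the attention-weighted sum.
        a = K.exp(eij)
        a /= K.cast(K.sum(a, axis=1, keepdims=True) + K.epsilon(), K.floatx())
        return K.sum(x * K.expand_dims(a), axis=1)

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.features_dim)

With this (or the project's real layer) in scope, model = baseline() returns a compiled model whose fit() takes a list of two padded integer sequence arrays plus the binary duplicate labels.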