seq2seq.py source code

python

Project: nlvr_tau_nlp_final_proj    Author: udiNaveh
import numpy as np
import tensorflow as tf


def get_feed_dicts_from_sentence(sentence, sentence_placeholder, sent_lengths_placeholder, sentence_words_bow,
                                 encoder_output_tensors, learn_embeddings=False):
    """
    Creates the values and feed dicts that depend on the input sentence.
    These feed dicts are used to run the graph or to compute gradients.
    Relies on module-level globals defined elsewhere in seq2seq.py:
    one_hot_dict, words_array, words_vocabulary, embeddings_matrix, sess, union_dicts.
    """
    # One-hot matrix with one row per sentence token; unknown words map to the <UNK> vector.
    sentence_matrix = np.stack([one_hot_dict.get(w, one_hot_dict['<UNK>']) for w in sentence.split()])
    # Bag-of-words count vector over the vocabulary, shaped [1, vocab_size].
    bow_words = np.reshape(np.sum([words_array == x for x in sentence.split()], axis=0), [1, len(words_vocabulary)])

    length = [len(sentence.split())]
    encoder_feed_dict = {sentence_placeholder: sentence_matrix, sent_lengths_placeholder: length,
                         sentence_words_bow: bow_words}
    # Run the encoder once, then feed its outputs to the decoder as precomputed values.
    sentence_encoder_outputs = sess.run(encoder_output_tensors, feed_dict=encoder_feed_dict)
    decoder_feed_dict = {encoder_output_tensors[i]: sentence_encoder_outputs[i]
                         for i in range(len(encoder_output_tensors))}

    if not learn_embeddings:
        # With frozen embeddings, feed the pre-trained matrix into the W_we tensor, looked up by name.
        W_we = tf.get_default_graph().get_tensor_by_name('W_we:0')
        encoder_feed_dict = union_dicts(encoder_feed_dict, {W_we: embeddings_matrix})
    return encoder_feed_dict, decoder_feed_dict
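
To make the data-preparation step concrete, here is a minimal, self-contained sketch of how the one-hot sentence matrix and the bag-of-words vector are formed. The toy vocabulary and the one_hot_dict / words_array / words_vocabulary definitions below are stand-ins for illustration, not the structures the project actually builds elsewhere in the file.

import numpy as np

# Toy vocabulary standing in for the project's real vocabulary (illustrative assumption).
words_vocabulary = ['there', 'is', 'a', 'yellow', 'circle', '<UNK>']
words_array = np.array(words_vocabulary)
one_hot_dict = {w: np.eye(len(words_vocabulary))[i] for i, w in enumerate(words_vocabulary)}

sentence = 'there is a blue circle'

# One row per token; the out-of-vocabulary word 'blue' falls back to the <UNK> vector.
sentence_matrix = np.stack([one_hot_dict.get(w, one_hot_dict['<UNK>']) for w in sentence.split()])
print(sentence_matrix.shape)   # (5, 6)

# Token counts over the vocabulary, reshaped to [1, vocab_size] as in the function above.
bow_words = np.reshape(np.sum([words_array == x for x in sentence.split()], axis=0),
                       [1, len(words_vocabulary)])
print(bow_words)               # [[1 1 1 0 1 0]]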