rnn_classify.py source code


Project: DeepLearning    Author: STHSF
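The function below assumes TensorFlow (the pre-1.0 API, matching the tf.unpack and tf.nn.rnn mentions in the code) is imported and that several module-level names are defined elsewhere in rnn_classify.py. A minimal sketch of that setup, using the MNIST-style values the code comments refer to (n_classes = 10 and the initializers are assumptions, not taken from the original file):

import tensorflow as tf

# hyperparameters implied by the comments: 28x28 MNIST images, 128 hidden units
n_inputs = 28          # pixels per image row == input size per step
n_steps = 28           # image rows == number of time steps
n_hidden_units = 128   # neurons in the LSTM cell
n_classes = 10         # assumption: 10 digit classes
batch_size = 128

# weight/bias dictionaries used by rnn(); shapes follow the matmuls below
weights = {
    'in': tf.Variable(tf.random_normal([n_inputs, n_hidden_units])),
    'out': tf.Variable(tf.random_normal([n_hidden_units, n_classes])),
}
biases = {
    'in': tf.Variable(tf.constant(0.1, shape=[n_hidden_units])),
    'out': tf.Variable(tf.constant(0.1, shape=[n_classes])),
}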
def rnn(input_data, weights, biases):
    # hidden layer for input to cell
    ########################################

    # transpose the inputs shape from
    # X (128 batch, 28 steps, 28 inputs)
    # ==> (128 batch * 28 steps, 28 inputs)
    input_data = tf.reshape(input_data, [-1, n_inputs])

    # into hidden
    # data_in = (128 batch * 28 steps, 128 hidden)
    data_in = tf.matmul(input_data, weights['in']) + biases['in']
    # data_in ==> (128 batch, 28 steps, 128 hidden_units)
    data_in = tf.reshape(data_in, [-1, n_steps, n_hidden_units])

    # cell
    ##########################################

    # basic LSTM Cell.
    lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden_units, forget_bias=1.0, state_is_tuple=True)
    # lstm cell is divided into two parts (c_state, h_state)
    _init_state = lstm_cell.zero_state(batch_size, dtype=tf.float32)
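    # _init_state is an LSTMStateTuple (c_state, h_state), each of shape
    # (batch_size, n_hidden_units), because state_is_tuple=True above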

    # You have 2 options for the following step:
    # 1: tf.nn.rnn(cell, inputs);
    # 2: tf.nn.dynamic_rnn(cell, inputs).
    # If you use option 1, you have to modify the shape of data_in; see:
    # https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/recurrent_network.py
    # Here we go with option 2.
    # dynamic_rnn accepts a tensor of shape (batch, steps, inputs) or (steps, batch, inputs) as data_in.
    # Make sure time_major is set accordingly.
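    # For reference, option 1 (the old static tf.nn.rnn) takes a per-step list
    # instead of one 3-D tensor; a rough sketch of what it would need (not used here):
    #   data_in_list = tf.unpack(tf.transpose(data_in, [1, 0, 2]))  # n_steps * (batch, hidden)
    #   outputs, final_state = tf.nn.rnn(lstm_cell, data_in_list, initial_state=_init_state)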
    outputs, final_state = tf.nn.dynamic_rnn(lstm_cell, data_in, initial_state=_init_state, time_major=False)
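    # with time_major=False: outputs has shape (batch_size, n_steps, n_hidden_units);
    # final_state is the (c_state, h_state) tuple after the last step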

    # hidden layer for output as the final results
    #############################################
    # Option A: use the h state from final_state directly (final_state[1] is h):
    # results = tf.matmul(final_state[1], weights['out']) + biases['out']

    # Option B: unpack outputs to a list [(batch, outputs), ...] * steps
    # and take the last step (tf.unpack was renamed tf.unstack in TF 1.0)
    outputs = tf.unpack(tf.transpose(outputs, [1, 0, 2]))    # outputs[-1] is the last step's output
    results = tf.matmul(outputs[-1], weights['out']) + biases['out']

    return results
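A usage sketch follows; the placeholder wiring, AdamOptimizer, and the 0.001 learning rate are illustrative assumptions, not taken from the original file. The batch dimension is fixed at batch_size because zero_state() hard-codes it inside rnn().

x = tf.placeholder(tf.float32, [batch_size, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [batch_size, n_classes])

pred = rnn(x, weights, biases)
# pre-1.0 positional signature: softmax_cross_entropy_with_logits(logits, labels)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
train_op = tf.train.AdamOptimizer(0.001).minimize(cost)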