next_step.py source code

python

Project: generating_sequences  Author: PFCM
import tensorflow as tf  # targets the pre-1.0 TensorFlow API (tf.unpack, tf.nn.rnn, axis-first tf.concat/tf.split)


def standard_nextstep_inference(cell, inputs, output_size, scope=None,
                                return_states=False):
    """Gets the forward pass of a standard model for next step prediction.

    Args:
        cell (tf.nn.rnn_cell.RNNCell): the cell to use at each step. For
            multiple layers, wrap them up in tf.nn.rnn_cell.MultiRNNCell.
        inputs: [sequence_length, batch_size, num_features] float input tensor.
            If the inputs are discrete, we expect this to already be embedded.
        output_size (int): the size we project the output to (i.e. the number
            of symbols in the vocabulary).
        scope (Optional): variable scope, defaults to "rnn".
        return_states (Optional[bool]): whether or not to return the states
            as well as the projected logits.

    Returns:
        tuple: (initial_state, logits, final_state)
            - *initial_state*: a list of float tensors representing the first
                state of the rnn; feed values into these to carry state across
                batches.
            - *logits*: a list of [batch_size, vocab_size] float tensors
                representing the output of the network at each timestep.
            - *final_state*: a list of float tensors containing the final state
                of the network; evaluate these to get a state to feed into
                initial_state for the next batch.
    """
    with tf.variable_scope(scope or "rnn") as scope:
        inputs = tf.unpack(inputs)
        batch_size = inputs[0].get_shape()[0].value
        initial_state = cell.zero_state(batch_size, tf.float32)

        states, final_state = tf.nn.rnn(
            cell, inputs, initial_state=initial_state, dtype=tf.float32,
            scope=scope)

        # we want to project them all at once, because it's faster
        logits = tf.concat(0, states)

        with tf.variable_scope('output_layer'):
            # output layer needs weights
            weights = tf.get_variable('weights',
                                      [cell.output_size, output_size])
            biases = tf.get_variable('bias',
                                     [output_size])
            logits = tf.nn.bias_add(tf.matmul(logits, weights), biases)
        # and split them back up to what we expect
        logits = tf.split(0, len(inputs), logits)

        if return_states:
            logits = (states, logits)

    return [initial_state], logits, [final_state]
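
The "project them all at once" trick in the output layer — concatenating every timestep's state, applying a single matmul, and splitting the result back into per-timestep logits — can be sketched in plain NumPy. The sizes below are hypothetical, not taken from the project:

```python
import numpy as np

# Hypothetical sizes (not from the original project).
seq_len, batch_size, cell_size, vocab_size = 5, 4, 8, 10

rng = np.random.default_rng(0)
states = [rng.standard_normal((batch_size, cell_size)) for _ in range(seq_len)]
weights = rng.standard_normal((cell_size, vocab_size))
biases = rng.standard_normal(vocab_size)

# One big matmul over the concatenated timesteps...
stacked = np.concatenate(states, axis=0)    # [seq_len * batch_size, cell_size]
logits = np.split(stacked @ weights + biases, seq_len, axis=0)

# ...matches projecting each timestep separately.
per_step = [s @ weights + biases for s in states]
assert all(np.allclose(a, b) for a, b in zip(logits, per_step))
```

One large matrix multiply is generally faster than `seq_len` small ones, since it amortizes kernel-launch and memory-traffic overhead into a single GEMM — which is the reason the comment in the source gives for concatenating before the output layer.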