dynamic_rnn_estimator.py source code

python

Project: lsdc    Author: febert
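This method is part of the DynamicRNNEstimator in TensorFlow's contrib.learn estimators. Within that module, the names used below (`ops`, `math_ops`, `rnn`, `layers`) are imported roughly as follows (TF 1.x module paths; exact paths may differ slightly across versions):

    from tensorflow.contrib import layers
    from tensorflow.python.framework import ops
    from tensorflow.python.ops import math_ops
    from tensorflow.python.ops import rnn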
def _construct_rnn(self, features):
    """Apply an RNN to `features`.

    The `features` dict must contain `self._inputs_key`, and the corresponding
    input should be a `Tensor` of shape `[batch_size, padded_length, k]`
    where `k` is the dimension of the input for each element of a sequence.

    `activations` has shape `[batch_size, padded_length, n]` where `n` is
    `self._target_column.num_label_columns`. In the case of a multiclass
    classifier, `n` is the number of classes.

    `final_state` has shape determined by `self._cell` and its dtype must match
    `self._dtype`.

    Args:
      features: a `dict` containing the input for the RNN and (optionally) an
        initial state and information about sequence lengths.

    Returns:
      activations: the output of the RNN, projected to the appropriate number of
        dimensions.
      final_state: the final state output by the RNN.

    Raises:
      KeyError: if `features` does not contain `self._inputs_key`.
    """
    with ops.name_scope('RNN'):
      inputs = features.get(self._inputs_key)
      if inputs is None:
        raise KeyError('features must contain the key {}'.format(
            self._inputs_key))
      if inputs.dtype != self._dtype:
        inputs = math_ops.cast(inputs, self._dtype)
      initial_state = features.get(self._initial_state_key)
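      # If no initial state was supplied, dynamic_rnn builds a zero state
      # from the cell, which is why `dtype` is also passed explicitly below.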
      rnn_outputs, final_state = rnn.dynamic_rnn(
          cell=self._cell,
          inputs=inputs,
          initial_state=initial_state,
          dtype=self._dtype,
          parallel_iterations=self._parallel_iterations,
          swap_memory=self._swap_memory,
          time_major=False)
      # Linearly project every RNN output step to `num_label_columns`
      # dimensions; the projection weights are learned jointly with the cell.
      activations = layers.fully_connected(
          inputs=rnn_outputs,
          num_outputs=self._target_column.num_label_columns,
          activation_fn=None,
          trainable=True)
      return activations, final_state
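For reference, below is a minimal standalone sketch of the same pattern using the TF 1.x API: run tf.nn.dynamic_rnn over batch-major padded inputs, then apply a linear projection to every time step. The dimensions, cell size, and variable names are illustrative assumptions, not values taken from the project.

    import tensorflow as tf
    from tensorflow.contrib import layers

    batch_size, padded_length, k = 4, 10, 8   # illustrative input shape
    num_label_columns = 3                     # e.g. number of classes

    inputs = tf.placeholder(tf.float32, [batch_size, padded_length, k])
    cell = tf.nn.rnn_cell.GRUCell(num_units=16)

    # rnn_outputs: [batch_size, padded_length, 16]; final_state: [batch_size, 16]
    rnn_outputs, final_state = tf.nn.dynamic_rnn(
        cell=cell, inputs=inputs, dtype=tf.float32, time_major=False)

    # activations: [batch_size, padded_length, num_label_columns]
    activations = layers.fully_connected(
        inputs=rnn_outputs,
        num_outputs=num_label_columns,
        activation_fn=None)

The projection is applied to the last dimension only, so the batch and time dimensions of the RNN outputs are preserved, which matches the shapes described in the docstring above.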