def conv2d_v2(inputs, n_output_channels, is_training, reuse, **kwargs):
"""Adds a 2D dilated convolutional layer
also known as convolution with holes or atrous convolution.
If the rate parameter is equal to one, it performs regular 2-D convolution.
If the rate parameter
is greater than one, it performs convolution with holes, sampling the input
values every rate pixels in the height and width dimensions.
`convolutional layer` creates a variable called `weights`, representing a conv
weight matrix, which is multiplied by the `x` to produce a
`Tensor` of hidden units. If a `batch_norm` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `batch_norm` is
None and a `b_init` and `use_bias` is provided then a `biases` variable would be
created and added the hidden units. Finally, if `activation` is not `None`,
it is applied to the hidden units as well.
Note: that if `x` have a rank 4
Args:
inputs: A 4-D `Tensor` whose last dimension is set,
i.e. `[batch_size, in_height, in_width, depth]`.
is_training: Bool, whether the layer is in training or inference mode.
n_output_channels: Integer, the number of output channels of the layer.
reuse: whether or not the layer and its variables should be reused. To be
able to reuse the layer, its scope must be given.
filter_size: an int or a list/tuple of 2 positive integers specifying the
spatial dimensions of the filters.
dilation: A positive int32. The stride with which we sample input values across
the height and width dimensions. Equivalently, the rate by which we upsample the
filter values by inserting zeros across the height and width dimensions. In the literature,
the same parameter is sometimes called input stride/rate or dilation.
padding: one of `"VALID"` or `"SAME"`. IF padding is LEFT, it preprocess the input to use Valid padding
activation: activation function, set to None to skip it and maintain
a linear activation.
batch_norm: normalization function to use. If `batch_norm` is `True`, the
original Google implementation is used; if another function is provided,
that function is applied instead.
Defaults to None for no normalization function.
batch_norm_args: normalization function parameters.
w_init: An initializer for the weights.
w_regularizer: Optional regularizer for the weights.
untie_biases: if `True`, use a separate bias for each spatial position
(untied biases) instead of a single bias per output channel.
b_init: An initializer for the biases. If None skip biases.
outputs_collections: The collections to which the outputs are added.
trainable: If `True` also add variables to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable).
name: Optional name or scope for variable_scope/name_scope.
use_bias: Whether to add a bias.
Returns:
The 4-D `Tensor` variable representing the result of the series of operations.
e.g.: 4-D `Tensor` [batch, new_height, new_width, n_output].
Raises:
ValueError: if `inputs` has rank less than 4 or if its last dimension is not set.
"""
if 'padding' in kwargs and kwargs['padding'] == 'LEFT':
inputs, kwargs = format_input_left_padding(inputs, **kwargs)
return dilated_conv2d(inputs, n_output_channels, is_training, reuse, **kwargs)
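

# The dilation behaviour described in the docstring above can be illustrated
# with a minimal pure-Python sketch (an illustration only, not the TensorFlow
# implementation that `dilated_conv2d` dispatches to): a single-channel VALID
# convolution that samples the input every `dilation` pixels, which is
# equivalent to inserting (dilation - 1) zeros between the kernel taps.
def _dilated_conv2d_sketch(image, kernel, dilation=1):
    """VALID dilated convolution of a 2-D list-of-lists `image` by `kernel`."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    # Effective kernel size after inserting (dilation - 1) zeros between taps.
    eff_kh = (kh - 1) * dilation + 1
    eff_kw = (kw - 1) * dilation + 1
    out = []
    for i in range(ih - eff_kh + 1):
        row = []
        for j in range(iw - eff_kw + 1):
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    # Sample the input every `dilation` pixels.
                    acc += image[i + di * dilation][j + dj * dilation] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out
# With dilation=1 this reduces to a regular 2-D convolution; with dilation=2
# a 2x2 kernel covers a 3x3 receptive field while still using only 4 taps.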