ann_creation_helper.py source code

python

Project: ChessAI    Author: SamRagusa
import tensorflow as tf
from tensorflow.contrib import layers


def build_inception_module_with_batch_norm(the_input, module, mode, activation_summaries=None,
                                           num_previously_built_inception_modules=0, padding='same',
                                           force_no_concat=False):
    """
    Builds an inception module based on the design given to the function.  It returns the final layer in
    the module and the activation summaries generated for the layers within the inception module.

    The layers will be named "inception_module_N_path_M/layer_P", where N is the inception module number,
    M is the path number, and P is the layer number within that path.

    The module is given in the format:
    [[[filters1_1, kernel_size1_1], ..., [filters1_M, kernel_size1_M]], ...,
        [[filtersN_1, kernel_sizeN_1], ..., [filtersN_P, kernel_sizeN_P]]]

    A section may also be given as [filters, kernel_height, kernel_width] for a non-square kernel.
    """
    if activation_summaries is None:
        activation_summaries = []

    path_outputs = [None for _ in range(len(module))]
    to_summarize = []
    cur_input = None
    for j, path in enumerate(module):
        with tf.variable_scope("inception_module_" + str(num_previously_built_inception_modules + 1) + "_path_" + str(j + 1)):
            for i, section in enumerate(path):
                if i == 0:
                    # Starting a new path: save the previous path's final output and
                    # feed this path from the module's input.
                    if j != 0:
                        path_outputs[j - 1] = cur_input

                    cur_input = the_input

                # A section is [filters, size] for a square kernel, or
                # [filters, height, width] for a rectangular one.
                kernel_size = [section[1], section[1]] if len(section) == 2 else [section[1], section[2]]
                cur_conv_output = tf.layers.conv2d(
                    inputs=cur_input,
                    filters=section[0],
                    kernel_size=kernel_size,
                    padding=padding,
                    use_bias=False,
                    kernel_initializer=layers.xavier_initializer(),
                    name="layer_" + str(i + 1))

                # Batch normalization updates its moving statistics only in TRAIN mode; the
                # preceding convolution omits its bias because batch norm provides a beta offset.
                cur_batch_normalized = tf.layers.batch_normalization(cur_conv_output,
                                                                     training=(mode == tf.estimator.ModeKeys.TRAIN),
                                                                     fused=True)

                cur_input = tf.nn.relu(cur_batch_normalized)

                to_summarize.append(cur_input)

    # The loop only stores a path's output when the next path begins, so store the last one here.
    path_outputs[-1] = cur_input

    activation_summaries = activation_summaries + [layers.summarize_activation(layer) for layer in to_summarize]

    with tf.variable_scope("inception_module_" + str(num_previously_built_inception_modules + 1)):
        # If concatenation is explicitly disabled, or any path's spatial dimensions differ from
        # the first path's, return the per-path outputs as a list instead of concatenating them.
        for j in range(1, len(path_outputs)):
            if force_no_concat or path_outputs[0].get_shape().as_list()[1:3] != path_outputs[j].get_shape().as_list()[1:3]:
                return list(path_outputs), activation_summaries

        # All paths share the same spatial dimensions, so concatenate along the channel axis.
        return tf.concat(path_outputs, axis=3), activation_summaries
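
As a quick illustration of the module format, here is a minimal usage sketch. It assumes TensorFlow 1.x (the tf.layers and tf.contrib APIs the function relies on); the module spec, input shape, and names (inception_module, board_input) are hypothetical:

# Hypothetical module spec: path 1 is a single 1x1 convolution; path 2 is a
# 1x1 bottleneck followed by a 3x3 convolution.
inception_module = [[[32, 1]],
                    [[16, 1], [32, 3]]]

# e.g. an 8x8 board with 16 feature planes per square
board_input = tf.placeholder(tf.float32, [None, 8, 8, 16])

module_output, summaries = build_inception_module_with_batch_norm(
    board_input,
    inception_module,
    mode=tf.estimator.ModeKeys.TRAIN)

# With 'same' padding both paths keep the 8x8 spatial size, so the outputs are
# concatenated along the channel axis, giving shape [None, 8, 8, 64].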
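
When the paths end at different spatial sizes, or when force_no_concat is set, the function instead returns the per-path tensors as a list. A hypothetical continuation of the sketch above:

# Request the list return; num_previously_built_inception_modules=1 names the new
# scopes "inception_module_2_path_M" so they don't collide with the first module.
path_tensors, summaries = build_inception_module_with_batch_norm(
    board_input,
    inception_module,
    mode=tf.estimator.ModeKeys.EVAL,
    num_previously_built_inception_modules=1,
    force_no_concat=True)

# path_tensors is a Python list with one tensor per path.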