import tensorflow as tf


def batch_norm_layer(inp):
    """As explained in A. Géron's book, the default batch normalization
    applies no scaling, i.e. gamma is set to 1. This makes sense for layers
    with no activation function or with ReLU (like ours), since the next
    layer's weights can take care of the scaling. In other circumstances,
    include scaling.
    """
    # Get the channel size from the input tensor
    # (remember, 1D convolution -> the input tensor is 3D: batch, width, channels).
    size = int(inp.shape[2])
    # Batch statistics computed over the batch dimension.
    batch_mean, batch_var = tf.nn.moments(inp, [0])
    # Trainable scale (gamma) and offset (beta) parameters, one per channel.
    scale = tf.Variable(tf.ones([size]))
    beta = tf.Variable(tf.zeros([size]))
    x = tf.nn.batch_normalization(inp, batch_mean, batch_var, beta, scale,
                                  variance_epsilon=1e-3)
    return x
Source file: pretrained_word_embedding_TF_nn.py
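For context, a minimal usage sketch (not from the original file) of how batch_norm_layer might sit between a 1D convolution and a ReLU in a TF1-style graph, reusing the import and the function defined above. The placeholder shapes, filter sizes, and variable names are assumptions for illustration only.

# Hypothetical shapes: a batch of sentences, each a sequence of 100
# pretrained word embeddings of dimension 300, convolved with 64 filters.
inputs = tf.placeholder(tf.float32, shape=[None, 100, 300])
filters = tf.Variable(tf.random_normal([5, 300, 64], stddev=0.1))

# 1D convolution -> 3D output (batch, width, channels),
# which is the layout batch_norm_layer expects.
conv = tf.nn.conv1d(inputs, filters, stride=1, padding='SAME')

# Normalize before the activation; ReLU follows, so per the docstring the
# scaling can be left to the next layer's weights.
normed = batch_norm_layer(conv)
activated = tf.nn.relu(normed)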