def update_weights_final(self):
    # Clip each column of the gradient whose L2 norm exceeds the limit.
    norm = np.sqrt(np.sum(self.gradient ** 2, axis=0))
    over_limit = norm > self.norm_limit
    self.gradient[:, over_limit] = (self.gradient[:, over_limit] / norm[over_limit]) * self.norm_limit
    # Apply the gradient step to the weights.
    self.weights += self.gradient * self.learning_rate
    # Update an exponential moving average of the output, used for sorting the weights.
    self.output_average *= 0.99999
    self.output_average += self.output.ravel() * 0.00001
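The per-column norm clipping above can be sketched in isolation. This is a minimal, self-contained example (the array values and `norm_limit` are made up for illustration); each column of `gradient` is treated as the gradient for one unit, and only columns whose L2 norm exceeds the limit are rescaled:

```python
import numpy as np

norm_limit = 1.0
gradient = np.array([[3.0, 0.3],
                     [4.0, 0.4]])  # column L2 norms: 5.0 and 0.5

norm = np.sqrt(np.sum(gradient ** 2, axis=0))   # per-column L2 norms
over_limit = norm > norm_limit                  # boolean mask of columns to clip
gradient[:, over_limit] = gradient[:, over_limit] / norm[over_limit] * norm_limit

print(np.sqrt(np.sum(gradient ** 2, axis=0)))   # -> [1.  0.5]
```

The first column (norm 5.0) is rescaled to norm 1.0, while the second (norm 0.5) is left untouched; this is column-wise clipping, not a single global-norm clip over the whole matrix.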
Source: RankOrderedAutoencoder.py