optimizers.py source code

python

Project: odin  Author: imito
def get_gradients(self, loss_or_grads, params):
    """
    Note
    ----
    The returned gradients may contain None value
    """
    # check valid algorithm
    if (self.algorithm is None or
        not hasattr(self.algorithm, 'compute_gradients') or
        not hasattr(self.algorithm, 'apply_gradients')):
      raise RuntimeError("Optimizer is None, or does not have the attributes "
                         "'compute_gradients' and 'apply_gradients'.")
    with tf.variable_scope(self.name):
      # get the gradient
      grads_var = self.algorithm.compute_gradients(loss_or_grads,
                                                   var_list=params)
      # keep only the (gradient, variable) pairs whose gradient is not None
      grads_var = [(g, v) for g, v in grads_var if g is not None]
      grads = [g for g, _ in grads_var]
      params = [v for _, v in grads_var]
      # ====== clipnorm ====== #
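      # 'norm': clip each gradient independently to L2 norm <= clipnorm;
      # 'total_norm': rescale all gradients jointly so their global norm
      # is <= clipnorm; 'avg_norm': clip each gradient so its average L2
      # norm (L2 norm divided by the number of elements) is <= clipnorm.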
      if self.clipnorm is not None:
        if self.clip_alg == 'norm':
          grads = [tf.clip_by_norm(g, self.clipnorm)
                   for g in grads]
        elif self.clip_alg == 'total_norm':
          grads, _ = tf.clip_by_global_norm(grads, self.clipnorm)
        elif self.clip_alg == 'avg_norm':
          grads = [tf.clip_by_average_norm(g, self.clipnorm)
                   for g in grads]
      # ====== clipvalue ====== #
      if self.clipvalue is not None:
        grads = [tf.clip_by_value(g, -self.clipvalue, self.clipvalue)
                 for g in grads]
      # ====== get final norm value ====== #
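      # record the global norm of the (clipped) gradients; add_role tags
      # the tensor with odin's GradientsNorm role so it can be looked up later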
      self._norm = add_role(tf.global_norm(grads, name="GradientNorm"),
                            GradientsNorm)
      return [(g, p) for g, p in zip(grads, params)]
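
For reference, here is a minimal standalone sketch of what the 'total_norm' branch does: tf.clip_by_global_norm rescales every gradient by the same factor, so the norms keep their relative proportions, which is why this mode is the usual choice for controlling exploding gradients. The toy loss, variables, and clip value below are illustrative assumptions, not part of the odin project; the sketch uses the same TF1-style API as the method above.

import tensorflow as tf

# Two variables whose gradients have L2 norms 5 and 10 respectively:
# for this quadratic loss, d(loss)/dx = x and d(loss)/dy = y.
x = tf.Variable([3.0, 4.0])
y = tf.Variable([6.0, 8.0])
loss = (tf.reduce_sum(x * x) + tf.reduce_sum(y * y)) / 2.0

opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)
grads_var = opt.compute_gradients(loss, var_list=[x, y])
grads = [g for g, v in grads_var if g is not None]

# Rescale all gradients jointly so their global norm is at most 5.0;
# clip_by_global_norm also returns the norm *before* clipping.
clipped, norm_before = tf.clip_by_global_norm(grads, clip_norm=5.0)

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  print(sess.run(norm_before))              # sqrt(5**2 + 10**2) ~ 11.18
  print(sess.run(tf.global_norm(clipped)))  # ~ 5.0 after clipping

By contrast, the 'norm' mode clips each tensor independently, which can change the direction of the combined update.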