sparse_td_net.py source code

python

Project: pdnn    Author: petered
import numpy as np

# quantize_sequence and activation_function are presumably defined elsewhere in the pdnn project.

def sparse_temporal_forward_pass(inputs, weights, biases=None, scales=None, hidden_activations='relu',
                                 output_activations='relu', quantization_method='herd', rng=None):
    """
    Feed a sequence of inputs into a sparse temporal difference net and get the resulting activations.

    :param inputs: A (n_frames, n_dims_in) array
    :param weights: A list of (n_dim_in, n_dim_out) weight matrices
    :param biases: An optional (len(weights)) list of (w.shape[1]) biases for each weight matrix
    :param scales: An optional (len(weights)) list of (w.shape[0]) scales to scale each layer before rounding.
    :param hidden_activations: Indicates the hidden layer activation function
    :param output_activations: Indicates the output layer activation function
    :return: activations:
        A len(weights)*3+1 list of (n_frames, n_dims) activations.
        Elements [::3] will be a length(w)+1 list containing the input to each rounding unit, and the final output
        Elements [1::3] will be the length(w) rounded "spike" signal.
        Elements [2::3] will be the length(w) inputs to each nonlinearity
    """
    activations = [inputs]
    if biases is None:
        biases = [0]*len(weights)
    if scales is None:
        scales = [1.]*len(weights)
    else:
        assert len(scales) in (len(weights), len(weights)+1)
    real_activations = inputs
    for w, b, k in zip(weights, biases, scales):
        # Communicate each layer's *change* in activation as a scaled, rounded "spike" signal,
        # then re-integrate (cumsum) after the weight multiply to recover the running layer input.
        deltas = np.diff(np.insert(real_activations, 0, 0, axis=0), axis=0)  # (n_steps, n_in)
        spikes = quantize_sequence(k*deltas, method=quantization_method, rng=rng)  # (n_steps, n_in)
        delta_inputs = (spikes/k).dot(w)  # (n_steps, n_out)
        cumulated_inputs = np.cumsum(delta_inputs, axis=0)+b  # (n_steps, n_out)
        real_activations = activation_function(cumulated_inputs, output_activations if w is weights[-1] else hidden_activations)  # (n_steps, n_out)
        activations += [spikes, cumulated_inputs, real_activations]
    if len(scales) == len(weights)+1:
        activations[-1] *= scales[-1]  # Optional extra final scale rescales the output activations
    return activations
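
Below is a minimal, self-contained sketch of how this function might be called. The quantize_sequence and activation_function helpers here are hypothetical stand-ins (a herding-style quantizer that carries rounding error from frame to frame, and a simple activation lookup), not the pdnn project's own implementations; only their call signatures are inferred from the code above.

# --- hypothetical stand-ins for the project's helpers (assumptions, not pdnn's code) ---

def quantize_sequence(x, method='herd', rng=None):
    # Herding-style rounding: keep a running residual per unit so that rounding error in
    # one frame is carried into the next frame instead of being discarded.
    # rng is unused here; this stand-in rule is deterministic.
    assert method == 'herd', "This stand-in only sketches the 'herd' method"
    residual = np.zeros(x.shape[1])
    spikes = np.empty_like(x)
    for t, frame in enumerate(x):
        residual += frame
        spikes[t] = np.round(residual)
        residual -= spikes[t]
    return spikes

def activation_function(x, name):
    # Minimal activation lookup; the real project presumably supports more options.
    return {'relu': lambda v: np.maximum(0., v), 'linear': lambda v: v}[name](x)

# --- example call ---

rng = np.random.RandomState(0)
n_frames, n_in, n_hidden, n_out = 20, 8, 16, 4
inputs = rng.randn(n_frames, n_in)
weights = [rng.randn(n_in, n_hidden)*0.1, rng.randn(n_hidden, n_out)*0.1]

acts = sparse_temporal_forward_pass(inputs, weights, scales=[10., 10.])
print([a.shape for a in acts])  # 1 + 3*len(weights) arrays, one row per frame
print(acts[-1][:3])             # network output for the first three frames

Larger scales make the rounded spike train track the underlying temporal differences more finely, so the quantized forward pass stays closer to a dense forward pass; smaller scales trade that accuracy for a sparser, cheaper spike signal.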