test_temporal_data_tasks.py (source code)


Project: keras    Author: GeekLiB
import numpy as np

from keras.models import Sequential
from keras.layers.core import Activation, TimeDistributedDense
from keras.layers.embeddings import Embedding


def test_masked_temporal():
    '''
    Confirm that even with masking on both inputs and outputs, the
    cross-entropy losses are of the expected scale.

    In this task, the inputs are variable-length sequences of integers from
    1 to 9, and a random subset of the outputs is unmasked. Each unmasked
    output has a 50% probability of being the input number unchanged, and a
    50% probability of being (2 * input) % 10.

    The best achievable cross-entropy loss should then be -log(0.5) ~= 0.69.
    '''
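    # A note on the expected optimum (added for clarity, not in the original
    # test): for inputs 1-9, the two candidate targets i and (2 * i) % 10
    # never coincide, since 2*i == i (mod 10) only when i is a multiple of
    # 10. The best a model can do is put probability 0.5 on each candidate,
    # for a per-step cross-entropy of -log(0.5) ~= 0.6931.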
    model = Sequential()
    model.add(Embedding(10, 20, mask_zero=True, input_length=20))
    model.add(TimeDistributedDense(10))
    model.add(Activation('softmax'))
    model.compile(loss='categorical_crossentropy',
                  optimizer='adam',
                  sample_weight_mode='temporal')

    X = np.random.random_integers(1, 9, (50000, 20))
    for rowi in range(X.shape[0]):
        padding = np.random.random_integers(X.shape[1] // 2)  # integer division keeps this valid on Python 3
        X[rowi, :padding] = 0

    # 50% of the time the correct output is the input.
    # The other 50% of the time it's 2 * input % 10
    y = (X * np.random.random_integers(1, 2, X.shape)) % 10
    Y = np.zeros((y.size, 10), dtype='int32')
    for i, target in enumerate(y.flat):
        Y[i, target] = 1
    Y = Y.reshape(y.shape + (10,))

    # Mask 50% of the outputs via sample weights
    sample_weight = np.random.random_integers(0, 1, y.shape)
    print('X shape:', X.shape)
    print('Y shape:', Y.shape)
    print('sample_weight shape:', sample_weight.shape)

    history = model.fit(X, Y, validation_split=0.05,
                        sample_weight=sample_weight,
                        verbose=1, nb_epoch=2)
    ground_truth = -np.log(0.5)
    assert(np.abs(history.history['val_loss'][-1] - ground_truth) < 0.06)
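
The listing above targets the pre-1.0 Keras API: TimeDistributedDense, nb_epoch, and NumPy's long-removed np.random.random_integers. Below is a minimal, untested sketch of the same task against the tf.keras 2.x API; the substitutions (TimeDistributed(Dense(...)), epochs, np.random.randint, and a 2-D sample_weight without the old sample_weight_mode='temporal' compile flag) are my assumptions about the modern equivalents, not part of the original test.

import numpy as np
from tensorflow.keras.layers import Activation, Dense, Embedding, TimeDistributed
from tensorflow.keras.models import Sequential

model = Sequential()
model.add(Embedding(10, 20, mask_zero=True, input_length=20))
model.add(TimeDistributed(Dense(10)))   # TimeDistributedDense was removed after Keras 0.x
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')

# np.random.randint has an exclusive upper bound, unlike the removed
# np.random.random_integers used in the original listing.
X = np.random.randint(1, 10, (50000, 20))
for rowi in range(X.shape[0]):
    padding = np.random.randint(1, X.shape[1] // 2 + 1)
    X[rowi, :padding] = 0

# 50%: target is the input; 50%: target is (2 * input) % 10
y = (X * np.random.randint(1, 3, X.shape)) % 10
Y = np.eye(10, dtype='int32')[y]        # one-hot, shape (50000, 20, 10)

# 2-D (sample, timestep) weights mask a random half of the output steps.
sample_weight = np.random.randint(0, 2, y.shape).astype('float32')

history = model.fit(X, Y, validation_split=0.05,
                    sample_weight=sample_weight,
                    verbose=1, epochs=2)
assert abs(history.history['val_loss'][-1] + np.log(0.5)) < 0.06

Recent tf.keras should apply a 2-D sample_weight per timestep automatically; on standalone Keras 2.x you would instead keep sample_weight_mode='temporal' in compile(), as the original does.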