Logistic regression with TensorFlow 2.0?

Posted on 2021-01-29 15:21:28

I am trying to build a multi-class logistic regression with TensorFlow 2.0, and I have written what I believe is the correct code, but it is not giving good results. My accuracy is stuck around 0.10, which is chance level for 10 classes, and the loss is not decreasing either. I hope someone can help me out here.

Here is the code I have written so far. Please point out what I am doing wrong and what I need to change to get the model working. Thanks!

from tensorflow.keras.datasets import fashion_mnist
from sklearn.model_selection import train_test_split
import tensorflow as tf

# Load Fashion-MNIST and scale pixel values to [0, 1]
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
x_train, x_test = x_train/255., x_test/255.

# Hold out 15% of the training data for validation, then flatten
# each 28x28 image into a length-784 vector
x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=0.15)
x_train = tf.reshape(x_train, shape=(-1, 784))
x_test  = tf.reshape(x_test, shape=(-1, 784))

# Parameters of a single linear layer: 784 inputs -> 10 class scores
weights = tf.Variable(tf.random.normal(shape=(784, 10), dtype=tf.float64))
biases  = tf.Variable(tf.random.normal(shape=(10,), dtype=tf.float64))

def logistic_regression(x):
    lr = tf.add(tf.matmul(x, weights), biases)
    return tf.nn.sigmoid(lr)

def cross_entropy(y_true, y_pred):
    y_true = tf.one_hot(y_true, 10)
    loss = tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=y_pred)
    return tf.reduce_mean(loss)

def accuracy(y_true, y_pred):
    y_true = tf.cast(y_true, dtype=tf.int32)
    preds = tf.cast(tf.argmax(y_pred, axis=1), dtype=tf.int32)
    preds = tf.equal(y_true, preds)
    return tf.reduce_mean(tf.cast(preds, dtype=tf.float32))

def grad(x, y):
    # Run the forward pass under a tape so the loss can be
    # differentiated w.r.t. weights and biases
    with tf.GradientTape() as tape:
        y_pred = logistic_regression(x)
        loss_val = cross_entropy(y, y_pred)
    return tape.gradient(loss_val, [weights, biases])

epochs = 1000
learning_rate = 0.01
batch_size = 128

# An endless, shuffled stream of 128-example batches
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
dataset = dataset.repeat().shuffle(x_train.shape[0]).batch(batch_size)

optimizer = tf.optimizers.SGD(learning_rate)

for epoch, (batch_xs, batch_ys) in enumerate(dataset.take(epochs), 1):
    gradients = grad(batch_xs, batch_ys)
    optimizer.apply_gradients(zip(gradients, [weights, biases]))

    y_pred = logistic_regression(batch_xs)
    loss = cross_entropy(batch_ys, y_pred)
    acc = accuracy(batch_ys, y_pred)
    print("step: %i, loss: %f, accuracy: %f" % (epoch, loss, acc))

Output at the final step:

    step: 1000, loss: 2.458979, accuracy: 0.101562
1 Answer

面试哥 · 2021-01-29

    The model is not converging, and the problem seems to be that you are feeding a sigmoid activation directly into tf.nn.softmax_cross_entropy_with_logits. The documentation for tf.nn.softmax_cross_entropy_with_logits says:

    Warning: This op expects unscaled logits, since it performs a softmax on logits internally for efficiency. Do not call this op with the output of softmax, as it will produce incorrect results.

    So you should not apply softmax, sigmoid, relu, tanh, or any other activation to the output of the previous layer before passing it to tf.nn.softmax_cross_entropy_with_logits. For a more detailed explanation of when to use a sigmoid or softmax output activation, see here.
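
    To make that warning concrete, here is a minimal, self-contained check (my illustration, not from the original post; the numbers are only for demonstration) showing that squashing the scores before the op changes the loss value:

    import tensorflow as tf

    logits = tf.constant([[2.0, 1.0, 0.1]])   # raw, unscaled scores
    labels = tf.constant([[1.0, 0.0, 0.0]])   # one-hot target

    # Correct: pass raw logits; the op applies softmax internally.
    loss_ok = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)

    # Incorrect: applying sigmoid first compresses the scores, so the loss
    # (and its gradients) no longer match the true cross-entropy
    # (roughly 0.42 vs 0.94 for this example).
    loss_bad = tf.nn.softmax_cross_entropy_with_logits(
        labels=labels, logits=tf.nn.sigmoid(logits))

    print(float(loss_ok[0]), float(loss_bad[0]))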

    So, by replacing return tf.nn.sigmoid(lr) with return lr in the logistic_regression function, the model converges.
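
    One practical consequence of this fix (my note, not part of the original answer): since logistic_regression now returns raw logits, apply tf.nn.softmax outside the loss whenever you want actual class probabilities, for example at evaluation time:

    probs = tf.nn.softmax(logistic_regression(x_test))   # (10000, 10) class probabilities
    pred_classes = tf.argmax(probs, axis=1)              # predicted label per image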

    Below is a working example of the code with the fix above. I also renamed the variable epochs to n_batches, since your training loop actually runs through 1000 batches rather than 1000 epochs (and I raised it to 10000, since there were signs that more iterations were needed).

    from tensorflow.keras.datasets import fashion_mnist
    from sklearn.model_selection import train_test_split
    import tensorflow as tf
    
    (x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
    x_train, x_test = x_train/255., x_test/255.
    
    x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=0.15)
    x_train = tf.reshape(x_train, shape=(-1, 784))
    x_test  = tf.reshape(x_test, shape=(-1, 784))
    
    weights = tf.Variable(tf.random.normal(shape=(784, 10), dtype=tf.float64))
    biases  = tf.Variable(tf.random.normal(shape=(10,), dtype=tf.float64))
    
    def logistic_regression(x):
        lr = tf.add(tf.matmul(x, weights), biases)
        # return tf.nn.sigmoid(lr)   # old line: squashing the logits here breaks the loss
        return lr                    # return the raw logits instead
    
    
    def cross_entropy(y_true, y_pred):
        y_true = tf.one_hot(y_true, 10)
        loss = tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=y_pred)
        return tf.reduce_mean(loss)
    
    def accuracy(y_true, y_pred):
        y_true = tf.cast(y_true, dtype=tf.int32)
        preds = tf.cast(tf.argmax(y_pred, axis=1), dtype=tf.int32)
        preds = tf.equal(y_true, preds)
        return tf.reduce_mean(tf.cast(preds, dtype=tf.float32))
    
    def grad(x, y):
        with tf.GradientTape() as tape:
            y_pred = logistic_regression(x)
            loss_val = cross_entropy(y, y_pred)
        return tape.gradient(loss_val, [weights, biases])
    
    n_batches = 10000
    learning_rate = 0.01
    batch_size = 128
    
    dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
    dataset = dataset.repeat().shuffle(x_train.shape[0]).batch(batch_size)
    
    optimizer = tf.optimizers.SGD(learning_rate)
    
    for batch_numb, (batch_xs, batch_ys) in enumerate(dataset.take(n_batches), 1):
        gradients = grad(batch_xs, batch_ys)
        optimizer.apply_gradients(zip(gradients, [weights, biases]))
    
        y_pred = logistic_regression(batch_xs)
        loss = cross_entropy(batch_ys, y_pred)
        acc = accuracy(batch_ys, y_pred)
        print("Batch number: %i, loss: %f, accuracy: %f" % (batch_numb, loss, acc))
    
    (removed printouts)
    >> Batch number: 1000, loss: 2.868473, accuracy: 0.546875
    (removed printouts)
    >> Batch number: 10000, loss: 1.482554, accuracy: 0.718750
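
    As a side note (my addition, not part of the original answer): TensorFlow also provides tf.nn.sparse_softmax_cross_entropy_with_logits, which accepts the integer class labels directly and makes the tf.one_hot step unnecessary. A sketch of the same loss function written that way:

    def cross_entropy_sparse(y_true, logits):
        # y_true holds integer class ids 0-9; no one-hot encoding needed
        loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=tf.cast(y_true, tf.int32), logits=logits)
        return tf.reduce_mean(loss)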
    

