C2_W2_Assignment (Andrew Ng's course, with a PyTorch implementation)

Neural Networks for Handwritten Digit Recognition, Multiclass

In this exercise, you will use a neural network to recognize the hand-written digits 0-9.

Outline

  • 1 - Packages
  • 2 - ReLU Activation
  • 3 - Softmax Function
    • Exercise 1
  • 4 - Neural Networks
    • 4.1 Problem Statement
    • 4.2 Dataset
    • 4.3 Model representation
    • 4.4 Tensorflow Model Implementation
    • 4.5 Softmax placement
      • Exercise 2

1 - Packages

First, let’s run the cell below to import all the packages that you will need during this assignment.

  • numpy is the fundamental package for scientific computing with Python.
  • matplotlib is a popular library to plot graphs in Python.
  • tensorflow is a popular platform for machine learning.
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.activations import linear, relu, sigmoid
%matplotlib widget
import matplotlib.pyplot as plt
plt.style.use('./deeplearning.mplstyle')

import logging
logging.getLogger("tensorflow").setLevel(logging.ERROR)
tf.autograph.set_verbosity(0)

from public_tests import * 

from autils import *
from lab_utils_softmax import plt_softmax
np.set_printoptions(precision=2)

2 - ReLU Activation

This week, a new activation was introduced, the Rectified Linear Unit (ReLU).

$$a = \max(0,z) \quad\quad \text{ReLU function}$$

plt_act_trio()

(figure: plt_act_trio output, plots of the activation functions)

(figure: lecture example applying the ReLU)

The example from the lecture on the right shows an application of the ReLU. In this example, the derived “awareness” feature is not binary but has a continuous range of values. The sigmoid is best for on/off or binary situations. The ReLU provides a continuous linear relationship. Additionally, it has an ‘off’ range where the output is zero.

The “off” feature makes the ReLU a non-linear activation. Why is this needed? It enables multiple units to contribute to the resulting function without interfering with one another, as the short sketch below illustrates. This is examined more in the supporting optional lab.
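As a minimal sketch (illustrative only, reusing the numpy import from above): two ReLU units with different thresholds each shape one segment of a piecewise-linear function while staying ‘off’ elsewhere, so their contributions do not interfere.

def relu(z):
    return np.maximum(0, z)      # elementwise max(0, z)

x = np.linspace(0, 4, 5)         # [0. 1. 2. 3. 4.]
unit1 = relu(x - 1)              # 'off' until x > 1
unit2 = relu(x - 3)              # 'off' until x > 3
print(unit1 + unit2)             # [0. 0. 1. 2. 4.], a piecewise-linear shape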

3 - Softmax Function

A multiclass neural network generates N outputs. One output is selected as the predicted answer. In the output layer, a vector $\mathbf{z}$ is generated by a linear function and fed into a softmax function. The softmax function converts $\mathbf{z}$ into a probability distribution as described below. After applying softmax, each output will be between 0 and 1, and the outputs will sum to 1, so they can be interpreted as probabilities. The larger inputs to the softmax will correspond to larger output probabilities.

The softmax function can be written:
$$a_j = \frac{e^{z_j}}{\sum_{k=0}^{N-1} e^{z_k}} \tag{1}$$

Where $z = \mathbf{w} \cdot \mathbf{x} + b$ and $N$ is the number of features/categories in the output layer.

Exercise 1

Let’s create a NumPy implementation:

# UNQ_C1
# GRADED CELL: my_softmax

def my_softmax(z):  
    """ Softmax converts a vector of values to a probability distribution.
    Args:
      z (ndarray (N,))  : input data, N features
    Returns:
      a (ndarray (N,))  : softmax of z
    """    
    ### START CODE HERE ### 
    ez = np.exp(z)
    a = ez/np.sum(ez)
    ### END CODE HERE ### 
    return a
z = np.array([1., 2., 3., 4.])
a = my_softmax(z)
atf = tf.nn.softmax(z)
print(f"my_softmax(z):         {a}")
print(f"tensorflow softmax(z): {atf}")

# BEGIN UNIT TEST  
test_my_softmax(my_softmax)
# END UNIT TEST  
my_softmax(z):         [0.03 0.09 0.24 0.64]
tensorflow softmax(z): [0.03 0.09 0.24 0.64]
All tests passed.
Hints: One implementation uses a for loop to first build the denominator, then a second loop to calculate each output.
def my_softmax(z):  
    N = len(z)
    a =                     # initialize a to zeros 
    ez_sum =                # initialize sum to zero
    for k in range(N):      # loop over number of outputs             
        ez_sum +=           # sum exp(z[k]) to build the shared denominator      
    for j in range(N):      # loop over number of outputs again                
        a[j] =              # divide the exp of each output by the denominator   
    return(a)
Solution code:
def my_softmax(z):  
    N = len(z)
    a = np.zeros(N)
    ez_sum = 0
    for k in range(N):                
        ez_sum += np.exp(z[k])       
    for j in range(N):                
        a[j] = np.exp(z[j])/ez_sum   
    return(a)

Or, a vector implementation:

def my_softmax(z):  
    ez = np.exp(z)              
    a = ez/np.sum(ez)           
    return(a)

Below, vary the values of the z inputs. Note in particular how the exponential in the numerator magnifies small differences in the values. Note as well that the output values sum to one.
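Here is a small numeric check of these properties (reusing my_softmax from above): softmax is unchanged by adding a constant to all inputs, while scaling the inputs magnifies the differences between outputs.

z = np.array([1., 2., 3., 4.])
print(my_softmax(z))           # [0.03 0.09 0.24 0.64]
print(my_softmax(z + 100.))    # unchanged: softmax(z + c) == softmax(z)
print(my_softmax(2. * z))      # differences magnified toward the largest input
print(np.sum(my_softmax(z)))   # 1.0

The plt_softmax cell below lets you explore this further.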

plt.close("all")
plt_softmax(my_softmax)


4 - Neural Networks

In last week's assignment, you implemented a neural network to do binary classification. This week you will extend that to multiclass classification. This will utilize the softmax activation.

4.1 Problem Statement

In this exercise, you will use a neural network to recognize ten handwritten digits, 0-9. This is a multiclass classification task where one of n choices is selected. Automated handwritten digit recognition is widely used today - from recognizing zip codes (postal codes) on mail envelopes to recognizing amounts written on bank checks.

4.2 Dataset

You will start by loading the dataset for this task.

  • The load_data() function shown below loads the data into variables X and y.

  • The data set contains 5000 training examples of handwritten digits$^1$.

    • Each training example is a 20-pixel x 20-pixel grayscale image of the digit.
      • Each pixel is represented by a floating-point number indicating the grayscale intensity at that location.
      • The 20 by 20 grid of pixels is “unrolled” into a 400-dimensional vector.
      • Each training example becomes a single row in our data matrix X.
      • This gives us a 5000 x 400 matrix X where every row is a training example of a handwritten digit image.

$$X = \begin{pmatrix} --- (x^{(1)}) --- \\ --- (x^{(2)}) --- \\ \vdots \\ --- (x^{(m)}) --- \end{pmatrix}$$

  • The second part of the training set is a 5000 x 1 dimensional vector y that contains labels for the training set.
    • y = 0 if the image is of the digit 0, y = 4 if the image is of the digit 4, and so on.

$^1$ This is a subset of the MNIST handwritten digit dataset (http://yann.lecun.com/exdb/mnist/).

# load dataset
X, y = load_data()
4.2.1 View the variables

Let's get more familiar with your dataset.

  • A good place to start is to print out each variable and see what it contains.

The code below prints the first element in the variables X and y.

print ('The first element of X is: ', X[0])
The first element of X is:  [ 0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
  ...
  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00  0.00e+00
  0.00e+00]
print ('The first element of y is: ', y[0,0])
print ('The last element of y is: ', y[-1,0])
The first element of y is:  0
The last element of y is:  9
4.2.2 Check the dimensions of your variables

Another way to get familiar with your data is to view its dimensions. Please print the shape of X and y and see how many training examples you have in your dataset.

print ('The shape of X is: ' + str(X.shape))
print ('The shape of y is: ' + str(y.shape))
The shape of X is: (5000, 400)
The shape of y is: (5000, 1)
4.2.3 Visualizing the Data

You will begin by visualizing a subset of the training set.

  • In the cell below, the code randomly selects 64 rows from X, maps each row back to a 20 pixel by 20 pixel grayscale image and displays the images together.
  • The label for each image is displayed above the image.
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
# You do not need to modify anything in this cell

m, n = X.shape

fig, axes = plt.subplots(8,8, figsize=(5,5))
fig.tight_layout(pad=0.13,rect=[0, 0.03, 1, 0.91]) #[left, bottom, right, top]

#fig.tight_layout(pad=0.5)
widgvis(fig)
for i,ax in enumerate(axes.flat):
    # Select random indices
    random_index = np.random.randint(m)
    
    # Select rows corresponding to the random indices and
    # reshape the image
    X_random_reshaped = X[random_index].reshape((20,20)).T
    
    # Display the image
    ax.imshow(X_random_reshaped, cmap='gray')
    
    # Display the label above the image
    ax.set_title(y[random_index,0])
    ax.set_axis_off()
fig.suptitle("Label, image", fontsize=14)


4.3 Model representation

The neural network you will use in this assignment is shown in the figure below.

  • This has two dense layers with ReLU activations followed by an output layer with a linear activation.
    • Recall that our inputs are pixel values of digit images.
    • Since the images are of size $20\times20$, this gives us $400$ inputs.

(figure: the three-layer network architecture)

  • The parameters have dimensions that are sized for a neural network with $25$ units in layer 1, $15$ units in layer 2 and $10$ output units in layer 3, one for each digit.

    • Recall that the dimensions of these parameters are determined as follows:

      • If a network has $s_{in}$ units in a layer and $s_{out}$ units in the next layer, then
        • $W$ will be of dimension $s_{in} \times s_{out}$.
        • $b$ will be a vector with $s_{out}$ elements.
    • Therefore, the shapes of W and b are:

      • layer1: The shape of W1 is (400, 25) and the shape of b1 is (25,)
      • layer2: The shape of W2 is (25, 15) and the shape of b2 is (15,)
      • layer3: The shape of W3 is (15, 10) and the shape of b3 is (10,)

Note: The bias vector b could be represented as a 1-D (n,) or 2-D (n,1) array. Tensorflow utilizes a 1-D representation and this lab will maintain that convention.
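A quick illustration of the two conventions (a hypothetical snippet, not part of the assignment):

b_1d = np.zeros(3)        # shape (3,), the 1-D convention Tensorflow uses
b_2d = np.zeros((3, 1))   # shape (3, 1), the 2-D alternative
print(b_1d.shape, b_2d.shape)   # (3,) (3, 1)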

4.4 Tensorflow Model Implementation

Tensorflow models are built layer by layer. A layer's input dimensions ($s_{in}$ above) are calculated for you. You specify a layer's output dimensions and this determines the next layer's input dimension. The input dimension of the first layer is derived from the size of the input data specified in the model.fit statement below.

Note: It is also possible to add an input layer that specifies the input dimension of the first layer. For example:
tf.keras.Input(shape=(400,)), #specify input shape
We will include that here to illuminate some model sizing.

4.5 Softmax placement

As described in the lecture and the optional softmax lab, numerical stability is improved if the softmax is grouped with the loss function rather than the output layer during training. This has implications when building the model and using the model.
Building:

  • The final Dense layer should use a ‘linear’ activation. This is effectively no activation.
  • The model.compile statement will indicate this by including from_logits=True.
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
  • This does not impact the form of the target. In the case of SparseCategoricalCrossentropy, the target is the expected digit, 0-9.

Using the model:

  • The outputs are not probabilities. If output probabilities are desired, apply a softmax function, as in the short sketch below.
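As a rough sketch of why grouping softmax with the loss helps numerically (illustrative only, not the exact computation TensorFlow performs): a naive softmax overflows for large logits, while a loss-fused form can shift the logits first, which leaves the result unchanged because softmax is shift-invariant.

z_big = np.array([1000., 1000.])
naive = np.exp(z_big) / np.sum(np.exp(z_big))        # overflows: [nan nan] (numpy warns)
shifted = z_big - np.max(z_big)                      # shift by the max logit
stable = np.exp(shifted) / np.sum(np.exp(shifted))   # [0.5 0.5]
print(naive, stable)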

Exercise 2

Below, use the Keras Sequential model and Dense layers with ReLU activations to construct the three-layer network described above.

# UNQ_C2
# GRADED CELL: Sequential model
tf.random.set_seed(1234) # for consistent results
model = Sequential(
    [               
        ### START CODE HERE ### 
        tf.keras.Input(shape=(400,)),
        Dense(25,activation='relu'),
        Dense(15,activation='relu'),
        Dense(10,activation='linear')
        ### END CODE HERE ### 
    ], name = "my_model" 
)
model.summary()
Model: "my_model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 25)                10025     
_________________________________________________________________
dense_1 (Dense)              (None, 15)                390       
_________________________________________________________________
dense_2 (Dense)              (None, 10)                160       
=================================================================
Total params: 10,575
Trainable params: 10,575
Non-trainable params: 0
_________________________________________________________________
Expected Output: The `model.summary()` function displays a useful summary of the model. Note, the names of the layers may vary as they are auto-generated unless the name is specified.
Model: "my_model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
L1 (Dense)                   (None, 25)                10025     
_________________________________________________________________
L2 (Dense)                   (None, 15)                390       
_________________________________________________________________
L3 (Dense)                   (None, 10)                160       
=================================================================
Total params: 10,575
Trainable params: 10,575
Non-trainable params: 0
_________________________________________________________________
Hints:
tf.random.set_seed(1234)
model = Sequential(
    [               
        ### START CODE HERE ### 
        tf.keras.Input(shape=(400,)),     # @REPLACE 
        Dense(25, activation='relu', name = "L1"), # @REPLACE 
        Dense(15, activation='relu',  name = "L2"), # @REPLACE  
        Dense(10, activation='linear', name = "L3"),  # @REPLACE 
        ### END CODE HERE ### 
    ], name = "my_model" 
)
# BEGIN UNIT TEST     
test_model(model, 10, 400)
# END UNIT TEST     
All tests passed!

The parameter counts shown in the summary correspond to the number of elements in the weight and bias arrays as shown below.
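The counts can be checked by hand: each Dense layer has $s_{in} \times s_{out}$ weights plus $s_{out}$ biases.

print(400 * 25 + 25)   # 10025 parameters in layer 1
print( 25 * 15 + 15)   # 390 parameters in layer 2
print( 15 * 10 + 10)   # 160 parameters in layer 3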

Let’s further examine the weights to verify that tensorflow produced the same dimensions as we calculated above.

[layer1, layer2, layer3] = model.layers
#### Examine Weights shapes
W1,b1 = layer1.get_weights()
W2,b2 = layer2.get_weights()
W3,b3 = layer3.get_weights()
print(f"W1 shape = {W1.shape}, b1 shape = {b1.shape}")
print(f"W2 shape = {W2.shape}, b2 shape = {b2.shape}")
print(f"W3 shape = {W3.shape}, b3 shape = {b3.shape}")
W1 shape = (400, 25), b1 shape = (25,)
W2 shape = (25, 15), b2 shape = (15,)
W3 shape = (15, 10), b3 shape = (10,)

Expected Output

W1 shape = (400, 25), b1 shape = (25,)  
W2 shape = (25, 15), b2 shape = (15,)  
W3 shape = (15, 10), b3 shape = (10,)

The following code:

  • defines a loss function, SparseCategoricalCrossentropy, and indicates the softmax should be included with the loss calculation by adding from_logits=True
  • defines an optimizer. A popular choice is Adaptive Moment (Adam), which was described in lecture.
model.compile(
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
)

history = model.fit(
    X,y,
    epochs=40
)
Epoch 1/40
157/157 [==============================] - 2s 1ms/step - loss: 1.7107
Epoch 2/40
157/157 [==============================] - 0s 1ms/step - loss: 0.7461
...
Epoch 40/40
157/157 [==============================] - 0s 847us/step - loss: 0.0329
Epochs and batches

In the fit statement above, the number of epochs was set to 40. This specifies that the entire data set should be applied during training 40 times. During training, you see output describing the progress of training that looks like this:

Epoch 1/40
157/157 [==============================] - 0s 1ms/step - loss: 2.2770

The first line, Epoch 1/40, describes which epoch the model is currently running. For efficiency, the training data set is broken into ‘batches’. The default size of a batch in Tensorflow is 32. There are 5000 examples in our data set, or roughly 157 batches, as the short computation below shows. The notation on the 2nd line, 157/157 [====, describes which batch has been executed.
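The batch count can be checked directly (a one-off computation, not part of the assignment):

import math
print(math.ceil(5000 / 32))   # 157 batches per epoch with the default batch size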

Loss (cost)

In Course 1, we learned to track the progress of gradient descent by monitoring the cost. Ideally, the cost will decrease as the number of iterations of the algorithm increases. Tensorflow refers to the cost as loss. Above, you saw the loss displayed each epoch as model.fit was executing. The .fit method returns a variety of metrics including the loss, captured in the history variable above. This can be used to examine the loss in a plot as shown below.
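plot_loss_tf below is a course helper; a minimal equivalent using only matplotlib (assuming the history object returned by model.fit above) would be:

plt.figure()
plt.plot(history.history['loss'])   # one loss value per epoch
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()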

plot_loss_tf(history)


Prediction

To make a prediction, use Keras predict. Below, X[1015] contains an image of a two.

image_of_two = X[1015]
display_digit(image_of_two)

prediction = model.predict(image_of_two.reshape(1,400))  # prediction

print(f" predicting a Two: \n{prediction}")
print(f" Largest Prediction index: {np.argmax(prediction)}")


 predicting a Two: 
[[ -8.45  -3.27   1.03  -2.2  -10.83  -9.65  -9.07  -2.18  -4.75  -6.29]]
 Largest Prediction index: 2

The largest output is prediction[2], indicating the predicted digit is a ‘2’. If the problem only requires a selection, that is sufficient. Use NumPy argmax to select it. If the problem requires a probability, a softmax is required:

prediction_p = tf.nn.softmax(prediction)

print(f" predicting a Two. Probability vector: \n{prediction_p}")
print(f"Total of predictions: {np.sum(prediction_p):0.3f}")
 predicting a Two. Probability vector: 
[[6.92e-05 1.24e-02 9.12e-01 3.58e-02 6.41e-06 2.10e-05 3.74e-05 3.67e-02
  2.79e-03 6.01e-04]]
Total of predictions: 1.000

To return an integer representing the predicted target, you want the index of the largest probability. This is accomplished with the Numpy argmax function.
要返回一个表示预测目标的整数,您需要最大概率的索引。这是通过Numpy argmax函数完成的。

yhat = np.argmax(prediction_p)

print(f"np.argmax(prediction_p): {yhat}")
np.argmax(prediction_p): 2

Let’s compare the predictions vs the labels for a random sample of 64 digits. This takes a moment to run.

import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
# You do not need to modify anything in this cell

m, n = X.shape

fig, axes = plt.subplots(8,8, figsize=(5,5))
fig.tight_layout(pad=0.13,rect=[0, 0.03, 1, 0.91]) #[left, bottom, right, top]
widgvis(fig)
for i,ax in enumerate(axes.flat):
    # Select random indices
    random_index = np.random.randint(m)
    
    # Select rows corresponding to the random indices and
    # reshape the image
    X_random_reshaped = X[random_index].reshape((20,20)).T
    
    # Display the image
    ax.imshow(X_random_reshaped, cmap='gray')
    
    # Predict using the Neural Network
    prediction = model.predict(X[random_index].reshape(1,400))
    prediction_p = tf.nn.softmax(prediction)
    yhat = np.argmax(prediction_p)
    
    # Display the label above the image
    ax.set_title(f"{y[random_index,0]},{yhat}",fontsize=10)
    ax.set_axis_off()
fig.suptitle("Label, yhat", fontsize=14)
plt.show()


Let’s look at some of the errors.

Note: increasing the number of training epochs can eliminate the errors on this data set.

print( f"{display_errors(model,X,y)} errors out of {len(X)} images")
14 errors out of 5000 images


Congratulations!

You have successfully built and utilized a neural network to do multiclass classification.

Classifying the MNIST Handwritten Digit Dataset with PyTorch

import torch
import torchvision
from torch.utils.tensorboard import SummaryWriter
from torch.utils.data import DataLoader , TensorDataset
from torch import nn

Before building the model, let's get to know two packages: SummaryWriter and torchvision.

SummaryWriter: per the official docs, it writes entries directly to event files in log_dir for TensorBoard to consume. SummaryWriter provides a high-level API to create an event file in a given directory and add summaries and events to it. The class updates the file contents asynchronously, which allows a training program to add data to the file directly from the training loop without slowing down training.

In short: it records data during training, such as the loss and accuracy, so they can be visualized.
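A minimal usage sketch (with a hypothetical log directory and metric name):

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("./logs_demo")       # event files are created under ./logs_demo
for step in range(100):
    writer.add_scalar("demo/loss", 1.0 / (step + 1), step)   # log one scalar per step
writer.close()                              # flush and close the event file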

torchvision: per the official docs, torchvision is a library for building computer-vision models and loading data. It includes datasets, model architectures, data transforms, and more.

Here we will use torchvision to download the MNIST dataset and preprocess the data with torchvision.transforms. In earlier experiments we built the dataset ourselves with TensorDataset.

# Define the training device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Download the dataset
train_dataset = torchvision.datasets.MNIST(root='./data',train=True,transform=torchvision.transforms.ToTensor(),
                                           download=True)
test_dataset = torchvision.datasets.MNIST(root='./data',train=False,transform=torchvision.transforms.ToTensor(),
                                            download=True)

# Dataset lengths
train_data_size = len(train_dataset)
test_data_size = len(test_dataset)
  • train=True loads the training split; train=False loads the test split.
  • transform converts the data to the Tensor type.

Next, let's look at the shape of the dataset.

print(train_dataset.data.shape)
print(train_dataset.targets.shape)
torch.Size([60000, 28, 28])
torch.Size([60000])

From the output we can see that the MNIST training set has 60,000 samples, each a 28×28 grid of pixels (accessed via .data here, which holds the raw 8-bit intensities; the ToTensor transform only rescales values to the 0-1 range when samples are drawn through the dataset). Below we load the dataset with DataLoader.

# Load the training and test sets.
train_dataloader_in = DataLoader(train_dataset,64)
test_dataloader = DataLoader(test_dataset,64)
print(train_dataloader_in.dataset.data.shape)
print(train_dataloader_in.dataset.targets.shape)

torch.Size([60000, 28, 28])
torch.Size([60000])

Note that, unlike the 20×20 dataset in the first part, this dataset contains 60,000 images of 28×28 pixels. At this point the MNIST dataset is ready. Because our model expects 2-D input, below we reshape the data to (-1, 28*28); the -1 is computed automatically from the data, as the short sketch after this paragraph shows.
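A tiny illustration of how the -1 is inferred (a hypothetical tensor, not the dataset):

t = torch.arange(6)               # tensor([0, 1, 2, 3, 4, 5])
print(t.reshape(-1, 3).shape)     # torch.Size([2, 3]): the -1 becomes 6 // 3 = 2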

X = train_dataloader_in.dataset.data.reshape(-1,28*28)
y = train_dataloader_in.dataset.targets
Xt = test_dataloader.dataset.data.reshape(-1,28*28)
yt = test_dataloader.dataset.targets
# Convert dtypes so the tensors feed cleanly into the network.
# (.float()/.long() return converted copies and, unlike torch.tensor(existing_tensor),
# do not trigger the "copy construct from a tensor" UserWarning.)
X = X.float()     # float32 inputs
y = y.long()      # int64 targets for CrossEntropyLoss
Xt = Xt.float()
yt = yt.long()
print(X.shape)
print(y.shape)
torch.Size([60000, 784])
torch.Size([60000])

Use TensorDataset and DataLoader to build the dataset. As mentioned before:
TensorDataset packs tensors together (data in Tensor format), much like Python's zip;
DataLoader feeds the data into the model batch by batch.

train_dataset_re = TensorDataset(X,y)
train_dataloader = DataLoader(train_dataset_re,batch_size=64,shuffle=False)

test_dataloader_re = TensorDataset(Xt,yt)
test_dataloader = DataLoader(test_dataloader_re,batch_size=64,shuffle=False)

Build the network model.
Note: tensorboard is launched from the terminal:
tensorboard --logdir=path

class MinistNet(nn.Module):
    def __init__(self):
        super(MinistNet,self).__init__()
        self.model = nn.Sequential(
            nn.Linear(784,25),
            nn.ReLU(),
            nn.Linear(25,15),
            nn.ReLU(),
            nn.Linear(15,10)
        )

    def forward(self,x):
        x = self.model(x)
        return x

The network model is now built. Note the use of SummaryWriter in the code below: it logs during training after gradient updates, and again after each round of testing completes.

minist_net = MinistNet()   # instantiate under a new name so the class is not shadowed
minist_net = minist_net.to(device)

# Loss function
loss_fn = nn.CrossEntropyLoss()
loss_fn = loss_fn.to(device)

# Optimizer
learning_rate = 1e-3
optimizer = torch.optim.Adam(minist_net.parameters(), lr=learning_rate)

# Step counters
total_train_step = 0
total_test_step = 0

# Number of training epochs
epoch = 10

# Add the tensorboard writer
writer = SummaryWriter("./logs_train")

for i in range(epoch):
    print("-------- Training round {} begins --------".format(i+1))

    # Training steps begin
    minist_net.train()
    for data in train_dataloader:
        imgs, targets = data
        imgs = imgs.to(device)
        targets = targets.to(device)
        outputs = minist_net(imgs)
        loss = loss_fn(outputs, targets)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        total_train_step += 1
        if total_train_step % 100 == 0:
            print("Training step: {}, loss: {}".format(total_train_step, loss.item()))
            writer.add_scalar("train_loss", loss.item(), total_train_step)
            writer.flush()

    # Testing steps begin
    minist_net.eval()
    # Test loss and accuracy
    total_test_loss = 0
    total_accuracy = 0
    with torch.no_grad():
        # Same as the training steps, but over the test set
        for data in test_dataloader:
            imgs, targets = data
            imgs = imgs.to(device)
            targets = targets.to(device)
            outputs = minist_net(imgs)
            # Compute the loss
            loss = loss_fn(outputs, targets)
            # Accumulate the total loss
            total_test_loss += loss.item()
            # Count correct predictions
            accuracy = (outputs.argmax(1) == targets).sum()
            total_accuracy += accuracy

    print("Loss over the whole test set: {}".format(total_test_loss))
    print("Accuracy over the whole test set: {}".format(total_accuracy/test_data_size))
    writer.add_scalar("test_loss", total_test_loss, total_train_step)
    writer.add_scalar("test_accuracy", total_accuracy/test_data_size, total_train_step)
    total_test_step += 1

    if i == 5:
        torch.save(minist_net.state_dict(), "model_dict{}.pth".format(i+1))
        print("Model saved")

# Don't forget this!
writer.close()
-------- Training round 1 begins --------
Training step: 100, loss: 1.4312055110931396
Training step: 200, loss: 1.2895567417144775
Training step: 300, loss: 1.1959021091461182
...
Training step: 8900, loss: 0.1826225370168686
Training step: 9000, loss: 0.06963329017162323
Training step: 9100, loss: 0.07875239849090576
Training step: 9200, loss: 0.10090328007936478
Training step: 9300, loss: 0.22011485695838928
Loss over the whole test set: 42.02128033316694
Accuracy over the whole test set: 0.932699978351593

(tensorboard screenshots: training loss and test accuracy curves)
The figures above show the loss and accuracy recorded in tensorboard. From these curves we can judge how the model converges: the loss keeps decreasing and the accuracy keeps rising, so the model is in good shape. Next, let's verify it.

# Test
X_test = X[0]
print(X_test)
print(y[0]) 
tensor([  0.,   0.,   0.,   0.,   0.,   0.,   0.,   0.,   0.,   0.,   
        ...
          0.,   0.,   0.,   0.])
tensor(5)
model = MinistNet()
model.load_state_dict(torch.load("model_dict6.pth",map_location=torch.device("cpu")))
model.eval()


prediction = model(X[0].reshape(1,-1))
print("预测的值为:",prediction)
print("预测类别为:",prediction.argmax(dim=1))
print("真实类别是:",y[0])
Predicted values: tensor([[17.8051,  2.4538, 11.3420, 31.7410,  3.3992, 40.8306,  5.0347, 23.3100,
          0.6284, 26.4911]], grad_fn=<AddmmBackward0>)
Predicted class: tensor([5])
True class: tensor(5)
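As in the TensorFlow section, the raw outputs are logits, not probabilities. If probabilities are desired, apply a softmax (a short sketch reusing the prediction from above):

prediction_p = torch.softmax(prediction, dim=1)   # convert logits to probabilities
print(prediction_p.sum())                         # sums to 1
print(prediction_p.argmax(dim=1))                 # tensor([5]), the same predicted class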

The prediction matches the true label, nice.

Congratulations, you have solved handwritten digit classification on the MNIST dataset with PyTorch!

If you have a better implementation or a clearer, more accurate explanation, feel free to discuss it in the comments. I hope this helps your learning!

