TensorFlow Case 4 -- Face Recognition (Loss Function Selection, Using the VGG16 Model, and an Improved Implementation)

  • 🍨 This article is a study-log blog post from the 🔗 365-Day Deep Learning Training Camp
  • 🍖 Original author: K同学啊

Preface

  • Counting the earlier PyTorch version, this model structure has taken quite a bit of time, yet the results never reached the ideal: the gap between training and validation accuracy is far too large.
  • I wondered whether the model was simply not complex enough, but stacking more layers brings model degradation instead; as a remedy, I think a ResNet model is worth trying, to be covered in a later update.
  • This round of VGG16 changes touches three aspects, explained in detail below.
  • Bookmarks and follows are welcome; I will keep updating.

1. Concepts and API Notes

1. Model Improvements and Overview

The VGG16 Model

VGG16 is a very basic model, built from 13 convolutional layers and 3 fully connected layers, illustrated below:

[Figure: VGG16 architecture diagram]
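As a quick sanity check (not in the original post), the layer counts can be verified programmatically; a minimal sketch, assuming only that TensorFlow is installed:

import tensorflow as tf
from tensorflow.keras import layers

# weights=None skips the weight download; the architecture is the same either way
vgg16 = tf.keras.applications.VGG16(include_top=True, weights=None)

n_conv = sum(isinstance(l, layers.Conv2D) for l in vgg16.layers)
n_dense = sum(isinstance(l, layers.Dense) for l in vgg16.layers)
print(n_conv, n_dense)  # prints: 13 3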

VGG16 Modifications in This Experiment

  • Freeze the first 13 convolutional layers and modify only the fully connected head
  • Add a BatchNormalization layer and a global average pooling layer to the head; the pooling reduces dimensionality, which matters because VGG16 is computationally expensive
  • Add Dropout layers to the fully connected head
  • Modified code:
# Import the official Keras VGG16 model
vgg16_model = tf.keras.applications.VGG16(include_top=False, weights='imagenet', input_shape=(256, 256, 3))

# Freeze the convolutional weights
for layer in vgg16_model.layers:
    layer.trainable = False
    
# Take the convolutional base's output
x = vgg16_model.output

# Add a BatchNormalization layer
x = layers.BatchNormalization()(x)

# Add global average pooling to cut the computation
x = layers.GlobalAveragePooling2D()(x)

# Add fully connected layers with Dropout
x = layers.Dense(1024, activation='relu')(x)
x = layers.Dropout(0.5)(x)
x = layers.Dense(512, activation='relu')(x)
x = layers.Dropout(0.5)(x)

predict = layers.Dense(len(classnames))(x)  # raw logits; pairs with from_logits=True when compiling

# Build the model
model = models.Model(inputs=vgg16_model.input, outputs=predict)

model.summary()

Results

The best result was loss: 0.0505 - accuracy: 0.9847 - val_loss: 3.7758 - val_accuracy: 0.4750. Personally, I think the simplest way to push the accuracy further is to combine this with a ResNet-style network; I'll try that later.

2. API Notes

Loss Functions

Loss functions in detail (choose one to match the label_mode argument used when loading the data):

1. binary_crossentropy (log loss)

The loss paired with a sigmoid output, for binary classification problems.

2. categorical_crossentropy (multi-class log loss)

The loss paired with a softmax output; if the labels are one-hot encoded, use categorical_crossentropy.
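A minimal sketch of the pairing (the tiny model here is a toy stand-in, not the VGG16 used later; the ./data/ path and the 17 classes match the dataset loaded below):

import tensorflow as tf

# label_mode='categorical' yields one-hot labels, which pair with categorical_crossentropy
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    './data/', label_mode='categorical', image_size=(256, 256))

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1. / 255, input_shape=(256, 256, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(17, activation='softmax'),  # 17 classes, softmax output
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# For binary classification the pairing would instead be label_mode='binary',
# a single sigmoid unit, and loss='binary_crossentropy'.
# (label_mode='int' would pair with sparse_categorical_crossentropy.)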

Loading VGG16 in TensorFlow

1. Load VGG16 with the top fully connected layers
from tensorflow.keras.applications import VGG16

# Load VGG16 including the top fully connected layers, with ImageNet pretrained weights
model = VGG16(include_top=True, weights='imagenet', input_shape=(224, 224, 3))

model.summary()
2. Load VGG16 without the top fully connected layers
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model

# Load VGG16 without the top fully connected layers, with ImageNet pretrained weights
base_model = VGG16(include_top=False, weights='imagenet', input_shape=(224, 224, 3))

# Freeze the convolutional base's weights (optional)
for layer in base_model.layers:
    layer.trainable = False

# Take the convolutional base's output
x = base_model.output

# Add new fully connected layers -- this is the step to adapt to your own task
x = Flatten()(x)
x = Dense(256, activation='relu')(x)
predictions = Dense(2, activation='softmax')(x)  # 2 output classes

# Build the new model
model = Model(inputs=base_model.input, outputs=predictions)

model.summary()
3. Use a custom input tensor
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Input

# Define the input tensor
input_tensor = Input(shape=(224, 224, 3))

# Load VGG16 with the custom input tensor
model = VGG16(include_top=True, weights='imagenet', input_tensor=input_tensor)

model.summary()
4. Without pretrained weights
from tensorflow.keras.applications import VGG16

# Load VGG16 without pretrained weights
model = VGG16(include_top=True, weights=None, input_shape=(224, 224, 3))

model.summary()

2. Solving the Face Recognition Task

1. Data Processing

1. Import libraries

import tensorflow as tf 
from tensorflow.keras import datasets, models, layers
import numpy as np 

# List all GPUs
gpus = tf.config.list_physical_devices("GPU")

if gpus:
    gpu0 = gpus[0]   # if there are several GPUs, take the first one
    tf.config.experimental.set_memory_growth(gpu0, True)   # enable on-demand memory growth
    tf.config.set_visible_devices([gpu0], "GPU")  # make only the first GPU visible

gpus  # show the visible GPUs
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

2. Inspect the data directory

In the data folder, each person's images are stored in a separate subfolder, as sketched below.
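A sketch of the expected layout (folder names come from the class list printed below; file names are illustrative):

data/
├── Angelina Jolie/
│   ├── xxx.jpg
│   └── ...
├── Brad Pitt/
│   └── ...
└── Will Smith/
    └── ...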

import os, PIL, pathlib

data_dir = './data/'
data_dir = pathlib.Path(data_dir)

# Get the names of all subfolders under this directory (sorted, to match the
# alphabetical class ordering used by image_dataset_from_directory)
classnames = sorted(os.listdir(data_dir))
classnames
['Angelina Jolie',
 'Brad Pitt',
 'Denzel Washington',
 'Hugh Jackman',
 'Jennifer Lawrence',
 'Johnny Depp',
 'Kate Winslet',
 'Leonardo DiCaprio',
 'Megan Fox',
 'Natalie Portman',
 'Nicole Kidman',
 'Robert Downey Jr',
 'Sandra Bullock',
 'Scarlett Johansson',
 'Tom Cruise',
 'Tom Hanks',
 'Will Smith']

3. Split the data

batch_size = 32

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    './data/',
    batch_size=batch_size,
    shuffle=True,
    validation_split=0.2,   # 20% validation, 80% training
    subset='training',
    seed=42,
    label_mode='categorical',   # one-hot encode the class labels
    image_size=(256, 256)
)

val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    './data/',
    batch_size=batch_size,
    shuffle=True,
    validation_split=0.2,
    seed=42,
    subset='validation',    
    image_size=(256, 256),
    label_mode='categorical'  # one-hot encode the class labels
)
Found 1800 files belonging to 17 classes.
Using 1440 files for training.
Found 1800 files belonging to 17 classes.
Using 360 files for validation.
# Print one batch's shapes
for image, label in train_ds.take(1):
    print(image.shape)
    print(label.shape)
(32, 256, 256, 3)
(32, 17)
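Not in the original post, but a common follow-up at this point is to cache and prefetch both datasets so the GPU is not starved by disk I/O; a minimal sketch:

AUTOTUNE = tf.data.AUTOTUNE

# These datasets are already batched, so shuffle() here reorders batches, not samples
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)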

4. Display sample data

# Display one batch of sample images
import matplotlib.pyplot as plt 

plt.figure(figsize=(20,10))
for images, labels in train_ds.take(1):
    for i in range(20):
        plt.subplot(5, 10, i + 1)
        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(classnames[np.argmax(labels[i], axis=0)])    
        plt.axis('off')
        
plt.show()


[Figure: a grid of 20 sample face images, each titled with its class name]

2. Building the VGG16 Model

VGG16 modifications:

  • Freeze the first 13 convolutional layers and modify only the fully connected head
  • Add a BatchNormalization layer and a global average pooling layer to the head; the pooling reduces dimensionality, which matters because VGG16 is computationally expensive
  • Add Dropout layers to the fully connected head
# Import the official Keras VGG16 model
vgg16_model = tf.keras.applications.VGG16(include_top=False, weights='imagenet', input_shape=(256, 256, 3))

# Freeze the convolutional weights
for layer in vgg16_model.layers:
    layer.trainable = False
    
# Take the convolutional base's output
x = vgg16_model.output

# Add a BatchNormalization layer
x = layers.BatchNormalization()(x)

# Add global average pooling to cut the computation
x = layers.GlobalAveragePooling2D()(x)

# Add fully connected layers with Dropout
x = layers.Dense(1024, activation='relu')(x)
x = layers.Dropout(0.5)(x)
x = layers.Dense(512, activation='relu')(x)
x = layers.Dropout(0.5)(x)

predict = layers.Dense(len(classnames))(x)  # raw logits; pairs with from_logits=True when compiling

# Build the model
model = models.Model(inputs=vgg16_model.input, outputs=predict)

model.summary()
Model: "model"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 input_1 (InputLayer)        [(None, 256, 256, 3)]     0         
                                                                 
 block1_conv1 (Conv2D)       (None, 256, 256, 64)      1792      
                                                                 
 block1_conv2 (Conv2D)       (None, 256, 256, 64)      36928     
                                                                 
 block1_pool (MaxPooling2D)  (None, 128, 128, 64)      0         
                                                                 
 block2_conv1 (Conv2D)       (None, 128, 128, 128)     73856     
                                                                 
 block2_conv2 (Conv2D)       (None, 128, 128, 128)     147584    
                                                                 
 block2_pool (MaxPooling2D)  (None, 64, 64, 128)       0         
                                                                 
 block3_conv1 (Conv2D)       (None, 64, 64, 256)       295168    
                                                                 
 block3_conv2 (Conv2D)       (None, 64, 64, 256)       590080    
                                                                 
 block3_conv3 (Conv2D)       (None, 64, 64, 256)       590080    
                                                                 
 block3_pool (MaxPooling2D)  (None, 32, 32, 256)       0         
                                                                 
 block4_conv1 (Conv2D)       (None, 32, 32, 512)       1180160   
                                                                 
 block4_conv2 (Conv2D)       (None, 32, 32, 512)       2359808   
                                                                 
 block4_conv3 (Conv2D)       (None, 32, 32, 512)       2359808   
                                                                 
 block4_pool (MaxPooling2D)  (None, 16, 16, 512)       0         
                                                                 
 block5_conv1 (Conv2D)       (None, 16, 16, 512)       2359808   
                                                                 
 block5_conv2 (Conv2D)       (None, 16, 16, 512)       2359808   
                                                                 
 block5_conv3 (Conv2D)       (None, 16, 16, 512)       2359808   
                                                                 
 block5_pool (MaxPooling2D)  (None, 8, 8, 512)         0         
                                                                 
 batch_normalization (BatchN  (None, 8, 8, 512)        2048      
 ormalization)                                                   
                                                                 
 global_average_pooling2d (G  (None, 512)              0         
 lobalAveragePooling2D)                                          
                                                                 
 dense (Dense)               (None, 1024)              525312    
                                                                 
 dropout (Dropout)           (None, 1024)              0         
                                                                 
 dense_1 (Dense)             (None, 512)               524800    
                                                                 
 dropout_1 (Dropout)         (None, 512)               0         
                                                                 
 dense_2 (Dense)             (None, 17)                8721      
                                                                 
=================================================================
Total params: 15,775,569
Trainable params: 15,774,545
Non-trainable params: 1,024
_________________________________________________________________

3. Model Training

1. Set hyperparameters

# Initial learning rate
learning_rate = 1e-3

# Set up an exponentially decaying learning rate
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    learning_rate,
    decay_steps=60,   # decay once every 60 steps
    decay_rate=0.96,   # multiply the current rate by 0.96 each time
    staircase=True
)

# Define the optimizer (pass the schedule so the decay actually takes effect)
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
# Compile the model
model.compile(optimizer=optimizer,
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),  # from_logits=True because the last Dense layer outputs raw logits
              metrics=['accuracy'])

2. Train the model

from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping

epochs = 100

# Checkpoint the best model during training
checkpointer = ModelCheckpoint(
    'best_model.h5',
    monitor='val_accuracy',  # metric to monitor
    verbose=1,
    save_best_only=True,
    save_weights_only=True
)

# Set up early stopping
earlystopper = EarlyStopping(
    monitor='val_accuracy',
    verbose=1,  # verbosity mode
    patience=20,  
    min_delta=0.01,  # stop if val_accuracy has not improved by 0.01 within 20 epochs
)

history = model.fit(
    train_ds, 
    validation_data=val_ds,
    epochs=epochs,
    callbacks=[checkpointer, earlystopper]   # register the callbacks
)
Epoch 1/100
2024-11-01 19:31:48.093783: I tensorflow/stream_executor/cuda/cuda_dnn.cc:384] Loaded cuDNN version 8101
2024-11-01 19:31:50.361608: I tensorflow/stream_executor/cuda/cuda_blas.cc:1786] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once.
45/45 [==============================] - ETA: 0s - loss: 2.8548 - accuracy: 0.0826
Epoch 1: val_accuracy improved from -inf to 0.13056, saving model to best_model.h5
45/45 [==============================] - 14s 205ms/step - loss: 2.8548 - accuracy: 0.0826 - val_loss: 8.0561 - val_accuracy: 0.1306
Epoch 2/100
45/45 [==============================] - ETA: 0s - loss: 2.7271 - accuracy: 0.1181
Epoch 2: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 142ms/step - loss: 2.7271 - accuracy: 0.1181 - val_loss: 3.7047 - val_accuracy: 0.0639
Epoch 3/100
45/45 [==============================] - ETA: 0s - loss: 2.6583 - accuracy: 0.1354
Epoch 3: val_accuracy did not improve from 0.13056
45/45 [==============================] - 7s 144ms/step - loss: 2.6583 - accuracy: 0.1354 - val_loss: 8.0687 - val_accuracy: 0.0806
Epoch 4/100
45/45 [==============================] - ETA: 0s - loss: 2.5833 - accuracy: 0.1444
Epoch 4: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 143ms/step - loss: 2.5833 - accuracy: 0.1444 - val_loss: 4.7184 - val_accuracy: 0.1000
Epoch 5/100
45/45 [==============================] - ETA: 0s - loss: 2.5115 - accuracy: 0.1576
Epoch 5: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 142ms/step - loss: 2.5115 - accuracy: 0.1576 - val_loss: 61.5911 - val_accuracy: 0.0639
Epoch 6/100
45/45 [==============================] - ETA: 0s - loss: 2.4402 - accuracy: 0.1674
Epoch 6: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 142ms/step - loss: 2.4402 - accuracy: 0.1674 - val_loss: 4.6790 - val_accuracy: 0.0944
Epoch 7/100
45/45 [==============================] - ETA: 0s - loss: 2.3911 - accuracy: 0.1951
Epoch 7: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 143ms/step - loss: 2.3911 - accuracy: 0.1951 - val_loss: 2.7717 - val_accuracy: 0.1028
Epoch 8/100
45/45 [==============================] - ETA: 0s - loss: 2.3331 - accuracy: 0.1931
Epoch 8: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 144ms/step - loss: 2.3331 - accuracy: 0.1931 - val_loss: 8.2605 - val_accuracy: 0.0639
Epoch 9/100
45/45 [==============================] - ETA: 0s - loss: 2.2922 - accuracy: 0.2021
Epoch 9: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 143ms/step - loss: 2.2922 - accuracy: 0.2021 - val_loss: 51.5976 - val_accuracy: 0.0306
Epoch 10/100
45/45 [==============================] - ETA: 0s - loss: 2.2182 - accuracy: 0.2313
Epoch 10: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 143ms/step - loss: 2.2182 - accuracy: 0.2313 - val_loss: 4.3942 - val_accuracy: 0.0611
Epoch 11/100
45/45 [==============================] - ETA: 0s - loss: 2.2049 - accuracy: 0.2361
Epoch 11: val_accuracy improved from 0.13056 to 0.17778, saving model to best_model.h5
45/45 [==============================] - 7s 145ms/step - loss: 2.2049 - accuracy: 0.2361 - val_loss: 2.4072 - val_accuracy: 0.1778
Epoch 12/100
45/45 [==============================] - ETA: 0s - loss: 2.1242 - accuracy: 0.2576
Epoch 12: val_accuracy improved from 0.17778 to 0.18056, saving model to best_model.h5
45/45 [==============================] - 7s 145ms/step - loss: 2.1242 - accuracy: 0.2576 - val_loss: 2.6218 - val_accuracy: 0.1806
Epoch 13/100
45/45 [==============================] - ETA: 0s - loss: 2.0634 - accuracy: 0.2639
Epoch 13: val_accuracy did not improve from 0.18056
45/45 [==============================] - 6s 142ms/step - loss: 2.0634 - accuracy: 0.2639 - val_loss: 14.2102 - val_accuracy: 0.1556
Epoch 14/100
45/45 [==============================] - ETA: 0s - loss: 2.0379 - accuracy: 0.2861
Epoch 14: val_accuracy did not improve from 0.18056
45/45 [==============================] - 6s 143ms/step - loss: 2.0379 - accuracy: 0.2861 - val_loss: 931.4739 - val_accuracy: 0.1556
Epoch 15/100
45/45 [==============================] - ETA: 0s - loss: 1.9782 - accuracy: 0.3063
Epoch 15: val_accuracy improved from 0.18056 to 0.21667, saving model to best_model.h5
45/45 [==============================] - 7s 144ms/step - loss: 1.9782 - accuracy: 0.3063 - val_loss: 2.3025 - val_accuracy: 0.2167
Epoch 16/100
45/45 [==============================] - ETA: 0s - loss: 1.9299 - accuracy: 0.3306
Epoch 16: val_accuracy did not improve from 0.21667
45/45 [==============================] - 6s 143ms/step - loss: 1.9299 - accuracy: 0.3306 - val_loss: 2.2587 - val_accuracy: 0.2000
Epoch 17/100
45/45 [==============================] - ETA: 0s - loss: 1.8289 - accuracy: 0.3590
Epoch 17: val_accuracy did not improve from 0.21667
45/45 [==============================] - 6s 143ms/step - loss: 1.8289 - accuracy: 0.3590 - val_loss: 2.5047 - val_accuracy: 0.1722
Epoch 18/100
45/45 [==============================] - ETA: 0s - loss: 1.7912 - accuracy: 0.3694
Epoch 18: val_accuracy did not improve from 0.21667
45/45 [==============================] - 6s 142ms/step - loss: 1.7912 - accuracy: 0.3694 - val_loss: 3.1102 - val_accuracy: 0.1722
Epoch 19/100
45/45 [==============================] - ETA: 0s - loss: 1.7762 - accuracy: 0.3764
Epoch 19: val_accuracy did not improve from 0.21667
45/45 [==============================] - 6s 142ms/step - loss: 1.7762 - accuracy: 0.3764 - val_loss: 2.7225 - val_accuracy: 0.2083
Epoch 20/100
45/45 [==============================] - ETA: 0s - loss: 1.7182 - accuracy: 0.3979
Epoch 20: val_accuracy did not improve from 0.21667
45/45 [==============================] - 6s 143ms/step - loss: 1.7182 - accuracy: 0.3979 - val_loss: 3.4486 - val_accuracy: 0.1528
Epoch 21/100
45/45 [==============================] - ETA: 0s - loss: 1.6341 - accuracy: 0.4208
Epoch 21: val_accuracy did not improve from 0.21667
45/45 [==============================] - 6s 143ms/step - loss: 1.6341 - accuracy: 0.4208 - val_loss: 2.7709 - val_accuracy: 0.1806
Epoch 22/100
45/45 [==============================] - ETA: 0s - loss: 1.5667 - accuracy: 0.4486
Epoch 22: val_accuracy did not improve from 0.21667
45/45 [==============================] - 6s 144ms/step - loss: 1.5667 - accuracy: 0.4486 - val_loss: 4.2764 - val_accuracy: 0.1583
Epoch 23/100
45/45 [==============================] - ETA: 0s - loss: 1.4579 - accuracy: 0.4875
Epoch 23: val_accuracy improved from 0.21667 to 0.26111, saving model to best_model.h5
45/45 [==============================] - 7s 147ms/step - loss: 1.4579 - accuracy: 0.4875 - val_loss: 32579.7422 - val_accuracy: 0.2611
Epoch 24/100
45/45 [==============================] - ETA: 0s - loss: 1.4373 - accuracy: 0.4854
Epoch 24: val_accuracy did not improve from 0.26111
45/45 [==============================] - 7s 145ms/step - loss: 1.4373 - accuracy: 0.4854 - val_loss: 8038.8555 - val_accuracy: 0.1972
Epoch 25/100
45/45 [==============================] - ETA: 0s - loss: 1.3630 - accuracy: 0.5139
Epoch 25: val_accuracy did not improve from 0.26111
45/45 [==============================] - 7s 145ms/step - loss: 1.3630 - accuracy: 0.5139 - val_loss: 2.3408 - val_accuracy: 0.2528
Epoch 26/100
45/45 [==============================] - ETA: 0s - loss: 1.3181 - accuracy: 0.5375
Epoch 26: val_accuracy did not improve from 0.26111
45/45 [==============================] - 7s 144ms/step - loss: 1.3181 - accuracy: 0.5375 - val_loss: 2.1877 - val_accuracy: 0.2500
Epoch 27/100
45/45 [==============================] - ETA: 0s - loss: 1.2544 - accuracy: 0.5583
Epoch 27: val_accuracy did not improve from 0.26111
45/45 [==============================] - 7s 144ms/step - loss: 1.2544 - accuracy: 0.5583 - val_loss: 2.6184 - val_accuracy: 0.1861
Epoch 28/100
45/45 [==============================] - ETA: 0s - loss: 1.1877 - accuracy: 0.5813
Epoch 28: val_accuracy did not improve from 0.26111
45/45 [==============================] - 6s 144ms/step - loss: 1.1877 - accuracy: 0.5813 - val_loss: 3.0485 - val_accuracy: 0.2500
Epoch 29/100
45/45 [==============================] - ETA: 0s - loss: 1.0968 - accuracy: 0.6132
Epoch 29: val_accuracy did not improve from 0.26111
45/45 [==============================] - 6s 143ms/step - loss: 1.0968 - accuracy: 0.6132 - val_loss: 61754.2734 - val_accuracy: 0.1917
Epoch 30/100
45/45 [==============================] - ETA: 0s - loss: 1.0537 - accuracy: 0.6424
Epoch 30: val_accuracy improved from 0.26111 to 0.26667, saving model to best_model.h5
45/45 [==============================] - 7s 148ms/step - loss: 1.0537 - accuracy: 0.6424 - val_loss: 2.3469 - val_accuracy: 0.2667
Epoch 31/100
45/45 [==============================] - ETA: 0s - loss: 1.0427 - accuracy: 0.6306
Epoch 31: val_accuracy did not improve from 0.26667
45/45 [==============================] - 6s 143ms/step - loss: 1.0427 - accuracy: 0.6306 - val_loss: 3.4498 - val_accuracy: 0.2250
Epoch 32/100
45/45 [==============================] - ETA: 0s - loss: 1.0697 - accuracy: 0.6403
Epoch 32: val_accuracy improved from 0.26667 to 0.37222, saving model to best_model.h5
45/45 [==============================] - 7s 146ms/step - loss: 1.0697 - accuracy: 0.6403 - val_loss: 2.8960 - val_accuracy: 0.3722
Epoch 33/100
45/45 [==============================] - ETA: 0s - loss: 0.9062 - accuracy: 0.6840
Epoch 33: val_accuracy did not improve from 0.37222
45/45 [==============================] - 6s 143ms/step - loss: 0.9062 - accuracy: 0.6840 - val_loss: 102.1351 - val_accuracy: 0.3028
Epoch 34/100
45/45 [==============================] - ETA: 0s - loss: 0.8220 - accuracy: 0.7118
Epoch 34: val_accuracy did not improve from 0.37222
45/45 [==============================] - 7s 144ms/step - loss: 0.8220 - accuracy: 0.7118 - val_loss: 3.1855 - val_accuracy: 0.2583
Epoch 35/100
45/45 [==============================] - ETA: 0s - loss: 0.7424 - accuracy: 0.7431
Epoch 35: val_accuracy did not improve from 0.37222
45/45 [==============================] - 7s 144ms/step - loss: 0.7424 - accuracy: 0.7431 - val_loss: 34309.0664 - val_accuracy: 0.3028
Epoch 36/100
45/45 [==============================] - ETA: 0s - loss: 0.7257 - accuracy: 0.7535
Epoch 36: val_accuracy did not improve from 0.37222
45/45 [==============================] - 6s 144ms/step - loss: 0.7257 - accuracy: 0.7535 - val_loss: 89.2148 - val_accuracy: 0.2361
Epoch 37/100
45/45 [==============================] - ETA: 0s - loss: 0.6695 - accuracy: 0.7799
Epoch 37: val_accuracy did not improve from 0.37222
45/45 [==============================] - 7s 146ms/step - loss: 0.6695 - accuracy: 0.7799 - val_loss: 3590.8940 - val_accuracy: 0.1889
Epoch 38/100
45/45 [==============================] - ETA: 0s - loss: 0.5841 - accuracy: 0.7917
Epoch 38: val_accuracy did not improve from 0.37222
45/45 [==============================] - 7s 145ms/step - loss: 0.5841 - accuracy: 0.7917 - val_loss: 5.1283 - val_accuracy: 0.2222
Epoch 39/100
45/45 [==============================] - ETA: 0s - loss: 0.5989 - accuracy: 0.7840
Epoch 39: val_accuracy did not improve from 0.37222
45/45 [==============================] - 7s 145ms/step - loss: 0.5989 - accuracy: 0.7840 - val_loss: 3.7647 - val_accuracy: 0.2833
Epoch 40/100
45/45 [==============================] - ETA: 0s - loss: 0.5431 - accuracy: 0.8181
Epoch 40: val_accuracy did not improve from 0.37222
45/45 [==============================] - 7s 144ms/step - loss: 0.5431 - accuracy: 0.8181 - val_loss: 3.9703 - val_accuracy: 0.3028
Epoch 41/100
45/45 [==============================] - ETA: 0s - loss: 0.4810 - accuracy: 0.8333
Epoch 41: val_accuracy improved from 0.37222 to 0.40278, saving model to best_model.h5
45/45 [==============================] - 7s 147ms/step - loss: 0.4810 - accuracy: 0.8333 - val_loss: 2.7934 - val_accuracy: 0.4028
Epoch 42/100
45/45 [==============================] - ETA: 0s - loss: 0.5016 - accuracy: 0.8278
Epoch 42: val_accuracy did not improve from 0.40278
45/45 [==============================] - 7s 145ms/step - loss: 0.5016 - accuracy: 0.8278 - val_loss: 58485.9453 - val_accuracy: 0.2583
Epoch 43/100
45/45 [==============================] - ETA: 0s - loss: 0.4782 - accuracy: 0.8424
Epoch 43: val_accuracy did not improve from 0.40278
45/45 [==============================] - 7s 144ms/step - loss: 0.4782 - accuracy: 0.8424 - val_loss: 3.6065 - val_accuracy: 0.3694
Epoch 44/100
45/45 [==============================] - ETA: 0s - loss: 0.3587 - accuracy: 0.8785
Epoch 44: val_accuracy did not improve from 0.40278
45/45 [==============================] - 7s 145ms/step - loss: 0.3587 - accuracy: 0.8785 - val_loss: 5.5882 - val_accuracy: 0.3806
Epoch 45/100
45/45 [==============================] - ETA: 0s - loss: 0.3143 - accuracy: 0.8889
Epoch 45: val_accuracy did not improve from 0.40278
45/45 [==============================] - 7s 145ms/step - loss: 0.3143 - accuracy: 0.8889 - val_loss: 2.7883 - val_accuracy: 0.3861
Epoch 46/100
45/45 [==============================] - ETA: 0s - loss: 0.3707 - accuracy: 0.8757
Epoch 46: val_accuracy did not improve from 0.40278
45/45 [==============================] - 7s 145ms/step - loss: 0.3707 - accuracy: 0.8757 - val_loss: 3.2097 - val_accuracy: 0.3583
Epoch 47/100
45/45 [==============================] - ETA: 0s - loss: 0.3418 - accuracy: 0.8799
Epoch 47: val_accuracy did not improve from 0.40278
45/45 [==============================] - 6s 144ms/step - loss: 0.3418 - accuracy: 0.8799 - val_loss: 3.1672 - val_accuracy: 0.4028
Epoch 48/100
45/45 [==============================] - ETA: 0s - loss: 0.3202 - accuracy: 0.8931
Epoch 48: val_accuracy did not improve from 0.40278
45/45 [==============================] - 7s 145ms/step - loss: 0.3202 - accuracy: 0.8931 - val_loss: 16.9275 - val_accuracy: 0.3944
Epoch 49/100
45/45 [==============================] - ETA: 0s - loss: 0.2668 - accuracy: 0.9118
Epoch 49: val_accuracy improved from 0.40278 to 0.41944, saving model to best_model.h5
45/45 [==============================] - 7s 147ms/step - loss: 0.2668 - accuracy: 0.9118 - val_loss: 2.8230 - val_accuracy: 0.4194
Epoch 50/100
45/45 [==============================] - ETA: 0s - loss: 0.2676 - accuracy: 0.9021
Epoch 50: val_accuracy did not improve from 0.41944
45/45 [==============================] - 7s 144ms/step - loss: 0.2676 - accuracy: 0.9021 - val_loss: 2671.1196 - val_accuracy: 0.3639
Epoch 51/100
45/45 [==============================] - ETA: 0s - loss: 0.2152 - accuracy: 0.9306
Epoch 51: val_accuracy improved from 0.41944 to 0.45556, saving model to best_model.h5
45/45 [==============================] - 7s 147ms/step - loss: 0.2152 - accuracy: 0.9306 - val_loss: 2.5370 - val_accuracy: 0.4556
Epoch 52/100
45/45 [==============================] - ETA: 0s - loss: 0.1308 - accuracy: 0.9611
Epoch 52: val_accuracy did not improve from 0.45556
45/45 [==============================] - 7s 144ms/step - loss: 0.1308 - accuracy: 0.9611 - val_loss: 2.9426 - val_accuracy: 0.4444
Epoch 53/100
45/45 [==============================] - ETA: 0s - loss: 0.1306 - accuracy: 0.9556
Epoch 53: val_accuracy did not improve from 0.45556
45/45 [==============================] - 7s 145ms/step - loss: 0.1306 - accuracy: 0.9556 - val_loss: 3.2494 - val_accuracy: 0.3917
Epoch 54/100
45/45 [==============================] - ETA: 0s - loss: 0.1515 - accuracy: 0.9500
Epoch 54: val_accuracy did not improve from 0.45556
45/45 [==============================] - 6s 144ms/step - loss: 0.1515 - accuracy: 0.9500 - val_loss: 4461.8813 - val_accuracy: 0.3611
Epoch 55/100
45/45 [==============================] - ETA: 0s - loss: 0.2079 - accuracy: 0.9285
Epoch 55: val_accuracy did not improve from 0.45556
45/45 [==============================] - 6s 144ms/step - loss: 0.2079 - accuracy: 0.9285 - val_loss: 4.7424 - val_accuracy: 0.3917
Epoch 56/100
45/45 [==============================] - ETA: 0s - loss: 0.2407 - accuracy: 0.9076
Epoch 56: val_accuracy did not improve from 0.45556
45/45 [==============================] - 7s 145ms/step - loss: 0.2407 - accuracy: 0.9076 - val_loss: 3.3555 - val_accuracy: 0.3889
Epoch 57/100
45/45 [==============================] - ETA: 0s - loss: 0.1948 - accuracy: 0.9333
Epoch 57: val_accuracy did not improve from 0.45556
45/45 [==============================] - 7s 145ms/step - loss: 0.1948 - accuracy: 0.9333 - val_loss: 3.4168 - val_accuracy: 0.3861
Epoch 58/100
45/45 [==============================] - ETA: 0s - loss: 0.1534 - accuracy: 0.9431
Epoch 58: val_accuracy improved from 0.45556 to 0.47222, saving model to best_model.h5
45/45 [==============================] - 7s 146ms/step - loss: 0.1534 - accuracy: 0.9431 - val_loss: 2.7895 - val_accuracy: 0.4722
Epoch 59/100
45/45 [==============================] - ETA: 0s - loss: 0.1457 - accuracy: 0.9549
Epoch 59: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 144ms/step - loss: 0.1457 - accuracy: 0.9549 - val_loss: 6.3610 - val_accuracy: 0.3444
Epoch 60/100
45/45 [==============================] - ETA: 0s - loss: 0.2078 - accuracy: 0.9306
Epoch 60: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 145ms/step - loss: 0.2078 - accuracy: 0.9306 - val_loss: 3.5834 - val_accuracy: 0.4056
Epoch 61/100
45/45 [==============================] - ETA: 0s - loss: 0.2005 - accuracy: 0.9361
Epoch 61: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 144ms/step - loss: 0.2005 - accuracy: 0.9361 - val_loss: 4.0683 - val_accuracy: 0.3861
Epoch 62/100
45/45 [==============================] - ETA: 0s - loss: 0.1815 - accuracy: 0.9375
Epoch 62: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 145ms/step - loss: 0.1815 - accuracy: 0.9375 - val_loss: 3.1445 - val_accuracy: 0.4611
Epoch 63/100
45/45 [==============================] - ETA: 0s - loss: 0.1027 - accuracy: 0.9722
Epoch 63: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 145ms/step - loss: 0.1027 - accuracy: 0.9722 - val_loss: 3.0654 - val_accuracy: 0.4500
Epoch 64/100
45/45 [==============================] - ETA: 0s - loss: 0.1370 - accuracy: 0.9535
Epoch 64: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 145ms/step - loss: 0.1370 - accuracy: 0.9535 - val_loss: 3.1589 - val_accuracy: 0.4667
Epoch 65/100
45/45 [==============================] - ETA: 0s - loss: 0.1530 - accuracy: 0.9576
Epoch 65: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 145ms/step - loss: 0.1530 - accuracy: 0.9576 - val_loss: 19.4580 - val_accuracy: 0.3722
Epoch 66/100
45/45 [==============================] - ETA: 0s - loss: 0.1092 - accuracy: 0.9625
Epoch 66: val_accuracy did not improve from 0.47222
45/45 [==============================] - 6s 143ms/step - loss: 0.1092 - accuracy: 0.9625 - val_loss: 263474.1250 - val_accuracy: 0.2639
Epoch 67/100
45/45 [==============================] - ETA: 0s - loss: 0.1094 - accuracy: 0.9639
Epoch 67: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 144ms/step - loss: 0.1094 - accuracy: 0.9639 - val_loss: 50495.4219 - val_accuracy: 0.4222
Epoch 68/100
45/45 [==============================] - ETA: 0s - loss: 0.0843 - accuracy: 0.9694
Epoch 68: val_accuracy improved from 0.47222 to 0.47500, saving model to best_model.h5
45/45 [==============================] - 7s 145ms/step - loss: 0.0843 - accuracy: 0.9694 - val_loss: 20.9734 - val_accuracy: 0.4750
Epoch 69/100
45/45 [==============================] - ETA: 0s - loss: 0.1767 - accuracy: 0.9458
Epoch 69: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 145ms/step - loss: 0.1767 - accuracy: 0.9458 - val_loss: 1322.2261 - val_accuracy: 0.3583
Epoch 70/100
45/45 [==============================] - ETA: 0s - loss: 0.1305 - accuracy: 0.9479
Epoch 70: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 144ms/step - loss: 0.1305 - accuracy: 0.9479 - val_loss: 4.3810 - val_accuracy: 0.3889
Epoch 71/100
45/45 [==============================] - ETA: 0s - loss: 0.1202 - accuracy: 0.9569
Epoch 71: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 144ms/step - loss: 0.1202 - accuracy: 0.9569 - val_loss: 144.1233 - val_accuracy: 0.1361
Epoch 72/100
45/45 [==============================] - ETA: 0s - loss: 0.0746 - accuracy: 0.9785
Epoch 72: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 144ms/step - loss: 0.0746 - accuracy: 0.9785 - val_loss: 3.0208 - val_accuracy: 0.4417
Epoch 73/100
45/45 [==============================] - ETA: 0s - loss: 0.1549 - accuracy: 0.9542
Epoch 73: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 145ms/step - loss: 0.1549 - accuracy: 0.9542 - val_loss: 4.0066 - val_accuracy: 0.4333
Epoch 74/100
45/45 [==============================] - ETA: 0s - loss: 0.1743 - accuracy: 0.9444
Epoch 74: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 145ms/step - loss: 0.1743 - accuracy: 0.9444 - val_loss: 373.7328 - val_accuracy: 0.4250
Epoch 75/100
45/45 [==============================] - ETA: 0s - loss: 0.1104 - accuracy: 0.9611
Epoch 75: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 145ms/step - loss: 0.1104 - accuracy: 0.9611 - val_loss: 4.0707 - val_accuracy: 0.4222
Epoch 76/100
45/45 [==============================] - ETA: 0s - loss: 0.1021 - accuracy: 0.9639
Epoch 76: val_accuracy did not improve from 0.47500
45/45 [==============================] - 6s 144ms/step - loss: 0.1021 - accuracy: 0.9639 - val_loss: 4.0057 - val_accuracy: 0.3944
Epoch 77/100
45/45 [==============================] - ETA: 0s - loss: 0.1100 - accuracy: 0.9618
Epoch 77: val_accuracy did not improve from 0.47500
45/45 [==============================] - 6s 143ms/step - loss: 0.1100 - accuracy: 0.9618 - val_loss: 4.1805 - val_accuracy: 0.4389
Epoch 78/100
45/45 [==============================] - ETA: 0s - loss: 0.0505 - accuracy: 0.9847
Epoch 78: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 145ms/step - loss: 0.0505 - accuracy: 0.9847 - val_loss: 3.7758 - val_accuracy: 0.4750
Epoch 78: early stopping

4. Results and Prediction

1. Plot the results

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']

loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(len(loss))

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

[Figure: training/validation accuracy and loss curves]

  • There isn't much I can do about the loss: across several runs there was always at least one epoch where the validation loss exploded, something I never ran into with my earlier PyTorch version. The best result of this run was: loss: 0.0505 - accuracy: 0.9847 - val_loss: 3.7758 - val_accuracy: 0.4750
  • The validation accuracy simply would not climb. I suspect combining this with ResNet would work better; several factors are likely at play, e.g. too few training images and a model that is still computationally heavy. I'll explore a ResNet-based optimization in a later update.

2. Predict

from PIL import Image 

# Load the best model weights
model.load_weights('best_model.h5')

# Load an image
img = Image.open("./data/Brad Pitt/001_c04300ef.jpg")
image = tf.image.resize(img, [256, 256])

img_array = tf.expand_dims(image, 0)  # add a batch dimension

predict = model.predict(img_array)
print("预测结果: ", classnames[np.argmax(predict)])
1/1 [==============================] - 0s 384ms/step
Predicted:  Brad Pitt
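Not in the original post, but a natural follow-up is to score the restored best weights on the full validation set; a minimal sketch:

# Evaluate the restored best weights on the validation set
val_loss, val_acc = model.evaluate(val_ds)
print(f"val_loss={val_loss:.4f}, val_accuracy={val_acc:.4f}")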

10月28日&#xff0c;中央美术学院国际学院与北京曦烽摄影学院联合举办的《重塑》摄影展&#xff0c;在中央美术学院国际学院艺术空间启幕。展览旨在打破传统“时尚摄影”的话语界限&#xff0c;通过镜头展现时尚的更多维度&#xff0c;既关注视觉美感&#xff0c;更深入挖掘时…