Generating handwritten-digit images from the MNIST dataset with a GAN (generative adversarial network)

Table of Contents

  • 1. Project overview
  • 2. Related links
  • 3. Code and results
    • Importing packages
    • Setting hyperparameters
    • Defining the optimizers and the loss function
    • Training iterations
    • Training results

1. Project overview

By training on the MNIST dataset, we obtain a GAN model that can generate images similar to those in MNIST.
A GAN is a generative model whose goal is to learn the distribution of real data and generate new data from it. It consists of two networks: a generator and a discriminator. The generator's task is to produce samples resembling real data from random noise; the discriminator's task is to judge whether a given sample is real or generated. Training can be viewed as an adversarial game: the generator and the discriminator compete and keep improving until, ideally, the samples produced by the generator match the real data distribution and the discriminator can no longer tell real from fake. GANs can produce convincingly realistic images and are widely used in image generation, speech generation, and similar scenarios; for example, the well-known face-swapping application DeepFakes is built on GAN technology.
Components of a generative adversarial network:
   generator network, discriminator network
Objectives:
   Overall objective:
     a generative model that, given existing images, produces new images similar to them
   Training objectives (the corresponding minimax formula is written out below):
    Discriminator
     correctly recognize real images as real
     correctly recognize fake images as fake
    Generator
     produce images that the discriminator judges to be real
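
For reference, this adversarial game is usually written as the minimax objective from the original GAN paper (Goodfellow et al., 2014), which the training code below optimizes in alternating discriminator/generator steps:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]

The discriminator step maximizes this value, while the generator step, as noted later in the code comments, maximizes log D(G(z)) instead of minimizing log(1 - D(G(z))), which gives stronger gradients early in training.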

This project was deployed on Google Colab with Google Drive mounted; for the deployment steps, see the blog post: Colab deployment guide

2. Related links

Reference: GitHub PyTorch example code
All code files (cloud-drive share), extraction code: f3vq

3. Code and results

Importing packages

import os
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
#pip install torchvision
from torchvision import transforms, models, datasets
#https://pytorch.org/docs/stable/torchvision/index.html
from torchvision.utils import save_image
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import imageio
import time
import warnings
import random
import sys
import copy
import json
from PIL import Image

Setting hyperparameters

# Device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Hyper-parameters
latent_size = 64
hidden_size = 256
image_size = 784
num_epochs = 200
batch_size = 100
# latent_size: the size of the latent vector that is fed into the generator network to produce fake images.
# It affects the diversity and quality of the generated images: if latent_size is too small, the generator may fail
# to capture all the variation in the data distribution and the generated images lack diversity; if it is too large,
# the generator may overfit the training data and image quality can degrade.
# hidden_size: the width of the hidden layers in both the generator and the discriminator.
# image_size: the number of pixels per image. Each image is flattened into a 1-D tensor, so image_size equals its length (28 x 28 = 784).
# num_epochs: how many times the whole dataset passes through the networks during training.
# batch_size: the number of images in each mini-batch.
sample_dir = 'samples'

# Create a directory if not exists
# Create a 'samples' directory to store images: the real images from MNIST and the images produced by our generator
if not os.path.exists(sample_dir):
    os.makedirs(sample_dir)
# transforms.ToTensor() converts a PIL.Image or numpy.ndarray into a torch.FloatTensor and scales the pixel values into the range [0, 1].
# transforms.Normalize(mean, std) standardizes the tensor image by subtracting the given mean and dividing by the given std.
# This brings the pixel distribution closer to a standard normal form, which helps the model train and converge.
transform = transforms.Compose([
                transforms.ToTensor(),
                transforms.Normalize(mean=[0.5],   # 1 for greyscale channels
                                     std=[0.5])])
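# (Illustrative sanity check, not part of the original post.)
# With mean=0.5 and std=0.5, ToTensor's [0, 1] range is mapped to [-1, 1],
# which matches the Tanh output range of the generator defined below.
_dummy = Image.fromarray(np.uint8(np.linspace(0, 255, 28 * 28)).reshape(28, 28))  # dummy greyscale image
_x = transform(_dummy)
print(_x.min().item(), _x.max().item())  # approximately -1.0 and 1.0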
# Mount your Google Drive at /content/drive/ in the Colab file system
from google.colab import drive
drive.mount('/content/drive/')
# Set the current working directory
# This is the file path inside Google Drive; 'drive' is the root mounted above and must be included
os.chdir("/content/drive/MyDrive/gan/")
# MNIST dataset
mnist = torchvision.datasets.MNIST(root='./data',
                                   train=True,
                                   transform=transform,
                                   download=True)

# Data loader
data_loader = torch.utils.data.DataLoader(dataset=mnist,
                                          batch_size=batch_size,
                                          shuffle=True)
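# (Illustrative sanity check, not part of the original post.)
# Peek at one batch to confirm the shapes before flattening:
_images, _labels = next(iter(data_loader))
print(_images.shape)                           # torch.Size([100, 1, 28, 28])
print(_images.reshape(batch_size, -1).shape)   # torch.Size([100, 784]) -- matches image_size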
# Define the discriminator
# Discriminator
D = nn.Sequential(
    nn.Linear(image_size, hidden_size),
    # Unlike ReLU, LeakyReLU keeps a small slope for negative inputs, easing the problem of neurons whose parameters stop updating because their activations vanish; the default negative slope α is 0.01 (0.2 is used here).
    nn.LeakyReLU(0.2),
    nn.Linear(hidden_size, hidden_size),
    nn.LeakyReLU(0.2),
    nn.Linear(hidden_size, 1),
    # map the output into the range 0 to 1
    nn.Sigmoid())
# Define the generator
# Generator
G = nn.Sequential(
    nn.Linear(latent_size, hidden_size),
    nn.ReLU(),
    nn.Linear(hidden_size, hidden_size),
    nn.ReLU(),
    nn.Linear(hidden_size, image_size),
    # map the output into the range -1 to 1
    nn.Tanh())
# Device setting
D = D.to(device)
G = G.to(device)

Defining the optimizers and the loss function


# Binary cross entropy loss and optimizer
# nn.BCELoss() is the binary cross-entropy loss, used to compute the cross-entropy in binary classification problems
criterion = nn.BCELoss()
d_optimizer = torch.optim.Adam(D.parameters(), lr=0.0002)
g_optimizer = torch.optim.Adam(G.parameters(), lr=0.0002)
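# (Illustrative numeric check, not part of the original post.)
# With target 1 the BCE loss is -log(p): a confident correct prediction gives a small loss,
# a confident wrong prediction gives a large loss.
_p = torch.tensor([0.9, 0.1])
_y = torch.tensor([1.0, 1.0])
print(criterion(_p, _y))  # roughly (-log(0.9) - log(0.1)) / 2 ≈ 1.204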
# Define a function that maps the pixel values of generated images back to the range 0 to 1
# denorm converts the values of the input tensor from the range [-1, 1] to [0, 1].
def denorm(x):
    out = (x + 1) / 2
    return out.clamp(0, 1)
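# (Illustrative check, not part of the original post.)
# denorm inverts the Normalize(mean=0.5, std=0.5) step above: -1 -> 0, 0 -> 0.5, 1 -> 1.
print(denorm(torch.tensor([-1.0, 0.0, 1.0])))  # tensor([0.0000, 0.5000, 1.0000])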
# reset_grad zeroes the gradients of both the discriminator and the generator before the next backward pass
def reset_grad():
    d_optimizer.zero_grad()
    g_optimizer.zero_grad()
    
# Start training
total_step = len(data_loader)
for epoch in range(num_epochs):
    for i, (images, _) in enumerate(data_loader):
        images = images.reshape(batch_size, -1).to(device)

        # Create the labels which are later used as input for the BCE loss
        real_labels = torch.ones(batch_size, 1).to(device)
        fake_labels = torch.zeros(batch_size, 1).to(device)

        # ================================================================== #
        #                      Train the discriminator                       #
        # ================================================================== #
        # Train the discriminator so that it can correctly classify both real and fake images
        # Compute BCE_Loss using real images where BCE_Loss(x, y): - y * log(D(x)) - (1-y) * log(1 - D(x))
        # Second term of the loss is always zero since real_labels == 1
        outputs = D(images)
        # Compute the loss on real samples
        # The real-label tensor is the y argument of the BCELoss formula and is combined with the prediction x to compute the cross-entropy.
        # Specifically, when y is 1 the loss is -log(x),
        # and when y is 0 the loss is -log(1 - x).
        # This guarantees the loss is smallest when the prediction x agrees with the label y and largest when they disagree.
        d_loss_real = criterion(outputs, real_labels)
        real_score = outputs

        # Compute BCELoss using fake images
        # First term of the loss is always zero since fake_labels == 0
        z = torch.randn(batch_size, latent_size).to(device)
        fake_images = G(z)
        outputs = D(fake_images)
        # Compute the loss on fake samples
        d_loss_fake = criterion(outputs, fake_labels)
        fake_score = outputs

        # Backprop and optimize
        # The total loss is the sum of the two misclassification losses, i.e. we want a discriminator that classifies both real and fake samples correctly
        d_loss = d_loss_real + d_loss_fake
        reset_grad()
        d_loss.backward()
        d_optimizer.step()

        # ================================================================== #
        #                        Train the generator                         #
        # ================================================================== #

        # Compute loss with fake images
        # Sample random noise as the generator's input
        z = torch.randn(batch_size, latent_size).to(device)
        # Generate fake images from the noise
        fake_images = G(z)
        # Let the discriminator score them
        outputs = D(fake_images)

        # We train G to maximize log(D(G(z)) instead of minimizing log(1-D(G(z)))
        # For the reason, see the last paragraph of section 3. https://arxiv.org/pdf/1406.2661.pdf
        # Compute the generator loss: we want the generated images to look like real ones, so y is set to 1 here and only the -y*log(D(G(z))) term matters, i.e. the loss for being judged real
        g_loss = criterion(outputs, real_labels)

        # Backprop and optimize
        reset_grad()
        g_loss.backward()
        g_optimizer.step()

        if (i+1) % 200 == 0:
            print('Epoch [{}/{}], Step [{}/{}], d_loss: {:.4f}, g_loss: {:.4f}, D(x): {:.2f}, D(G(z)): {:.2f}'
                  .format(epoch, num_epochs, i+1, total_step, d_loss.item(), g_loss.item(),
                          real_score.mean().item(), fake_score.mean().item()))

    # Save real images
    if (epoch+1) == 1:
        images = images.reshape(images.size(0), 1, 28, 28)
        save_image(denorm(images), os.path.join(sample_dir, 'real_images.png'))

    # Save sampled images
    # After every epoch, save the fake images the generator currently produces, so we can watch them gradually become more realistic
    fake_images = fake_images.reshape(fake_images.size(0), 1, 28, 28)
    save_image(denorm(fake_images), os.path.join(sample_dir, 'fake_images-{}.png'.format(epoch+1)))

Training iterations

Looking at the log below, over the course of training:
g_loss: the generator loss trends downward (the generated images are increasingly judged to be real by the discriminator)
D(G(z)): the probability the discriminator assigns to generated samples being real trends upward (the generator keeps improving and gradually produces more realistic samples)

Epoch [0/200], Step [200/600], d_loss: 0.0252, g_loss: 5.2554, D(x): 1.00, D(G(z)): 0.02
Epoch [0/200], Step [400/600], d_loss: 0.1340, g_loss: 4.9320, D(x): 0.95, D(G(z)): 0.07
Epoch [0/200], Step [600/600], d_loss: 0.2178, g_loss: 4.7700, D(x): 0.93, D(G(z)): 0.07
Epoch [1/200], Step [200/600], d_loss: 0.3207, g_loss: 2.5174, D(x): 0.88, D(G(z)): 0.07
Epoch [1/200], Step [400/600], d_loss: 1.6857, g_loss: 3.8114, D(x): 0.64, D(G(z)): 0.30
Epoch [1/200], Step [600/600], d_loss: 1.2818, g_loss: 2.4387, D(x): 0.79, D(G(z)): 0.39
Epoch [2/200], Step [200/600], d_loss: 0.3939, g_loss: 3.4207, D(x): 0.91, D(G(z)): 0.19
Epoch [2/200], Step [400/600], d_loss: 0.2278, g_loss: 2.5563, D(x): 0.93, D(G(z)): 0.11
Epoch [2/200], Step [600/600], d_loss: 0.6092, g_loss: 4.0915, D(x): 0.83, D(G(z)): 0.19
Epoch [3/200], Step [200/600], d_loss: 0.2883, g_loss: 3.6595, D(x): 0.87, D(G(z)): 0.07
Epoch [3/200], Step [400/600], d_loss: 0.5897, g_loss: 2.6272, D(x): 0.81, D(G(z)): 0.15
Epoch [3/200], Step [600/600], d_loss: 0.8164, g_loss: 2.3492, D(x): 0.81, D(G(z)): 0.31
Epoch [4/200], Step [200/600], d_loss: 0.7031, g_loss: 2.1948, D(x): 0.84, D(G(z)): 0.25
Epoch [4/200], Step [400/600], d_loss: 0.1647, g_loss: 3.7823, D(x): 0.98, D(G(z)): 0.08
Epoch [4/200], Step [600/600], d_loss: 0.1939, g_loss: 3.5391, D(x): 0.91, D(G(z)): 0.05
Epoch [5/200], Step [200/600], d_loss: 0.1912, g_loss: 3.8443, D(x): 0.92, D(G(z)): 0.05
Epoch [5/200], Step [400/600], d_loss: 0.1662, g_loss: 3.8918, D(x): 0.97, D(G(z)): 0.11
Epoch [5/200], Step [600/600], d_loss: 0.3716, g_loss: 3.8777, D(x): 0.87, D(G(z)): 0.06
Epoch [6/200], Step [200/600], d_loss: 0.5468, g_loss: 3.8969, D(x): 0.85, D(G(z)): 0.10
Epoch [6/200], Step [400/600], d_loss: 0.1739, g_loss: 3.8602, D(x): 0.94, D(G(z)): 0.04
Epoch [6/200], Step [600/600], d_loss: 0.2472, g_loss: 4.8380, D(x): 0.91, D(G(z)): 0.04
Epoch [7/200], Step [200/600], d_loss: 0.2011, g_loss: 4.5990, D(x): 0.92, D(G(z)): 0.06
Epoch [7/200], Step [400/600], d_loss: 0.3489, g_loss: 5.6411, D(x): 0.90, D(G(z)): 0.06
Epoch [7/200], Step [600/600], d_loss: 0.3301, g_loss: 3.0097, D(x): 0.93, D(G(z)): 0.16
Epoch [8/200], Step [200/600], d_loss: 0.3328, g_loss: 3.4204, D(x): 0.92, D(G(z)): 0.13
Epoch [8/200], Step [400/600], d_loss: 0.1620, g_loss: 3.1926, D(x): 0.96, D(G(z)): 0.09
Epoch [8/200], Step [600/600], d_loss: 0.2524, g_loss: 3.4148, D(x): 0.98, D(G(z)): 0.18
Epoch [9/200], Step [200/600], d_loss: 0.1735, g_loss: 3.2235, D(x): 0.95, D(G(z)): 0.06
Epoch [9/200], Step [400/600], d_loss: 0.1905, g_loss: 4.0506, D(x): 0.94, D(G(z)): 0.07
Epoch [9/200], Step [600/600], d_loss: 0.1437, g_loss: 4.8081, D(x): 0.96, D(G(z)): 0.08
Epoch [10/200], Step [200/600], d_loss: 0.0497, g_loss: 5.0490, D(x): 0.98, D(G(z)): 0.02
Epoch [10/200], Step [400/600], d_loss: 0.0652, g_loss: 5.5561, D(x): 0.97, D(G(z)): 0.01
Epoch [10/200], Step [600/600], d_loss: 0.1745, g_loss: 4.9265, D(x): 0.93, D(G(z)): 0.03
Epoch [11/200], Step [200/600], d_loss: 0.1788, g_loss: 4.2305, D(x): 0.95, D(G(z)): 0.07
Epoch [11/200], Step [400/600], d_loss: 0.0884, g_loss: 4.9210, D(x): 0.97, D(G(z)): 0.03
Epoch [11/200], Step [600/600], d_loss: 0.4536, g_loss: 5.0761, D(x): 0.91, D(G(z)): 0.18
Epoch [12/200], Step [200/600], d_loss: 0.3983, g_loss: 5.9821, D(x): 0.93, D(G(z)): 0.13
Epoch [12/200], Step [400/600], d_loss: 0.2873, g_loss: 5.5326, D(x): 0.91, D(G(z)): 0.08
Epoch [12/200], Step [600/600], d_loss: 0.1984, g_loss: 4.5899, D(x): 0.97, D(G(z)): 0.12
Epoch [13/200], Step [200/600], d_loss: 0.1524, g_loss: 4.7841, D(x): 0.94, D(G(z)): 0.03
Epoch [13/200], Step [400/600], d_loss: 0.1075, g_loss: 4.5099, D(x): 0.98, D(G(z)): 0.07
Epoch [13/200], Step [600/600], d_loss: 0.5001, g_loss: 6.8877, D(x): 0.84, D(G(z)): 0.01
Epoch [14/200], Step [200/600], d_loss: 0.0878, g_loss: 4.6591, D(x): 0.98, D(G(z)): 0.06
Epoch [14/200], Step [400/600], d_loss: 0.2258, g_loss: 4.7830, D(x): 0.97, D(G(z)): 0.13
Epoch [14/200], Step [600/600], d_loss: 0.2287, g_loss: 4.9709, D(x): 0.93, D(G(z)): 0.07
Epoch [15/200], Step [200/600], d_loss: 0.2274, g_loss: 5.0505, D(x): 0.94, D(G(z)): 0.07
Epoch [15/200], Step [400/600], d_loss: 0.1501, g_loss: 5.0111, D(x): 0.94, D(G(z)): 0.04
Epoch [15/200], Step [600/600], d_loss: 0.0859, g_loss: 5.2517, D(x): 0.95, D(G(z)): 0.02
Epoch [16/200], Step [200/600], d_loss: 0.1900, g_loss: 4.7658, D(x): 0.95, D(G(z)): 0.04
Epoch [16/200], Step [400/600], d_loss: 0.1400, g_loss: 7.4018, D(x): 0.97, D(G(z)): 0.03
Epoch [16/200], Step [600/600], d_loss: 0.1485, g_loss: 5.5882, D(x): 0.99, D(G(z)): 0.10
Epoch [17/200], Step [200/600], d_loss: 0.2869, g_loss: 4.3017, D(x): 0.90, D(G(z)): 0.02
Epoch [17/200], Step [400/600], d_loss: 0.2603, g_loss: 5.7215, D(x): 0.98, D(G(z)): 0.17
Epoch [17/200], Step [600/600], d_loss: 0.1268, g_loss: 5.8928, D(x): 0.97, D(G(z)): 0.06
Epoch [18/200], Step [200/600], d_loss: 0.0614, g_loss: 6.3626, D(x): 0.97, D(G(z)): 0.02
Epoch [18/200], Step [400/600], d_loss: 0.3950, g_loss: 5.1696, D(x): 0.95, D(G(z)): 0.17
Epoch [18/200], Step [600/600], d_loss: 0.0887, g_loss: 4.8490, D(x): 0.97, D(G(z)): 0.03
Epoch [19/200], Step [200/600], d_loss: 0.1939, g_loss: 3.7379, D(x): 0.95, D(G(z)): 0.09
Epoch [19/200], Step [400/600], d_loss: 0.3316, g_loss: 5.7292, D(x): 0.88, D(G(z)): 0.01
Epoch [19/200], Step [600/600], d_loss: 0.1429, g_loss: 4.8458, D(x): 0.96, D(G(z)): 0.05
Epoch [20/200], Step [200/600], d_loss: 0.1976, g_loss: 7.0141, D(x): 0.93, D(G(z)): 0.04
Epoch [20/200], Step [400/600], d_loss: 0.1226, g_loss: 5.1962, D(x): 0.95, D(G(z)): 0.02
Epoch [20/200], Step [600/600], d_loss: 0.2435, g_loss: 7.1254, D(x): 0.92, D(G(z)): 0.01
Epoch [21/200], Step [200/600], d_loss: 0.3504, g_loss: 4.0122, D(x): 0.90, D(G(z)): 0.09
Epoch [21/200], Step [400/600], d_loss: 0.3238, g_loss: 5.0485, D(x): 0.94, D(G(z)): 0.14
Epoch [21/200], Step [600/600], d_loss: 0.2065, g_loss: 3.8730, D(x): 0.94, D(G(z)): 0.06
Epoch [22/200], Step [200/600], d_loss: 0.3684, g_loss: 4.4913, D(x): 0.87, D(G(z)): 0.02
Epoch [22/200], Step [400/600], d_loss: 0.1684, g_loss: 4.6138, D(x): 0.93, D(G(z)): 0.05
Epoch [22/200], Step [600/600], d_loss: 0.2458, g_loss: 5.2389, D(x): 0.93, D(G(z)): 0.08
Epoch [23/200], Step [200/600], d_loss: 0.3259, g_loss: 4.3412, D(x): 0.88, D(G(z)): 0.07
Epoch [23/200], Step [400/600], d_loss: 0.2862, g_loss: 4.0427, D(x): 0.94, D(G(z)): 0.10
Epoch [23/200], Step [600/600], d_loss: 0.4527, g_loss: 3.7884, D(x): 0.92, D(G(z)): 0.18
Epoch [24/200], Step [200/600], d_loss: 0.1750, g_loss: 4.4059, D(x): 0.93, D(G(z)): 0.04
Epoch [24/200], Step [400/600], d_loss: 0.1966, g_loss: 4.7848, D(x): 0.96, D(G(z)): 0.10
Epoch [24/200], Step [600/600], d_loss: 0.2126, g_loss: 3.6014, D(x): 0.96, D(G(z)): 0.11
Epoch [25/200], Step [200/600], d_loss: 0.4429, g_loss: 3.7987, D(x): 0.88, D(G(z)): 0.07
Epoch [25/200], Step [400/600], d_loss: 0.3222, g_loss: 4.3068, D(x): 0.94, D(G(z)): 0.16
Epoch [25/200], Step [600/600], d_loss: 0.2818, g_loss: 4.9270, D(x): 0.94, D(G(z)): 0.07
Epoch [26/200], Step [200/600], d_loss: 0.2719, g_loss: 5.2220, D(x): 0.95, D(G(z)): 0.14
Epoch [26/200], Step [400/600], d_loss: 0.4343, g_loss: 4.7186, D(x): 0.85, D(G(z)): 0.05
Epoch [26/200], Step [600/600], d_loss: 0.4143, g_loss: 4.4156, D(x): 0.86, D(G(z)): 0.03
Epoch [27/200], Step [200/600], d_loss: 0.3997, g_loss: 3.5575, D(x): 0.92, D(G(z)): 0.13
Epoch [27/200], Step [400/600], d_loss: 0.3701, g_loss: 4.4439, D(x): 0.87, D(G(z)): 0.02
Epoch [27/200], Step [600/600], d_loss: 0.0873, g_loss: 4.3754, D(x): 0.98, D(G(z)): 0.05
Epoch [28/200], Step [200/600], d_loss: 0.4607, g_loss: 3.8893, D(x): 0.95, D(G(z)): 0.19
Epoch [28/200], Step [400/600], d_loss: 0.3372, g_loss: 5.0975, D(x): 0.97, D(G(z)): 0.16
Epoch [28/200], Step [600/600], d_loss: 0.2284, g_loss: 5.4233, D(x): 0.92, D(G(z)): 0.06
Epoch [29/200], Step [200/600], d_loss: 0.3817, g_loss: 4.3107, D(x): 0.93, D(G(z)): 0.12
Epoch [29/200], Step [400/600], d_loss: 0.3336, g_loss: 3.7298, D(x): 0.97, D(G(z)): 0.20
Epoch [29/200], Step [600/600], d_loss: 0.2664, g_loss: 4.5941, D(x): 0.92, D(G(z)): 0.08
Epoch [30/200], Step [200/600], d_loss: 0.4520, g_loss: 3.2173, D(x): 0.87, D(G(z)): 0.16
Epoch [30/200], Step [400/600], d_loss: 0.4444, g_loss: 2.4830, D(x): 0.91, D(G(z)): 0.21
Epoch [30/200], Step [600/600], d_loss: 0.2679, g_loss: 4.0562, D(x): 0.92, D(G(z)): 0.08
Epoch [31/200], Step [200/600], d_loss: 0.4414, g_loss: 4.9419, D(x): 0.93, D(G(z)): 0.20
Epoch [31/200], Step [400/600], d_loss: 0.4257, g_loss: 4.2394, D(x): 0.90, D(G(z)): 0.16
Epoch [31/200], Step [600/600], d_loss: 0.1577, g_loss: 4.1475, D(x): 0.93, D(G(z)): 0.05
Epoch [32/200], Step [200/600], d_loss: 0.6468, g_loss: 3.1813, D(x): 0.79, D(G(z)): 0.09
Epoch [32/200], Step [400/600], d_loss: 0.7241, g_loss: 4.1790, D(x): 0.78, D(G(z)): 0.12
Epoch [32/200], Step [600/600], d_loss: 0.4787, g_loss: 3.5841, D(x): 0.87, D(G(z)): 0.14
Epoch [33/200], Step [200/600], d_loss: 0.2906, g_loss: 3.6536, D(x): 0.94, D(G(z)): 0.15
Epoch [33/200], Step [400/600], d_loss: 0.4640, g_loss: 3.3191, D(x): 0.93, D(G(z)): 0.18
Epoch [33/200], Step [600/600], d_loss: 0.6650, g_loss: 3.3779, D(x): 0.83, D(G(z)): 0.20
Epoch [34/200], Step [200/600], d_loss: 0.3749, g_loss: 3.5957, D(x): 0.90, D(G(z)): 0.14
Epoch [34/200], Step [400/600], d_loss: 0.3069, g_loss: 4.4897, D(x): 0.92, D(G(z)): 0.13
Epoch [34/200], Step [600/600], d_loss: 0.4221, g_loss: 3.1977, D(x): 0.86, D(G(z)): 0.11
Epoch [35/200], Step [200/600], d_loss: 0.5369, g_loss: 2.4619, D(x): 0.80, D(G(z)): 0.11
Epoch [35/200], Step [400/600], d_loss: 0.5826, g_loss: 2.3311, D(x): 0.94, D(G(z)): 0.29
Epoch [35/200], Step [600/600], d_loss: 0.4576, g_loss: 3.8068, D(x): 0.83, D(G(z)): 0.06
Epoch [36/200], Step [200/600], d_loss: 0.2154, g_loss: 3.4432, D(x): 0.93, D(G(z)): 0.08
Epoch [36/200], Step [400/600], d_loss: 0.4272, g_loss: 3.7969, D(x): 0.91, D(G(z)): 0.17
Epoch [36/200], Step [600/600], d_loss: 0.4357, g_loss: 4.6301, D(x): 0.86, D(G(z)): 0.06
Epoch [37/200], Step [200/600], d_loss: 0.3840, g_loss: 3.5209, D(x): 0.86, D(G(z)): 0.09
Epoch [37/200], Step [400/600], d_loss: 0.2520, g_loss: 4.5026, D(x): 0.93, D(G(z)): 0.09
Epoch [37/200], Step [600/600], d_loss: 0.2975, g_loss: 4.3481, D(x): 0.96, D(G(z)): 0.12
Epoch [38/200], Step [200/600], d_loss: 0.2116, g_loss: 4.2693, D(x): 0.96, D(G(z)): 0.12
Epoch [38/200], Step [400/600], d_loss: 0.2980, g_loss: 5.6267, D(x): 0.91, D(G(z)): 0.08
Epoch [38/200], Step [600/600], d_loss: 0.3867, g_loss: 3.2204, D(x): 0.85, D(G(z)): 0.09
Epoch [39/200], Step [200/600], d_loss: 0.5077, g_loss: 2.9204, D(x): 0.94, D(G(z)): 0.25
Epoch [39/200], Step [400/600], d_loss: 0.5755, g_loss: 3.8340, D(x): 0.82, D(G(z)): 0.07
Epoch [39/200], Step [600/600], d_loss: 0.3072, g_loss: 4.0534, D(x): 0.88, D(G(z)): 0.09
Epoch [40/200], Step [200/600], d_loss: 0.2913, g_loss: 3.2628, D(x): 0.94, D(G(z)): 0.11
Epoch [40/200], Step [400/600], d_loss: 0.5462, g_loss: 3.0916, D(x): 0.82, D(G(z)): 0.12
Epoch [40/200], Step [600/600], d_loss: 0.6271, g_loss: 3.8926, D(x): 0.80, D(G(z)): 0.12
Epoch [41/200], Step [200/600], d_loss: 0.4911, g_loss: 3.1461, D(x): 0.90, D(G(z)): 0.19
Epoch [41/200], Step [400/600], d_loss: 0.4260, g_loss: 3.4678, D(x): 0.87, D(G(z)): 0.11
Epoch [41/200], Step [600/600], d_loss: 0.6902, g_loss: 2.5126, D(x): 0.76, D(G(z)): 0.15
Epoch [42/200], Step [200/600], d_loss: 0.4464, g_loss: 3.1702, D(x): 0.83, D(G(z)): 0.12
Epoch [42/200], Step [400/600], d_loss: 0.3756, g_loss: 2.6393, D(x): 0.89, D(G(z)): 0.12
Epoch [42/200], Step [600/600], d_loss: 0.6360, g_loss: 3.0586, D(x): 0.79, D(G(z)): 0.18
Epoch [43/200], Step [200/600], d_loss: 0.4233, g_loss: 3.2700, D(x): 0.86, D(G(z)): 0.12
Epoch [43/200], Step [400/600], d_loss: 0.6836, g_loss: 3.4518, D(x): 0.80, D(G(z)): 0.21
Epoch [43/200], Step [600/600], d_loss: 0.7702, g_loss: 2.3113, D(x): 0.76, D(G(z)): 0.19
Epoch [44/200], Step [200/600], d_loss: 0.3849, g_loss: 2.8337, D(x): 0.87, D(G(z)): 0.11
Epoch [44/200], Step [400/600], d_loss: 0.5281, g_loss: 2.4929, D(x): 0.87, D(G(z)): 0.21
Epoch [44/200], Step [600/600], d_loss: 0.7222, g_loss: 2.9932, D(x): 0.85, D(G(z)): 0.27
Epoch [45/200], Step [200/600], d_loss: 0.5053, g_loss: 3.4436, D(x): 0.85, D(G(z)): 0.15
Epoch [45/200], Step [400/600], d_loss: 0.5699, g_loss: 3.0064, D(x): 0.80, D(G(z)): 0.15
Epoch [45/200], Step [600/600], d_loss: 0.5866, g_loss: 2.7113, D(x): 0.83, D(G(z)): 0.19
Epoch [46/200], Step [200/600], d_loss: 0.5891, g_loss: 2.5745, D(x): 0.81, D(G(z)): 0.14
Epoch [46/200], Step [400/600], d_loss: 0.4424, g_loss: 2.5302, D(x): 0.82, D(G(z)): 0.10
Epoch [46/200], Step [600/600], d_loss: 0.6589, g_loss: 2.2096, D(x): 0.76, D(G(z)): 0.16
Epoch [47/200], Step [200/600], d_loss: 0.5520, g_loss: 2.6966, D(x): 0.85, D(G(z)): 0.21
Epoch [47/200], Step [400/600], d_loss: 0.7059, g_loss: 2.7294, D(x): 0.81, D(G(z)): 0.22
Epoch [47/200], Step [600/600], d_loss: 0.2912, g_loss: 3.7787, D(x): 0.88, D(G(z)): 0.08
Epoch [48/200], Step [200/600], d_loss: 0.4149, g_loss: 2.3708, D(x): 0.90, D(G(z)): 0.19
Epoch [48/200], Step [400/600], d_loss: 0.4266, g_loss: 3.3905, D(x): 0.87, D(G(z)): 0.13
Epoch [48/200], Step [600/600], d_loss: 0.3298, g_loss: 3.2459, D(x): 0.90, D(G(z)): 0.11
Epoch [49/200], Step [200/600], d_loss: 0.3318, g_loss: 3.5093, D(x): 0.92, D(G(z)): 0.14
Epoch [49/200], Step [400/600], d_loss: 0.6507, g_loss: 3.4914, D(x): 0.78, D(G(z)): 0.11
Epoch [49/200], Step [600/600], d_loss: 0.6534, g_loss: 2.7849, D(x): 0.76, D(G(z)): 0.08
Epoch [50/200], Step [200/600], d_loss: 0.6406, g_loss: 3.6847, D(x): 0.82, D(G(z)): 0.20
Epoch [50/200], Step [400/600], d_loss: 0.6941, g_loss: 2.9424, D(x): 0.80, D(G(z)): 0.21
Epoch [50/200], Step [600/600], d_loss: 0.4733, g_loss: 3.2116, D(x): 0.86, D(G(z)): 0.15
Epoch [51/200], Step [200/600], d_loss: 0.3287, g_loss: 3.6701, D(x): 0.91, D(G(z)): 0.16
Epoch [51/200], Step [400/600], d_loss: 0.5537, g_loss: 1.9568, D(x): 0.79, D(G(z)): 0.11
Epoch [51/200], Step [600/600], d_loss: 0.6470, g_loss: 2.4228, D(x): 0.80, D(G(z)): 0.22
Epoch [52/200], Step [200/600], d_loss: 0.8183, g_loss: 3.1269, D(x): 0.73, D(G(z)): 0.13
Epoch [52/200], Step [400/600], d_loss: 0.4611, g_loss: 2.3535, D(x): 0.88, D(G(z)): 0.20
Epoch [52/200], Step [600/600], d_loss: 0.5158, g_loss: 2.3248, D(x): 0.85, D(G(z)): 0.20
Epoch [53/200], Step [200/600], d_loss: 0.5410, g_loss: 2.3491, D(x): 0.82, D(G(z)): 0.17
Epoch [53/200], Step [400/600], d_loss: 0.6135, g_loss: 2.0376, D(x): 0.79, D(G(z)): 0.19
Epoch [53/200], Step [600/600], d_loss: 0.6995, g_loss: 3.0768, D(x): 0.76, D(G(z)): 0.15
Epoch [54/200], Step [200/600], d_loss: 0.7788, g_loss: 2.3476, D(x): 0.84, D(G(z)): 0.32
Epoch [54/200], Step [400/600], d_loss: 0.6057, g_loss: 2.6538, D(x): 0.78, D(G(z)): 0.16
Epoch [54/200], Step [600/600], d_loss: 0.6524, g_loss: 1.9913, D(x): 0.83, D(G(z)): 0.26
Epoch [55/200], Step [200/600], d_loss: 0.4699, g_loss: 3.0903, D(x): 0.88, D(G(z)): 0.20
Epoch [55/200], Step [400/600], d_loss: 0.6438, g_loss: 2.4830, D(x): 0.77, D(G(z)): 0.18
Epoch [55/200], Step [600/600], d_loss: 0.8641, g_loss: 1.5713, D(x): 0.80, D(G(z)): 0.32
Epoch [56/200], Step [200/600], d_loss: 0.5685, g_loss: 1.8569, D(x): 0.78, D(G(z)): 0.17
Epoch [56/200], Step [400/600], d_loss: 0.4462, g_loss: 2.5452, D(x): 0.82, D(G(z)): 0.13
Epoch [56/200], Step [600/600], d_loss: 0.5907, g_loss: 2.4652, D(x): 0.75, D(G(z)): 0.12
Epoch [57/200], Step [200/600], d_loss: 0.4843, g_loss: 3.0264, D(x): 0.83, D(G(z)): 0.16
Epoch [57/200], Step [400/600], d_loss: 0.5594, g_loss: 2.5355, D(x): 0.82, D(G(z)): 0.20
Epoch [57/200], Step [600/600], d_loss: 0.5502, g_loss: 2.2562, D(x): 0.80, D(G(z)): 0.15
Epoch [58/200], Step [200/600], d_loss: 0.6652, g_loss: 2.0351, D(x): 0.82, D(G(z)): 0.24
Epoch [58/200], Step [400/600], d_loss: 0.4033, g_loss: 2.9153, D(x): 0.90, D(G(z)): 0.20
Epoch [58/200], Step [600/600], d_loss: 0.6348, g_loss: 2.1119, D(x): 0.88, D(G(z)): 0.30
Epoch [59/200], Step [200/600], d_loss: 0.6415, g_loss: 3.1282, D(x): 0.81, D(G(z)): 0.20
Epoch [59/200], Step [400/600], d_loss: 0.5136, g_loss: 3.3376, D(x): 0.82, D(G(z)): 0.17
Epoch [59/200], Step [600/600], d_loss: 0.6462, g_loss: 2.3404, D(x): 0.76, D(G(z)): 0.16
Epoch [60/200], Step [200/600], d_loss: 0.5160, g_loss: 2.9145, D(x): 0.80, D(G(z)): 0.14
Epoch [60/200], Step [400/600], d_loss: 0.7120, g_loss: 2.6986, D(x): 0.79, D(G(z)): 0.27
Epoch [60/200], Step [600/600], d_loss: 0.4580, g_loss: 2.8799, D(x): 0.88, D(G(z)): 0.17
Epoch [61/200], Step [200/600], d_loss: 0.5593, g_loss: 2.7334, D(x): 0.77, D(G(z)): 0.13
Epoch [61/200], Step [400/600], d_loss: 0.7277, g_loss: 2.6792, D(x): 0.90, D(G(z)): 0.30
Epoch [61/200], Step [600/600], d_loss: 0.7283, g_loss: 1.6875, D(x): 0.68, D(G(z)): 0.11
Epoch [62/200], Step [200/600], d_loss: 0.3957, g_loss: 3.4459, D(x): 0.89, D(G(z)): 0.18
Epoch [62/200], Step [400/600], d_loss: 0.5846, g_loss: 1.6489, D(x): 0.81, D(G(z)): 0.17
Epoch [62/200], Step [600/600], d_loss: 0.7358, g_loss: 2.3752, D(x): 0.82, D(G(z)): 0.28
Epoch [63/200], Step [200/600], d_loss: 0.5577, g_loss: 2.9186, D(x): 0.87, D(G(z)): 0.21
Epoch [63/200], Step [400/600], d_loss: 0.6466, g_loss: 2.4374, D(x): 0.85, D(G(z)): 0.27
Epoch [63/200], Step [600/600], d_loss: 0.6891, g_loss: 2.2632, D(x): 0.84, D(G(z)): 0.30
Epoch [64/200], Step [200/600], d_loss: 0.8519, g_loss: 2.3878, D(x): 0.65, D(G(z)): 0.12
Epoch [64/200], Step [400/600], d_loss: 0.6176, g_loss: 2.4059, D(x): 0.81, D(G(z)): 0.20
Epoch [64/200], Step [600/600], d_loss: 0.8032, g_loss: 3.1006, D(x): 0.68, D(G(z)): 0.12
Epoch [65/200], Step [200/600], d_loss: 0.5564, g_loss: 2.9973, D(x): 0.77, D(G(z)): 0.14
Epoch [65/200], Step [400/600], d_loss: 0.7254, g_loss: 2.0362, D(x): 0.81, D(G(z)): 0.24
Epoch [65/200], Step [600/600], d_loss: 0.5845, g_loss: 2.7048, D(x): 0.75, D(G(z)): 0.11
Epoch [66/200], Step [200/600], d_loss: 0.6310, g_loss: 2.9230, D(x): 0.79, D(G(z)): 0.20
Epoch [66/200], Step [400/600], d_loss: 0.5095, g_loss: 2.3467, D(x): 0.77, D(G(z)): 0.13
Epoch [66/200], Step [600/600], d_loss: 0.5869, g_loss: 2.3648, D(x): 0.74, D(G(z)): 0.13
Epoch [67/200], Step [200/600], d_loss: 0.7529, g_loss: 2.2525, D(x): 0.72, D(G(z)): 0.15
Epoch [67/200], Step [400/600], d_loss: 0.6669, g_loss: 2.1734, D(x): 0.76, D(G(z)): 0.19
Epoch [67/200], Step [600/600], d_loss: 0.6977, g_loss: 2.0799, D(x): 0.77, D(G(z)): 0.20
Epoch [68/200], Step [200/600], d_loss: 0.5344, g_loss: 2.2810, D(x): 0.85, D(G(z)): 0.21
Epoch [68/200], Step [400/600], d_loss: 0.7502, g_loss: 2.7689, D(x): 0.73, D(G(z)): 0.21
Epoch [68/200], Step [600/600], d_loss: 0.8459, g_loss: 2.0199, D(x): 0.84, D(G(z)): 0.36
Epoch [69/200], Step [200/600], d_loss: 0.5745, g_loss: 2.1853, D(x): 0.78, D(G(z)): 0.18
Epoch [69/200], Step [400/600], d_loss: 0.7515, g_loss: 2.3225, D(x): 0.77, D(G(z)): 0.25
Epoch [69/200], Step [600/600], d_loss: 0.7967, g_loss: 1.8125, D(x): 0.74, D(G(z)): 0.22
Epoch [70/200], Step [200/600], d_loss: 0.7456, g_loss: 2.2358, D(x): 0.81, D(G(z)): 0.27
Epoch [70/200], Step [400/600], d_loss: 0.6154, g_loss: 2.4855, D(x): 0.81, D(G(z)): 0.24
Epoch [70/200], Step [600/600], d_loss: 0.5187, g_loss: 2.1255, D(x): 0.85, D(G(z)): 0.22
Epoch [71/200], Step [200/600], d_loss: 0.5937, g_loss: 3.1329, D(x): 0.80, D(G(z)): 0.19
Epoch [71/200], Step [400/600], d_loss: 0.5692, g_loss: 2.5442, D(x): 0.79, D(G(z)): 0.19
Epoch [71/200], Step [600/600], d_loss: 0.4669, g_loss: 2.8289, D(x): 0.84, D(G(z)): 0.18
Epoch [72/200], Step [200/600], d_loss: 0.5994, g_loss: 2.6761, D(x): 0.82, D(G(z)): 0.20
Epoch [72/200], Step [400/600], d_loss: 0.4832, g_loss: 2.6731, D(x): 0.83, D(G(z)): 0.17
Epoch [72/200], Step [600/600], d_loss: 0.5769, g_loss: 2.7867, D(x): 0.78, D(G(z)): 0.17
Epoch [73/200], Step [200/600], d_loss: 0.6073, g_loss: 2.0403, D(x): 0.79, D(G(z)): 0.17
Epoch [73/200], Step [400/600], d_loss: 0.7357, g_loss: 2.4262, D(x): 0.74, D(G(z)): 0.20
Epoch [73/200], Step [600/600], d_loss: 0.5897, g_loss: 2.4739, D(x): 0.82, D(G(z)): 0.21
Epoch [74/200], Step [200/600], d_loss: 0.6338, g_loss: 2.2802, D(x): 0.80, D(G(z)): 0.23
Epoch [74/200], Step [400/600], d_loss: 0.5724, g_loss: 2.7530, D(x): 0.84, D(G(z)): 0.23
Epoch [74/200], Step [600/600], d_loss: 0.6505, g_loss: 2.5929, D(x): 0.78, D(G(z)): 0.18
Epoch [75/200], Step [200/600], d_loss: 0.6580, g_loss: 2.5763, D(x): 0.85, D(G(z)): 0.28
Epoch [75/200], Step [400/600], d_loss: 0.6089, g_loss: 2.0517, D(x): 0.81, D(G(z)): 0.22
Epoch [75/200], Step [600/600], d_loss: 0.6480, g_loss: 2.3134, D(x): 0.83, D(G(z)): 0.27
Epoch [76/200], Step [200/600], d_loss: 0.8040, g_loss: 2.1533, D(x): 0.77, D(G(z)): 0.28
Epoch [76/200], Step [400/600], d_loss: 0.7031, g_loss: 2.7626, D(x): 0.79, D(G(z)): 0.24
Epoch [76/200], Step [600/600], d_loss: 0.7798, g_loss: 1.9259, D(x): 0.77, D(G(z)): 0.26
Epoch [77/200], Step [200/600], d_loss: 0.6174, g_loss: 2.2541, D(x): 0.76, D(G(z)): 0.16
Epoch [77/200], Step [400/600], d_loss: 0.7185, g_loss: 1.5864, D(x): 0.78, D(G(z)): 0.22
Epoch [77/200], Step [600/600], d_loss: 0.6941, g_loss: 2.3483, D(x): 0.82, D(G(z)): 0.27
Epoch [78/200], Step [200/600], d_loss: 0.8584, g_loss: 2.3806, D(x): 0.72, D(G(z)): 0.22
Epoch [78/200], Step [400/600], d_loss: 0.6060, g_loss: 1.8562, D(x): 0.83, D(G(z)): 0.24
Epoch [78/200], Step [600/600], d_loss: 0.7914, g_loss: 2.5783, D(x): 0.82, D(G(z)): 0.32
Epoch [79/200], Step [200/600], d_loss: 0.7219, g_loss: 2.3257, D(x): 0.73, D(G(z)): 0.20
Epoch [79/200], Step [400/600], d_loss: 0.7538, g_loss: 2.2944, D(x): 0.78, D(G(z)): 0.27
Epoch [79/200], Step [600/600], d_loss: 0.6531, g_loss: 2.0533, D(x): 0.80, D(G(z)): 0.24
Epoch [80/200], Step [200/600], d_loss: 0.9207, g_loss: 1.8896, D(x): 0.64, D(G(z)): 0.16
Epoch [80/200], Step [400/600], d_loss: 0.7419, g_loss: 2.2362, D(x): 0.69, D(G(z)): 0.17
Epoch [80/200], Step [600/600], d_loss: 0.5812, g_loss: 2.3372, D(x): 0.76, D(G(z)): 0.14
Epoch [81/200], Step [200/600], d_loss: 0.5252, g_loss: 2.2365, D(x): 0.78, D(G(z)): 0.17
Epoch [81/200], Step [400/600], d_loss: 0.7609, g_loss: 2.1495, D(x): 0.75, D(G(z)): 0.26
Epoch [81/200], Step [600/600], d_loss: 0.7870, g_loss: 2.3520, D(x): 0.75, D(G(z)): 0.26
Epoch [82/200], Step [200/600], d_loss: 0.7311, g_loss: 2.2137, D(x): 0.78, D(G(z)): 0.27
Epoch [82/200], Step [400/600], d_loss: 0.6972, g_loss: 1.7540, D(x): 0.77, D(G(z)): 0.26
Epoch [82/200], Step [600/600], d_loss: 0.8349, g_loss: 1.7994, D(x): 0.84, D(G(z)): 0.34
Epoch [83/200], Step [200/600], d_loss: 0.8138, g_loss: 2.1877, D(x): 0.80, D(G(z)): 0.32
Epoch [83/200], Step [400/600], d_loss: 0.7913, g_loss: 2.0897, D(x): 0.72, D(G(z)): 0.23
Epoch [83/200], Step [600/600], d_loss: 0.9098, g_loss: 1.5447, D(x): 0.66, D(G(z)): 0.23
Epoch [84/200], Step [200/600], d_loss: 0.8892, g_loss: 1.7672, D(x): 0.73, D(G(z)): 0.30
Epoch [84/200], Step [400/600], d_loss: 0.5531, g_loss: 2.2540, D(x): 0.80, D(G(z)): 0.21
Epoch [84/200], Step [600/600], d_loss: 0.8780, g_loss: 2.0353, D(x): 0.79, D(G(z)): 0.35
Epoch [85/200], Step [200/600], d_loss: 0.6664, g_loss: 2.4569, D(x): 0.83, D(G(z)): 0.25
Epoch [85/200], Step [400/600], d_loss: 0.9369, g_loss: 1.9261, D(x): 0.72, D(G(z)): 0.30
Epoch [85/200], Step [600/600], d_loss: 0.8626, g_loss: 1.3774, D(x): 0.74, D(G(z)): 0.30
Epoch [86/200], Step [200/600], d_loss: 0.8138, g_loss: 2.2834, D(x): 0.72, D(G(z)): 0.24
Epoch [86/200], Step [400/600], d_loss: 0.9225, g_loss: 2.0189, D(x): 0.70, D(G(z)): 0.29
Epoch [86/200], Step [600/600], d_loss: 0.7091, g_loss: 2.5431, D(x): 0.75, D(G(z)): 0.22
Epoch [87/200], Step [200/600], d_loss: 0.7513, g_loss: 1.6684, D(x): 0.72, D(G(z)): 0.23
Epoch [87/200], Step [400/600], d_loss: 0.7172, g_loss: 1.7539, D(x): 0.82, D(G(z)): 0.30
Epoch [87/200], Step [600/600], d_loss: 1.0858, g_loss: 2.5704, D(x): 0.64, D(G(z)): 0.24
Epoch [88/200], Step [200/600], d_loss: 0.8175, g_loss: 2.5280, D(x): 0.74, D(G(z)): 0.25
Epoch [88/200], Step [400/600], d_loss: 0.7610, g_loss: 1.8840, D(x): 0.78, D(G(z)): 0.28
Epoch [88/200], Step [600/600], d_loss: 0.7132, g_loss: 2.2979, D(x): 0.72, D(G(z)): 0.18
Epoch [89/200], Step [200/600], d_loss: 0.8786, g_loss: 1.6000, D(x): 0.72, D(G(z)): 0.25
Epoch [89/200], Step [400/600], d_loss: 0.7933, g_loss: 1.9502, D(x): 0.71, D(G(z)): 0.23
Epoch [89/200], Step [600/600], d_loss: 0.5541, g_loss: 2.7170, D(x): 0.81, D(G(z)): 0.19
Epoch [90/200], Step [200/600], d_loss: 0.9670, g_loss: 1.5065, D(x): 0.64, D(G(z)): 0.23
Epoch [90/200], Step [400/600], d_loss: 0.8945, g_loss: 1.5057, D(x): 0.74, D(G(z)): 0.32
Epoch [90/200], Step [600/600], d_loss: 0.8275, g_loss: 1.9291, D(x): 0.66, D(G(z)): 0.20
Epoch [91/200], Step [200/600], d_loss: 0.7411, g_loss: 2.1024, D(x): 0.75, D(G(z)): 0.24
Epoch [91/200], Step [400/600], d_loss: 0.8613, g_loss: 1.7976, D(x): 0.72, D(G(z)): 0.27
Epoch [91/200], Step [600/600], d_loss: 0.9204, g_loss: 1.9358, D(x): 0.81, D(G(z)): 0.38
Epoch [92/200], Step [200/600], d_loss: 0.5769, g_loss: 1.8164, D(x): 0.85, D(G(z)): 0.26
Epoch [92/200], Step [400/600], d_loss: 0.8222, g_loss: 1.5064, D(x): 0.73, D(G(z)): 0.26
Epoch [92/200], Step [600/600], d_loss: 0.5844, g_loss: 2.5825, D(x): 0.78, D(G(z)): 0.20
Epoch [93/200], Step [200/600], d_loss: 0.6836, g_loss: 1.9087, D(x): 0.74, D(G(z)): 0.22
Epoch [93/200], Step [400/600], d_loss: 0.7328, g_loss: 1.7869, D(x): 0.72, D(G(z)): 0.22
Epoch [93/200], Step [600/600], d_loss: 0.7112, g_loss: 1.6405, D(x): 0.79, D(G(z)): 0.28
Epoch [94/200], Step [200/600], d_loss: 0.7915, g_loss: 1.7422, D(x): 0.71, D(G(z)): 0.24
Epoch [94/200], Step [400/600], d_loss: 0.7935, g_loss: 1.4080, D(x): 0.85, D(G(z)): 0.36
Epoch [94/200], Step [600/600], d_loss: 1.0396, g_loss: 1.4284, D(x): 0.76, D(G(z)): 0.35
Epoch [95/200], Step [200/600], d_loss: 0.7436, g_loss: 2.2755, D(x): 0.73, D(G(z)): 0.18
Epoch [95/200], Step [400/600], d_loss: 0.8688, g_loss: 1.8874, D(x): 0.76, D(G(z)): 0.30
Epoch [95/200], Step [600/600], d_loss: 0.8444, g_loss: 2.3308, D(x): 0.66, D(G(z)): 0.19
Epoch [96/200], Step [200/600], d_loss: 0.7442, g_loss: 2.3771, D(x): 0.73, D(G(z)): 0.22
Epoch [96/200], Step [400/600], d_loss: 0.7074, g_loss: 2.2321, D(x): 0.74, D(G(z)): 0.18
Epoch [96/200], Step [600/600], d_loss: 0.8868, g_loss: 1.4519, D(x): 0.67, D(G(z)): 0.21
Epoch [97/200], Step [200/600], d_loss: 0.8345, g_loss: 1.6682, D(x): 0.72, D(G(z)): 0.28
Epoch [97/200], Step [400/600], d_loss: 0.7934, g_loss: 2.1283, D(x): 0.68, D(G(z)): 0.17
Epoch [97/200], Step [600/600], d_loss: 0.7260, g_loss: 1.4867, D(x): 0.76, D(G(z)): 0.25
Epoch [98/200], Step [200/600], d_loss: 0.7596, g_loss: 2.1651, D(x): 0.76, D(G(z)): 0.24
Epoch [98/200], Step [400/600], d_loss: 0.8643, g_loss: 1.9000, D(x): 0.69, D(G(z)): 0.24
Epoch [98/200], Step [600/600], d_loss: 0.7319, g_loss: 1.6552, D(x): 0.75, D(G(z)): 0.25
Epoch [99/200], Step [200/600], d_loss: 0.7711, g_loss: 1.7455, D(x): 0.69, D(G(z)): 0.20
Epoch [99/200], Step [400/600], d_loss: 0.6971, g_loss: 2.1426, D(x): 0.80, D(G(z)): 0.25
Epoch [99/200], Step [600/600], d_loss: 0.8339, g_loss: 2.5612, D(x): 0.68, D(G(z)): 0.23
Epoch [100/200], Step [200/600], d_loss: 0.9574, g_loss: 1.6630, D(x): 0.65, D(G(z)): 0.26
Epoch [100/200], Step [400/600], d_loss: 0.9069, g_loss: 1.6404, D(x): 0.65, D(G(z)): 0.23
Epoch [100/200], Step [600/600], d_loss: 0.7358, g_loss: 1.6987, D(x): 0.74, D(G(z)): 0.24
Epoch [101/200], Step [200/600], d_loss: 0.8096, g_loss: 1.5011, D(x): 0.73, D(G(z)): 0.26
Epoch [101/200], Step [400/600], d_loss: 0.9509, g_loss: 1.5224, D(x): 0.75, D(G(z)): 0.32
Epoch [101/200], Step [600/600], d_loss: 1.0163, g_loss: 1.7723, D(x): 0.76, D(G(z)): 0.41
Epoch [102/200], Step [200/600], d_loss: 0.9929, g_loss: 1.3493, D(x): 0.73, D(G(z)): 0.37
Epoch [102/200], Step [400/600], d_loss: 0.8234, g_loss: 2.2110, D(x): 0.67, D(G(z)): 0.19
Epoch [102/200], Step [600/600], d_loss: 0.6964, g_loss: 2.1019, D(x): 0.74, D(G(z)): 0.21

Saving the generator and discriminator models

# Save the model checkpoints
torch.save(G.state_dict(), 'G.ckpt')
torch.save(D.state_dict(), 'D.ckpt')
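
After training, the saved weights can be loaded back into a generator with the same architecture to produce new digits without retraining. The following is a minimal sketch (not from the original post); it assumes G, latent_size, sample_dir, denorm, and device are defined as above:

# Reload the generator weights and sample a grid of fake digits
G.load_state_dict(torch.load('G.ckpt', map_location=device))
G.eval()
with torch.no_grad():
    z = torch.randn(64, latent_size).to(device)
    samples = G(z).reshape(-1, 1, 28, 28)
    save_image(denorm(samples), os.path.join(sample_dir, 'loaded_samples.png'), nrow=8)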

Training results

Real images from the MNIST dataset provided at the start:
(image)
Images produced by the generator after 1 epoch:
(image)

Images produced by the generator after 10 epochs:
(image)

Images produced by the generator after 50 epochs:
(image)

Images produced by the generator after 103 epochs:
(image)
We can see that as the number of training epochs increases, the images produced by our generator gradually come to resemble the real MNIST images.
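
The per-epoch sample images saved during training can also be stitched into an animation to visualize this progression. Below is a minimal sketch using the imageio package imported earlier; it assumes the fake_images-*.png files saved by the training loop are still in sample_dir, and skips any epochs that were not actually run:

# Combine the saved fake_images-*.png files into a GIF showing training progress
frames = []
for epoch in range(1, num_epochs + 1):
    path = os.path.join(sample_dir, 'fake_images-{}.png'.format(epoch))
    if os.path.exists(path):               # only epochs that were actually saved
        frames.append(imageio.imread(path))
imageio.mimsave(os.path.join(sample_dir, 'training_progress.gif'), frames, duration=0.2)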
