Neural Networks from Scratch in Python (Part 6): Introducing Optimization

Introducing Optimization

  • Introduction

Introduction

With a randomly-initialized model, or even a model initialized with more sophisticated approaches, our goal is to train, or teach, the model over time. To train a model, we tweak the weights and biases to improve the model's accuracy and confidence. To do this, we need to calculate how much error the model has. The loss function, also referred to as the cost function, is the algorithm that quantifies how wrong a model is, and loss is the measure of this metric. Since loss is the model's error, we ideally want it to be 0.

You may wonder why we don't calculate the model's error based on argmax accuracy instead. Recall our earlier example of confidence levels: [0.22, 0.6, 0.18] versus [0.32, 0.36, 0.32]. If the correct class is indeed the middle one (index 1), then the model accuracy is identical for the two examples. But are they really as accurate as each other? They are not, because accuracy simply applies an argmax to the output to find the index of the largest value. The outputs of a neural network are actually confidence levels, and more confidence in the correct answer is better. Therefore, we strive to increase the confidence placed on the correct class and to decrease misplaced confidence.
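To make this concrete, here is a minimal sketch (our illustration, written with plain NumPy and the per-sample categorical cross-entropy from earlier chapters) showing that accuracy cannot tell the two outputs apart, while loss can:

import numpy as np

# Two softmax outputs for the same sample; the correct class is index 1
a = np.array([0.22, 0.6, 0.18])
b = np.array([0.32, 0.36, 0.32])

# Accuracy sees no difference: argmax picks index 1 in both cases
print(np.argmax(a), np.argmax(b))  # 1 1

# Categorical cross-entropy does see one: higher confidence in the
# correct class means lower loss
print(-np.log(a[1]))  # ~0.511
print(-np.log(b[1]))  # ~1.022

With that motivation in place, let's generate and plot a new dataset to experiment with, called vertical data: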

import matplotlib.pyplot as plt
import nnfs
from nnfs.datasets import vertical_data

nnfs.init()

X, y = vertical_data(samples=100, classes=3)
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap='brg')
plt.show()

[Figure: scatter plot of the vertical_data dataset, 100 samples per class, with the three classes shown in different colors]

Using the code created previously, we can use this new dataset with a simple neural network:

# Create dataset
X, y = vertical_data(samples=100, classes=3)

# Create model
dense1 = Layer_Dense(2, 3) # first dense layer, 2 inputs
activation1 = Activation_ReLU()
dense2 = Layer_Dense(3, 3) # second dense layer, 3 inputs, 3 outputs
activation2 = Activation_Softmax()

# Create loss function
loss_function = Loss_CategoricalCrossentropy()

Then we create some helper variables to track the lowest loss seen so far, along with the associated weights and biases:

# Helper variables
lowest_loss = 9999999 # some initial value
best_dense1_weights = dense1.weights.copy()
best_dense1_biases = dense1.biases.copy()
best_dense2_weights = dense2.weights.copy()
best_dense2_biases = dense2.biases.copy()

We initialize the loss to a large value, and we will lower this value whenever a new, smaller loss is found. We also make copies of the weights and biases (copy() ensures we store a full copy of the array rather than a reference to the object).
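If the difference between a reference and a copy is unfamiliar, here is a quick NumPy sketch (an illustrative aside, not part of the model code):

import numpy as np

w = np.zeros((1, 3))
ref = w          # a reference: points at the same underlying array
snap = w.copy()  # a copy: independent memory
w += 1.0         # modify the original in place
print(ref)   # [[1. 1. 1.]] - the reference reflects the change
print(snap)  # [[0. 0. 0.]] - the copy is unaffected

Now we iterate as many times as we wish, choose a new set of random values for the weights and biases on each iteration, and save the parameters whenever they produce the lowest loss seen so far: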

for iteration in range(10000):
    # Generate a new set of weights for iteration
    dense1.weights = 0.05 * np.random.randn(2, 3)
    dense1.biases = 0.05 * np.random.randn(1, 3)
    dense2.weights = 0.05 * np.random.randn(3, 3)
    dense2.biases = 0.05 * np.random.randn(1, 3)
    
    # Perform a forward pass of the training data through this layer
    dense1.forward(X)
    activation1.forward(dense1.output)
    dense2.forward(activation1.output)
    activation2.forward(dense2.output)
    
    # Perform a forward pass through the loss function -
    # it takes the output of the second dense layer here and returns loss
    loss = loss_function.calculate(activation2.output, y)
    
    # Calculate accuracy from output of activation2 and targets
    # calculate values along first axis
    predictions = np.argmax(activation2.output, axis=1)
    accuracy = np.mean(predictions==y)
    
    # If loss is smaller - print and save weights and biases aside
    if loss < lowest_loss:
        print('New set of weights found, iteration:', iteration, 'loss:', loss, 'acc:', accuracy)
        best_dense1_weights = dense1.weights.copy()
        best_dense1_biases = dense1.biases.copy()
        best_dense2_weights = dense2.weights.copy()
        best_dense2_biases = dense2.biases.copy()
        lowest_loss = loss
>>>
New set of weights found, iteration: 0 loss: 1.0986564 acc: 0.3333333333333333
New set of weights found, iteration: 3 loss: 1.098138 acc: 0.3333333333333333
New set of weights found, iteration: 117 loss: 1.0980115 acc: 0.3333333333333333
New set of weights found, iteration: 124 loss: 1.0977516 acc: 0.6
New set of weights found, iteration: 165 loss: 1.097571 acc: 0.3333333333333333
New set of weights found, iteration: 552 loss: 1.0974693 acc: 0.34
New set of weights found, iteration: 778 loss: 1.0968257 acc: 0.3333333333333333
New set of weights found, iteration: 4307 loss: 1.0965533 acc: 0.3333333333333333
New set of weights found, iteration: 4615 loss: 1.0964499 acc: 0.3333333333333333
New set of weights found, iteration: 9450 loss: 1.0964295 acc: 0.3333333333333333
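An aside worth noticing (our observation, not part of the original text): the loss barely moves below roughly 1.0986, which is exactly the loss of a model that spreads its confidence evenly across all three classes:

import numpy as np

# A uniform 3-class prediction has a per-sample loss of -log(1/3)
print(-np.log(1 / 3))  # ~1.0986

Together with the accuracy hovering around 0.3333 (random guessing among 3 balanced classes), this tells us the random search is learning essentially nothing.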

The full code up to this point:

import numpy as np
import matplotlib.pyplot as plt
import nnfs
from nnfs.datasets import vertical_data


nnfs.init()


# Dense layer
class Layer_Dense:
    # Layer initialization
    def __init__(self, n_inputs, n_neurons):
        # Initialize weights and biases
        self.weights = 0.01 * np.random.randn(n_inputs, n_neurons)
        self.biases = np.zeros((1, n_neurons))
        
    # Forward pass
    def forward(self, inputs):
        # Calculate output values from inputs, weights and biases
        self.output = np.dot(inputs, self.weights) + self.biases
        

# ReLU activation
class Activation_ReLU:
    # Forward pass
    def forward(self, inputs):
        # Calculate output values from input
        self.output = np.maximum(0, inputs)
        
        
# Softmax activation
class Activation_Softmax:
    # Forward pass
    def forward(self, inputs):
        # Get unnormalized probabilities
        exp_values = np.exp(inputs - np.max(inputs, axis=1, 
                                            keepdims=True))
        # Normalize them for each sample
        probabilities = exp_values / np.sum(exp_values, axis=1, 
                                            keepdims=True)
        self.output = probabilities
        
        
# Common loss class
class Loss:
    # Calculates the data and regularization losses
    # given model output and ground truth values
    def calculate(self, output, y):
        # Calculate sample losses
        sample_losses = self.forward(output, y)
        # Calculate mean loss
        data_loss = np.mean(sample_losses)
        # Return loss
        return data_loss
    
    
# Cross-entropy loss
class Loss_CategoricalCrossentropy(Loss):
    # Forward pass
    def forward(self, y_pred, y_true):
        # Number of samples in a batch
        samples = len(y_pred)
        # Clip data to prevent division by 0
        # Clip both sides to not drag mean towards any value
        y_pred_clipped = np.clip(y_pred, 1e-7, 1 - 1e-7)
        # Probabilities for target values -
        # only if categorical labels
        if len(y_true.shape) == 1:
            correct_confidences = y_pred_clipped[range(samples), y_true]
        # Mask values - only for one-hot encoded labels
        elif len(y_true.shape) == 2:
            correct_confidences = np.sum(y_pred_clipped * y_true, axis=1)
        # Losses
        negative_log_likelihoods = -np.log(correct_confidences)
        return negative_log_likelihoods


# Create dataset
X, y = vertical_data(samples=100, classes=3)

# Create model
dense1 = Layer_Dense(2, 3) # first dense layer, 2 inputs
activation1 = Activation_ReLU()
dense2 = Layer_Dense(3, 3) # second dense layer, 3 inputs, 3 outputs
activation2 = Activation_Softmax()

# Create loss function
loss_function = Loss_CategoricalCrossentropy()

# Helper variables
lowest_loss = 9999999 # some initial value
best_dense1_weights = dense1.weights.copy()
best_dense1_biases = dense1.biases.copy()
best_dense2_weights = dense2.weights.copy()
best_dense2_biases = dense2.biases.copy()

for iteration in range(10000):
    # Generate a new set of weights for iteration
    dense1.weights = 0.05 * np.random.randn(2, 3)
    dense1.biases = 0.05 * np.random.randn(1, 3)
    dense2.weights = 0.05 * np.random.randn(3, 3)
    dense2.biases = 0.05 * np.random.randn(1, 3)
    
    # Perform a forward pass of the training data through this layer
    dense1.forward(X)
    activation1.forward(dense1.output)
    dense2.forward(activation1.output)
    activation2.forward(dense2.output)
    
    # Perform a forward pass through the loss function -
    # it takes the output of the second dense layer here and returns loss
    loss = loss_function.calculate(activation2.output, y)
    
    # Calculate accuracy from output of activation2 and targets
    # calculate values along first axis
    predictions = np.argmax(activation2.output, axis=1)
    accuracy = np.mean(predictions==y)
    
    # If loss is smaller - print and save weights and biases aside
    if loss < lowest_loss:
        print('New set of weights found, iteration:', iteration, 'loss:', loss, 'acc:', accuracy)
        best_dense1_weights = dense1.weights.copy()
        best_dense1_biases = dense1.biases.copy()
        best_dense2_weights = dense2.weights.copy()
        best_dense2_biases = dense2.biases.copy()
        lowest_loss = loss

The loss certainly dropped, though not by much, and accuracy did not improve, apart from one case where the model randomly found a set of weights that happened to give better accuracy. With a still-sizable loss, however, this state is not stable. Running for a further 90,000 iterations, 100,000 in total:

>>>
New set of weights found, iteration: 13361 loss: 1.0963014 acc: 0.3333333333333333
New set of weights found, iteration: 14001 loss: 1.0959858 acc: 0.3333333333333333
New set of weights found, iteration: 24598 loss: 1.0947443 acc: 0.3333333333333333

Loss continued to fall, but accuracy did not change. This does not appear to be a reliable way to minimize loss. Here is the best (lowest-loss) result after running for 1 billion iterations:

>>>
New set of weights found, iteration: 229865000 loss: 1.0911305 acc: 0.3333333333333333

Even with this basic dataset, we can see that randomly searching for weight and bias combinations will take far too long to be an acceptable method. Another idea: rather than setting the parameters to freshly-picked random values on every iteration, apply a fraction of these random values as an adjustment to the current parameters. With this approach, updates build on whatever set of weights has so far produced the lowest loss, rather than being aimlessly random. If an adjustment decreases the loss, we make it the new point to adjust from. If it instead increases the loss, we revert to the previous point. Using code similar to before, we first change from randomly selecting weights and biases to randomly adjusting them:

# Update weights with some small random values
dense1.weights += 0.05 * np.random.randn(2, 3)
dense1.biases += 0.05 * np.random.randn(1, 3)
dense2.weights += 0.05 * np.random.randn(3, 3)
dense2.biases += 0.05 * np.random.randn(1, 3)

Then we change the ending if statement to:

# If loss is smaller - print and save weights and biases aside
if loss < lowest_loss:
    print('New set of weights found, iteration:', iteration, 'loss:', loss, 'acc:', accuracy)
    best_dense1_weights = dense1.weights.copy()
    best_dense1_biases = dense1.biases.copy()
    best_dense2_weights = dense2.weights.copy()
    best_dense2_biases = dense2.biases.copy()
    lowest_loss = loss
# Revert weights and biases
else:
    dense1.weights = best_dense1_weights.copy()
    dense1.biases = best_dense1_biases.copy()
    dense2.weights = best_dense2_weights.copy()
    dense2.biases = best_dense2_biases.copy()

The full code after these changes:

import numpy as np
import matplotlib.pyplot as plt
import nnfs
from nnfs.datasets import vertical_data


# Dense layer
class Layer_Dense:
    # Layer initialization
    def __init__(self, n_inputs, n_neurons):
        # Initialize weights and biases
        self.weights = 0.01 * np.random.randn(n_inputs, n_neurons)
        self.biases = np.zeros((1, n_neurons))
        
    # Forward pass
    def forward(self, inputs):
        # Calculate output values from inputs, weights and biases
        self.output = np.dot(inputs, self.weights) + self.biases
        

# ReLU activation
class Activation_ReLU:
    # Forward pass
    def forward(self, inputs):
        # Calculate output values from input
        self.output = np.maximum(0, inputs)
        
        
# Softmax activation
class Activation_Softmax:
    # Forward pass
    def forward(self, inputs):
        # Get unnormalized probabilities
        exp_values = np.exp(inputs - np.max(inputs, axis=1, 
                                            keepdims=True))
        # Normalize them for each sample
        probabilities = exp_values / np.sum(exp_values, axis=1, 
                                            keepdims=True)
        self.output = probabilities
        
        
# Common loss class
class Loss:
    # Calculates the data and regularization losses
    # given model output and ground truth values
    def calculate(self, output, y):
        # Calculate sample losses
        sample_losses = self.forward(output, y)
        # Calculate mean loss
        data_loss = np.mean(sample_losses)
        # Return loss
        return data_loss
    
    
# Cross-entropy loss
class Loss_CategoricalCrossentropy(Loss):
    # Forward pass
    def forward(self, y_pred, y_true):
        # Number of samples in a batch
        samples = len(y_pred)
        # Clip data to prevent division by 0
        # Clip both sides to not drag mean towards any value
        y_pred_clipped = np.clip(y_pred, 1e-7, 1 - 1e-7)
        # Probabilities for target values -
        # only if categorical labels
        if len(y_true.shape) == 1:
            correct_confidences = y_pred_clipped[range(samples), y_true]
        # Mask values - only for one-hot encoded labels
        elif len(y_true.shape) == 2:
            correct_confidences = np.sum(y_pred_clipped * y_true, axis=1)
        # Losses
        negative_log_likelihoods = -np.log(correct_confidences)
        return negative_log_likelihoods


nnfs.init()

# Create dataset
X, y = vertical_data(samples=100, classes=3)

# Create model
dense1 = Layer_Dense(2, 3) # first dense layer, 2 inputs
activation1 = Activation_ReLU()
dense2 = Layer_Dense(3, 3) # second dense layer, 3 inputs, 3 outputs
activation2 = Activation_Softmax()

# Create loss function
loss_function = Loss_CategoricalCrossentropy()

# Helper variables
lowest_loss = 9999999 # some initial value
best_dense1_weights = dense1.weights.copy()
best_dense1_biases = dense1.biases.copy()
best_dense2_weights = dense2.weights.copy()
best_dense2_biases = dense2.biases.copy()

for iteration in range(10000):
    # Update weights with some small random values
    dense1.weights += 0.05 * np.random.randn(2, 3)
    dense1.biases += 0.05 * np.random.randn(1, 3)
    dense2.weights += 0.05 * np.random.randn(3, 3)
    dense2.biases += 0.05 * np.random.randn(1, 3)
    
    # Perform a forward pass of the training data through this layer
    dense1.forward(X)
    activation1.forward(dense1.output)
    dense2.forward(activation1.output)
    activation2.forward(dense2.output)
    
    # Perform a forward pass through the loss function -
    # it takes the output of the second dense layer here and returns loss
    loss = loss_function.calculate(activation2.output, y)
    
    # Calculate accuracy from output of activation2 and targets
    # calculate values along first axis
    predictions = np.argmax(activation2.output, axis=1)
    accuracy = np.mean(predictions==y)
    
    # If loss is smaller - print and save weights and biases aside
    if loss < lowest_loss:
        print('New set of weights found, iteration:', iteration, 'loss:', loss, 'acc:', accuracy)
        best_dense1_weights = dense1.weights.copy()
        best_dense1_biases = dense1.biases.copy()
        best_dense2_weights = dense2.weights.copy()
        best_dense2_biases = dense2.biases.copy()
        lowest_loss = loss
    # Revert weights and biases
    else:
        dense1.weights = best_dense1_weights.copy()
        dense1.biases = best_dense1_biases.copy()
        dense2.weights = best_dense2_weights.copy()
        dense2.biases = best_dense2_biases.copy()
>>>
New set of weights found, iteration: 0 loss: 1.0987684 acc: 0.3333333333333333 
...
New set of weights found, iteration: 29 loss: 1.0725244 acc: 0.5266666666666666
New set of weights found, iteration: 30 loss: 1.0724432 acc: 0.3466666666666667 
...
New set of weights found, iteration: 48 loss: 1.0303522 acc: 0.6666666666666666
New set of weights found, iteration: 49 loss: 1.0292586 acc: 0.6666666666666666 
...
New set of weights found, iteration: 97 loss: 0.9277446 acc: 0.7333333333333333 
...
New set of weights found, iteration: 152 loss: 0.73390484 acc: 0.8433333333333334
New set of weights found, iteration: 156 loss: 0.7235515 acc: 0.87
New set of weights found, iteration: 160 loss: 0.7049076 acc: 0.9066666666666666 
...
New set of weights found, iteration: 7446 loss: 0.17280102 acc: 0.9333333333333333
New set of weights found, iteration: 9397 loss: 0.17279711 acc: 0.93

This time the loss descended by a meaningful amount, and accuracy rose significantly. Applying a fraction of random values as adjustments led to a result we could almost call a solution. Trying 100,000 iterations will not help much more:

>>>
...
New set of weights found, iteration: 14206 loss: 0.1727932 acc: 0.9333333333333333
New set of weights found, iteration: 63704 loss: 0.17278232 acc: 0.9333333333333333

Let's try the spiral dataset that we saw earlier, swapping out the dataset creation in the previous code for:

from nnfs.datasets import spiral_data

X, y = spiral_data(samples=100, classes=3)
>>>
New set of weights found, iteration: 0 loss: 1.1008677 acc: 0.3333333333333333 
...
New set of weights found, iteration: 31 loss: 1.0982264 acc: 0.37333333333333335 
...
New set of weights found, iteration: 65 loss: 1.0954362 acc: 0.38333333333333336
New set of weights found, iteration: 67 loss: 1.093989 acc: 0.4166666666666667 
...
New set of weights found, iteration: 129 loss: 1.0874122 acc: 0.42333333333333334 
...
New set of weights found, iteration: 5415 loss: 1.0790575 acc: 0.39

This time training made almost no progress. Loss decreased slightly, and accuracy is barely above the initial value. As we will learn later, the most likely cause of this is what's called a local minimum of the loss. The complexity of the data is also not irrelevant here. It turns out hard problems are hard for a reason, and we need to approach them more intelligently.
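To build some intuition for what getting stuck in a local minimum can look like, here is a toy sketch (our illustration, not the book's; the function f below is made up for the example). It applies the same accept/revert random search to a simple one-dimensional loss with two basins:

import numpy as np

np.random.seed(0)

# Hypothetical toy loss: a shallow local minimum near x = 1.35 and a
# deeper global minimum near x = -1.47
def f(x):
    return x**4 - 4 * x**2 + x

x_best = 1.8               # start inside the shallower basin
loss_best = f(x_best)
for _ in range(10000):
    x_try = x_best + 0.05 * np.random.randn()  # small random tweak
    if f(x_try) < loss_best:                   # keep only improvements
        x_best, loss_best = x_try, f(x_try)
print(x_best, loss_best)   # settles near x = 1.35; never finds x = -1.47

Because the accept/revert rule never takes an uphill step, and the tweaks are far smaller than the barrier between the two basins, the search settles into whichever basin it starts in. The same effect occurs in a network's much higher-dimensional weight space.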

Chapter code, additional resources, and errata for this chapter: https://nnfs.io/ch6
