Fixing the PyTorch DataLoader error "DataLoader worker exited unexpectedly"

Note: I did not rewrite the d2l source file; instead I created a new Python file and reimplemented the function there.

1. Code run log

C:\Users\Administrator\anaconda3\envs\limu\python.exe G:/PyCharmProjects/limu-d2l/ch03/softmax_regression.py
Each spawned DataLoader worker process fails with the same traceback (the copies are interleaved in the raw output):

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\Administrator\anaconda3\envs\limu\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "C:\Users\Administrator\anaconda3\envs\limu\lib\multiprocessing\spawn.py", line 125, in _main
    prepare(preparation_data)
  File "C:\Users\Administrator\anaconda3\envs\limu\lib\multiprocessing\spawn.py", line 236, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "C:\Users\Administrator\anaconda3\envs\limu\lib\multiprocessing\spawn.py", line 287, in _fixup_main_from_path
    main_content = runpy.run_path(main_path,
  File "C:\Users\Administrator\anaconda3\envs\limu\lib\runpy.py", line 265, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "C:\Users\Administrator\anaconda3\envs\limu\lib\runpy.py", line 97, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "C:\Users\Administrator\anaconda3\envs\limu\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "G:\PyCharmProjects\limu-d2l\ch03\softmax_regression.py", line 97, in <module>
    train_ch03(net, train_iter, test_iter, cross_entropy, num_epochs, updater)
  File "G:\PyCharmProjects\limu-d2l\ch03\softmax_regression.py", line 81, in train_ch03
    train_loss, train_acc = train_epoch_ch03(net, train_iter, loss, updater)
  File "G:\PyCharmProjects\limu-d2l\ch03\softmax_regression.py", line 64, in train_epoch_ch03
    for X, y in train_iter:
  File "C:\Users\Administrator\anaconda3\envs\limu\lib\site-packages\torch\utils\data\dataloader.py", line 441, in __iter__
    return self._get_iterator()
  File "C:\Users\Administrator\anaconda3\envs\limu\lib\site-packages\torch\utils\data\dataloader.py", line 388, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\Administrator\anaconda3\envs\limu\lib\site-packages\torch\utils\data\dataloader.py", line 1042, in __init__
    w.start()
  File "C:\Users\Administrator\anaconda3\envs\limu\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "C:\Users\Administrator\anaconda3\envs\limu\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\Administrator\anaconda3\envs\limu\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "C:\Users\Administrator\anaconda3\envs\limu\lib\multiprocessing\popen_spawn_win32.py", line 45, in __init__
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "C:\Users\Administrator\anaconda3\envs\limu\lib\multiprocessing\spawn.py", line 154, in get_preparation_data
    _check_not_importing_main()
  File "C:\Users\Administrator\anaconda3\envs\limu\lib\multiprocessing\spawn.py", line 134, in _check_not_importing_main
    raise RuntimeError('''
RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.
Traceback (most recent call last):
  File "C:\Users\Administrator\anaconda3\envs\limu\lib\site-packages\torch\utils\data\dataloader.py", line 1132, in _try_get_data
    data = self._data_queue.get(timeout=timeout)
  File "C:\Users\Administrator\anaconda3\envs\limu\lib\multiprocessing\queues.py", line 108, in get
    raise Empty
_queue.Empty

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "G:/PyCharmProjects/limu-d2l/ch03/softmax_regression.py", line 97, in <module>
    train_ch03(net, train_iter, test_iter, cross_entropy, num_epochs, updater)
  File "G:/PyCharmProjects/limu-d2l/ch03/softmax_regression.py", line 81, in train_ch03
    train_loss, train_acc = train_epoch_ch03(net, train_iter, loss, updater)
  File "G:/PyCharmProjects/limu-d2l/ch03/softmax_regression.py", line 64, in train_epoch_ch03
    for X, y in train_iter:
  File "C:\Users\Administrator\anaconda3\envs\limu\lib\site-packages\torch\utils\data\dataloader.py", line 633, in __next__
    data = self._next_data()
  File "C:\Users\Administrator\anaconda3\envs\limu\lib\site-packages\torch\utils\data\dataloader.py", line 1328, in _next_data
    idx, data = self._get_data()
  File "C:\Users\Administrator\anaconda3\envs\limu\lib\site-packages\torch\utils\data\dataloader.py", line 1294, in _get_data
    success, data = self._try_get_data()
  File "C:\Users\Administrator\anaconda3\envs\limu\lib\site-packages\torch\utils\data\dataloader.py", line 1145, in _try_get_data
    raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
RuntimeError: DataLoader worker (pid(s) 14032, 23312, 21048, 1952) exited unexpectedly

Process finished with exit code 1

2. Problem analysis

This error comes from using a multi-process DataLoader and is most common on Windows. Windows has no fork(), so worker processes are started with spawn, which re-imports and re-runs the main module. Because the training script creates the DataLoader and starts training at module level (not under an if __name__ == '__main__': guard), every spawned worker re-executes that code and tries to start workers of its own — exactly the situation described by the "An attempt has been made to start a new process before the current process has finished its bootstrapping phase" message in the log. The workers die, and the main process then reports "DataLoader worker (pid(s) ...) exited unexpectedly".

3. Solution (use a single-process DataLoader)

On Windows, set the DataLoader's num_workers parameter to 0 so that data is loaded in a single process. This disables multi-process data loading; it may make loading slower, but it usually eliminates the errors associated with multi-process DataLoaders.
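An alternative worth noting — it is what the RuntimeError text itself suggests — is to keep num_workers > 0 but move the script's entry point under an if __name__ == '__main__': guard, so that the spawned workers can re-import the module without re-running the training code. A minimal sketch (fashion_mnist and train_ch03 are the names defined later in this article):

import fashion_mnist

def main():
    batch_size = 256
    # With the guard in place, num_workers > 0 also works on Windows.
    train_iter, test_iter = fashion_mnist.load_data_fashion_mnist(batch_size, num_workers=4)
    # ... build W, b, net, cross_entropy, updater as in the full listing below,
    # then call train_ch03(net, train_iter, test_iter, cross_entropy, num_epochs, updater)

if __name__ == '__main__':
    main()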

Source code of d2l.load_data_fashion_mnist(batch_size):

def get_dataloader_workers():
    """Use 4 processes to read the data.

    Defined in :numref:`sec_utils`"""
    return 4

def load_data_fashion_mnist(batch_size, resize=None):
    """Download the Fashion-MNIST dataset and then load it into memory.

    Defined in :numref:`sec_utils`"""
    trans = [transforms.ToTensor()]
    if resize:
        trans.insert(0, transforms.Resize(resize))
    trans = transforms.Compose(trans)
    mnist_train = torchvision.datasets.FashionMNIST(
        root="../data", train=True, transform=trans, download=True)
    mnist_test = torchvision.datasets.FashionMNIST(
        root="../data", train=False, transform=trans, download=True)
    return (torch.utils.data.DataLoader(mnist_train, batch_size, shuffle=True,
                                        num_workers=get_dataloader_workers()),
            torch.utils.data.DataLoader(mnist_test, batch_size, shuffle=False,
                                        num_workers=get_dataloader_workers()))

Using a modified load_data_fashion_mnist in your own code. The d2l version above hard-codes four worker processes via get_dataloader_workers(); the rewritten version below exposes num_workers as a parameter so it can be set to 0 on Windows:

import torchvision
from torch.utils import data
from torchvision import transforms


def load_data_fashion_mnist(batch_size, resize=None, num_workers=4):
    """Download the Fashion-MNIST dataset and load it into memory."""
    trans = [transforms.ToTensor()]
    if resize:
        trans.insert(0, transforms.Resize(resize))
    trans = transforms.Compose(trans)

    mnist_train = torchvision.datasets.FashionMNIST(root='../data', train=True, transform=trans, download=True)
    mnist_test = torchvision.datasets.FashionMNIST(root='../data', train=False, transform=trans, download=True)
    return (data.DataLoader(mnist_train, batch_size, shuffle=True, num_workers=num_workers),
            data.DataLoader(mnist_test, batch_size, shuffle=False, num_workers=num_workers))

# Call it from the training script with num_workers=0:
train_iter, test_iter = fashion_mnist.load_data_fashion_mnist(batch_size, num_workers=0)


4. Why does num_workers=0 mean single-process loading rather than zero-process loading?

In PyTorch's DataLoader, the num_workers parameter controls how many subprocesses are used to load data. Setting num_workers to 0 means no worker subprocesses are created at all: the data is loaded in the process that iterates over the DataLoader, i.e. single-process loading.

Why is this not "zero processes"? Because at least one process is always involved in loading the data: the main process, which fetches the data and feeds it to the training loop. With num_workers=0, only the main process loads and preprocesses batches, with no additional subprocesses — hence single-process data loading.

If num_workers is set to 1, one extra loader subprocess is created, for two processes in total: the main process and one worker. This can improve data-loading throughput, especially when per-batch loading or preprocessing is expensive, because the worker prepares the next batch in parallel while the main process is busy training.
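To see the trade-off concretely, you can time one full pass over the training DataLoader at different num_workers settings. A minimal sketch, assuming the modified fashion_mnist module from above is importable; on Windows the loop must sit under the __main__ guard, or num_workers > 0 will hit the same spawn error:

import time

import fashion_mnist

def time_one_pass(num_workers, batch_size=256):
    # Build the loaders and simply iterate over the training set once.
    train_iter, _ = fashion_mnist.load_data_fashion_mnist(batch_size, num_workers=num_workers)
    start = time.time()
    for X, y in train_iter:
        pass
    return time.time() - start

if __name__ == '__main__':
    for n in (0, 1, 4):
        print(f'num_workers={n}: {time_one_pass(n):.1f} s')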

5. Full training code

import torch
from d2l import torch as d2l
import fashion_mnist

batch_size = 256
train_iter, test_iter = fashion_mnist.load_data_fashion_mnist(batch_size, num_workers=0)

# Initialize model parameters
num_inputs = 784  # each input image has 1 channel and 28x28 pixels, flattened to 784
num_outputs = 10

W = torch.normal(0, 0.01, size=(num_inputs, num_outputs), requires_grad=True)
b = torch.zeros(num_outputs, requires_grad=True)


# Define the softmax operation
def softmax(X):
    """
    矩阵中的非常大或非常小的元素可能造成数值上溢或者下溢
    解决方案: P84 3.7.2 重新审视softmax的实现
    """
    X_exp = torch.exp(X)
    partition = X_exp.sum(1, keepdim=True)
    return X_exp / partition
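

# A numerically stable variant of softmax (the fix referenced in the docstring
# above): subtracting each row's maximum before exponentiating leaves the output
# unchanged but keeps torch.exp from overflowing. Shown for reference only; the
# training below keeps the simple version.
def softmax_stable(X):
    X = X - X.max(dim=1, keepdim=True).values
    X_exp = torch.exp(X)
    return X_exp / X_exp.sum(1, keepdim=True)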


# Define the model
def net(X):
    return softmax(torch.matmul(X.reshape((-1, W.shape[0])), W) + b)


# Define the loss function
def cross_entropy(y_hat, y):
    # y_hat[range(len(y_hat)), y] picks, for each row i, the predicted probability of the true class y[i]
    return - torch.log(y_hat[range(len(y_hat)), y])


# Classification accuracy
def accuracy(y_hat, y):
    """Count the number of correct predictions."""
    if len(y_hat.shape) > 1 and y_hat.shape[1] > 1:
        y_hat = y_hat.argmax(axis=1)
    cmp = y_hat.type(y.dtype) == y
    return float(cmp.type(y.dtype).sum())


def evaluate_accuracy(net, data_iter):
    """计算在制定数据集上模型的精度"""
    if isinstance(net, torch.nn.Module):
        net.eval()
    metric = d2l.Accumulator(2)
    with torch.no_grad():
        for X, y in data_iter:
            metric.add(accuracy(net(X), y), y.numel())
    return metric[0] / metric[1]


# Training
def train_epoch_ch03(net, train_iter, loss, updater):
    if isinstance(net, torch.nn.Module):
        net.train()
    # Sum of training loss, sum of correct predictions, number of examples
    metric = d2l.Accumulator(3)
    for X, y in train_iter:
        y_hat = net(X)
        l = loss(y_hat, y)
        if isinstance(updater, torch.optim.Optimizer):
            updater.zero_grad()
            l.mean().backward()
            updater.step()
        else:
            l.sum().backward()
            updater(X.shape[0])
        metric.add(float(l.sum()), accuracy(y_hat, y), y.numel())
    # Return average training loss and training accuracy
    return metric[0] / metric[2], metric[1] / metric[2]


def train_ch03(net, train_iter, test_iter, loss, num_epochs, updater):
    for epoch in range(num_epochs):
        train_loss, train_acc = train_epoch_ch03(net, train_iter, loss, updater)
        test_acc = evaluate_accuracy(net, test_iter)
        print(f'epoch {epoch + 1}, train_loss {train_loss:f}, train_acc {train_acc:f}, test_acc {test_acc:f}')
    assert train_loss < 0.5, train_loss
    assert train_acc <= 1 and train_acc > 0.7, train_acc
    assert test_acc <= 1 and test_acc > 0.7, test_acc


lr = 0.1


def updater(batch_size):
    return d2l.sgd([W, b], lr, batch_size)


num_epochs = 10
train_ch03(net, train_iter, test_iter, cross_entropy, num_epochs, updater)
