Distributed Multi-GPU Training (DDP) Pitfalls

I have recently been training the RT-DETR model from the yolov10 codebase for object detection, and ran into problems with multi-GPU (DDP) training.

Single-GPU training command (runs normally):

python main.py

Multi-GPU training command:

Multi-GPU training has to be launched through torch.distributed.run (the newer replacement for torch.distributed.launch), usually on a single node. CUDA_VISIBLE_DEVICES selects which GPU indices to use; it can be omitted if you specify the device inside main.py instead. --nproc_per_node is the number of GPUs per node.

python -m torch.distributed.run --nproc_per_node=3 main.py

CUDA_VISIBLE_DEVICES=0,6,7 python -m torch.distributed.run --nproc_per_node=3 main.py
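
For reference, torchrun is the console-script entry point for python -m torch.distributed.run in recent PyTorch releases, so the same launch can equivalently be written as:

CUDA_VISIBLE_DEVICES=0,6,7 torchrun --nproc_per_node=3 main.py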

However, the multi-GPU run fails with an error, and sometimes the training processes simply hang. The error output looks like this:

[rank0]: Traceback (most recent call last):
[rank0]:   File "/home/zyy23/yolov10/run_detr.py", line 5, in <module>
[rank0]:     model.train(pretrained=True,
[rank0]:   File "/home/zyy23/yolov10/ultralytics/engine/model.py", line 657, in train
[rank0]:     self.trainer.train()
[rank0]:   File "/home/zyy23/yolov10/ultralytics/engine/trainer.py", line 213, in train
[rank0]:     self._do_train(world_size)
[rank0]:   File "/home/zyy23/yolov10/ultralytics/engine/trainer.py", line 381, in _do_train
[rank0]:     self.loss, self.loss_items = self.model(batch)
[rank0]:   File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:   File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:   File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1632, in forward
[rank0]:     inputs, kwargs = self._pre_forward(*inputs, **kwargs)
[rank0]:   File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1523, in _pre_forward
[rank0]:     if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
[rank0]: RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by
[rank0]: making sure all `forward` function outputs participate in calculating loss.
[rank0]: If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
[rank0]: Parameters which did not receive grad for rank 0: model.28.dec_bbox_head.5.layers.2.bias, model.28.dec_bbox_head.5.layers.2.weight, model.28.dec_bbox_head.5.layers.1.bias, model.28.dec_bbox_head.5.layers.1.weight, model.28.dec_bbox_head.5.layers.0.bias, model.28.dec_bbox_head.5.layers.0.weight, model.28.dec_bbox_head.4.layers.2.bias, model.28.dec_bbox_head.4.layers.2.weight, model.28.dec_bbox_head.4.layers.1.bias, model.28.dec_bbox_head.4.layers.1.weight, model.28.dec_bbox_head.4.layers.0.bias, model.28.dec_bbox_head.4.layers.0.weight, model.28.dec_bbox_head.3.layers.2.bias, model.28.dec_bbox_head.3.layers.2.weight, model.28.dec_bbox_head.3.layers.1.bias, model.28.dec_bbox_head.3.layers.1.weight, model.28.dec_bbox_head.3.layers.0.bias, model.28.dec_bbox_head.3.layers.0.weight, model.28.dec_bbox_head.2.layers.2.bias, model.28.dec_bbox_head.2.layers.2.weight, model.28.dec_bbox_head.2.layers.1.bias, model.28.dec_bbox_head.2.layers.1.weight, model.28.dec_bbox_head.2.layers.0.bias, model.28.dec_bbox_head.2.layers.0.weight, model.28.dec_bbox_head.1.layers.2.bias, model.28.dec_bbox_head.1.layers.2.weight, model.28.dec_bbox_head.1.layers.1.bias, model.28.dec_bbox_head.1.layers.1.weight, model.28.dec_bbox_head.1.layers.0.bias, model.28.dec_bbox_head.1.layers.0.weight, model.28.dec_bbox_head.0.layers.2.bias, model.28.dec_bbox_head.0.layers.2.weight, model.28.dec_bbox_head.0.layers.1.bias, model.28.dec_bbox_head.0.layers.1.weight, model.28.dec_bbox_head.0.layers.0.bias, model.28.dec_bbox_head.0.layers.0.weight, model.28.enc_bbox_head.layers.2.bias, model.28.enc_bbox_head.layers.2.weight, model.28.enc_bbox_head.layers.1.bias, model.28.enc_bbox_head.layers.1.weight, model.28.enc_bbox_head.layers.0.bias, model.28.enc_bbox_head.layers.0.weight, model.28.denoising_class_embed.weight
[rank0]: Parameter indices which did not receive grad for rank 0: 510 521 522 523 524 525 526 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574
[rank1]:[E1122 21:12:02.018431947 ProcessGroupGloo.cpp:143] Rank 1 successfully reached monitoredBarrier, but received errors while waiting for send/recv from rank 0. Please check rank 0 logs for faulty rank.
[rank2]:[E1122 21:12:02.018445283 ProcessGroupGloo.cpp:143] Rank 2 successfully reached monitoredBarrier, but received errors while waiting for send/recv from rank 0. Please check rank 0 logs for faulty rank.
[rank1]: Traceback (most recent call last):
[rank1]:   File "/home/zyy23/yolov10/run_detr.py", line 5, in <module>
[rank1]:     model.train(pretrained=True,
[rank1]:   File "/home/zyy23/yolov10/ultralytics/engine/model.py", line 657, in train
[rank1]:     self.trainer.train()
[rank1]:   File "/home/zyy23/yolov10/ultralytics/engine/trainer.py", line 213, in train
[rank1]:     self._do_train(world_size)
[rank1]:   File "/home/zyy23/yolov10/ultralytics/engine/trainer.py", line 389, in _do_train
[rank1]:     self.scaler.scale(self.loss).backward()
[rank1]:   File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/_tensor.py", line 521, in backward
[rank1]:     torch.autograd.backward(
[rank1]:   File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/autograd/__init__.py", line 289, in backward
[rank1]:     _engine_run_backward(
[rank1]:   File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/autograd/graph.py", line 768, in _engine_run_backward
[rank1]:     return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[rank1]: RuntimeError: Rank 1 successfully reached monitoredBarrier, but received errors while waiting for send/recv from rank 0. Please check rank 0 logs for faulty rank.
[rank1]:  Original exception:
[rank1]: [../third_party/gloo/gloo/transport/tcp/pair.cc:534] Connection closed by peer [127.0.1.1]:27022
[rank2]: Traceback (most recent call last):
[rank2]:   File "/home/zyy23/yolov10/run_detr.py", line 5, in <module>
[rank2]:     model.train(pretrained=True,
[rank2]:   File "/home/zyy23/yolov10/ultralytics/engine/model.py", line 657, in train
[rank2]:     self.trainer.train()
[rank2]:   File "/home/zyy23/yolov10/ultralytics/engine/trainer.py", line 213, in train
[rank2]:     self._do_train(world_size)
[rank2]:   File "/home/zyy23/yolov10/ultralytics/engine/trainer.py", line 389, in _do_train
[rank2]:     self.scaler.scale(self.loss).backward()
[rank2]:   File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/_tensor.py", line 521, in backward
[rank2]:     torch.autograd.backward(
[rank2]:   File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/autograd/__init__.py", line 289, in backward
[rank2]:     _engine_run_backward(
[rank2]:   File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/autograd/graph.py", line 768, in _engine_run_backward
[rank2]:     return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[rank2]: RuntimeError: Rank 2 successfully reached monitoredBarrier, but received errors while waiting for send/recv from rank 0. Please check rank 0 logs for faulty rank.
[rank2]:  Original exception:
[rank2]: [../third_party/gloo/gloo/transport/tcp/pair.cc:534] Connection closed by peer [127.0.1.1]:27022
W1122 21:12:02.606069 139664836297920 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 1281666 closing signal SIGTERM
W1122 21:12:02.608416 139664836297920 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 1281667 closing signal SIGTERM
E1122 21:12:02.987694 139664836297920 torch/distributed/elastic/multiprocessing/api.py:833] failed (exitcode: 1) local_rank: 0 (pid: 1281665) of binary: /home/zyy23/anaconda3/envs/mypytorch_3.9/bin/python
Traceback (most recent call last):
  File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/distributed/run.py", line 905, in <module>
    main()
  File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 348, in wrapper
    return f(*args, **kwargs)
  File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/distributed/run.py", line 901, in main
    run(args)
  File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/distributed/run.py", line 892, in run
    elastic_launch(
  File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 133, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/zyy23/anaconda3/envs/mypytorch_3.9/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
run_detr.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-11-22_21:12:02
  host      : lab10
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 1281665)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

A RuntimeError occurred:

RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by making sure all `forward` function outputs participate in calculating loss. If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).

In plain terms: some parameters registered in the module never took part in computing the loss, so DDP's gradient reduction from the previous iteration never finished, and the next iteration cannot start. The suggested remedies are (1) passing find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel to enable unused-parameter detection, and (2) making sure every output of forward() participates in the loss.

Causes of the error:

  • A layer is defined in __init__() but never used in forward() (see the sketch after this list)

  • Outputs returned by forward() are not used in computing the loss / gradients

  • Parameters that never receive gradients are handed to the optimizer
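
Below is a minimal, self-contained sketch (a toy model, not RT-DETR) that typically reproduces the first case under DDP: self.unused is registered in __init__() but never called in forward(), so its gradient bucket never finishes reduction and the second iteration fails with the same RuntimeError. It assumes a launch such as python -m torch.distributed.run --nproc_per_node=2 toy_ddp.py on a machine with at least two GPUs; the file name toy_ddp.py is made up for illustration.

import os

import torch
import torch.distributed as dist
import torch.nn as nn


class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.used = nn.Linear(8, 8)
        self.unused = nn.Linear(8, 8)  # registered here but never called in forward()

    def forward(self, x):
        return self.used(x)


def main():
    dist.init_process_group("nccl")  # torch.distributed.run provides RANK/WORLD_SIZE/LOCAL_RANK
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = Toy().cuda()
    ddp = nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])
    opt = torch.optim.SGD(ddp.parameters(), lr=0.1)

    for _ in range(2):  # the second iteration raises "Expected to have finished reduction ..."
        opt.zero_grad()
        loss = ddp(torch.randn(4, 8, device="cuda")).sum()
        loss.backward()
        opt.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()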

There are two ways to fix this.
The first is to pass find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel. find_unused_parameters is a PyTorch option that controls how DDP handles gradient synchronization during distributed training.
self.model = nn.parallel.DistributedDataParallel(
    self.model, device_ids=[RANK], find_unused_parameters=True
)

What the find_unused_parameters option does:

Detects unused parameters: when set to True, PyTorch checks in every forward pass which parameters were not used. This is useful for models in which some parts are not triggered for certain inputs.

Reduces overhead around gradient reduction: by identifying and skipping unused parameters, DDP does not wait for gradients that will never arrive, which keeps gradient computation efficient. In large models or complex networks, and especially in distributed setups, this avoids unnecessary gradient synchronization.

Suits dynamic computation graphs: for architectures whose active submodules change from step to step, find_unused_parameters=True ensures every parameter is handled correctly.

The downside is extra per-iteration overhead: DDP has to work out which parameters did not take part in the loss, so it is best enabled only when needed, i.e. when the model genuinely has unused parameters.
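
Applied to the toy sketch above, fix #1 is a single extra keyword argument when wrapping the model; with it, the reducer marks self.unused as ready every iteration and training proceeds:

ddp = nn.parallel.DistributedDataParallel(
    model,
    device_ids=[local_rank],
    find_unused_parameters=True,  # tolerate parameters that receive no grad
)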

Because yolov10 (built on ultralytics) is so heavily wrapped, it took a long time to find where this call lives. I tried adding the argument at model initialization and on the command line, and neither worked. Eventually I found it in ultralytics/engine/trainer.py, inside the _setup_train function.

def _setup_train(self, world_size):
        """Builds dataloaders and optimizer on correct rank process."""

        # Model
        self.run_callbacks("on_pretrain_routine_start")
        ckpt = self.setup_model()
        self.model = self.model.to(self.device)
        self.set_model_attributes()

        # Freeze layers
        freeze_list = (
            self.args.freeze
            if isinstance(self.args.freeze, list)
            else range(self.args.freeze)
            if isinstance(self.args.freeze, int)
            else []
        )
        always_freeze_names = [".dfl"]  # always freeze these layers
        freeze_layer_names = [f"model.{x}." for x in freeze_list] + always_freeze_names
        for k, v in self.model.named_parameters():
            # v.register_hook(lambda x: torch.nan_to_num(x))  # NaN to 0 (commented for erratic training results)
            if any(x in k for x in freeze_layer_names):
                LOGGER.info(f"Freezing layer '{k}'")
                v.requires_grad = False
            elif not v.requires_grad and v.dtype.is_floating_point:  # only floating point Tensor can require gradients
                LOGGER.info(
                    f"WARNING ?? setting 'requires_grad=True' for frozen layer '{k}'. "
                    "See ultralytics.engine.trainer for customization of frozen layers."
                )
                v.requires_grad = True

        # Check AMP
        self.amp = torch.tensor(self.args.amp).to(self.device)  # True or False
        if self.amp and RANK in (-1, 0):  # Single-GPU and DDP
            callbacks_backup = callbacks.default_callbacks.copy()  # backup callbacks as check_amp() resets them
            self.amp = torch.tensor(check_amp(self.model), device=self.device)
            callbacks.default_callbacks = callbacks_backup  # restore callbacks
        if RANK > -1 and world_size > 1:  # DDP
            dist.broadcast(self.amp, src=0)  # broadcast the tensor from rank 0 to all other ranks (returns None)
        self.amp = bool(self.amp)  # as boolean
        self.scaler = torch.cuda.amp.GradScaler(enabled=self.amp)
        if world_size > 1:
            self.model = nn.parallel.DistributedDataParallel(self.model, device_ids=[RANK], find_unused_parameters=True)  # find_unused_parameters=True added here

The second approach is to set the environment variable TORCH_DISTRIBUTED_DEBUG to INFO or DETAIL, which prints, as part of this error, information about which specific parameters did not receive a gradient. The error output above does not contain the usual hint about this variable because we had already set it; with it enabled, you can see exactly which parameters received no gradient.
TORCH_DISTRIBUTED_DEBUG=DETAIL python -m torch.distributed.run --nproc_per_node=3 main.py
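
If changing the launch command is inconvenient, the same variable can in principle be set at the very top of the entry script (run_detr.py here), before anything from torch.distributed is initialized; this is a sketch of an alternative, not something the original run did:

import os

# Must execute before torch.distributed is initialized for the debug level to take effect.
os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL"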

Alternatively, the following snippet also shows which parameters were not updated:

            # Run after backward() in the training loop to list parameters whose grad is still None
            for name, param in self.model.named_parameters():
                if param.grad is None:
                    print("Parameter with no grad:")
                    print(name)

When you define layers with self.xxx in nn.Module's __init__(), the error is triggered if a layer's output is never used to compute the loss, or if the layer is never called at all. So carefully check forward() against __init__(): any module assigned to self in __init__() but not used in forward() should be commented out or removed (either use it or delete it).
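
Applying the "either use it or delete it" rule to the toy sketch above would look like this (the unused layer is commented out; alternatively, its output could be wired into the loss):

class ToyFixed(nn.Module):
    def __init__(self):
        super().__init__()
        self.used = nn.Linear(8, 8)
        # self.unused = nn.Linear(8, 8)  # removed: forward() never calls it

    def forward(self, x):
        return self.used(x)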
