[Paper Reproduction] DAVE

Website:

GitHub - jerpelhan/DAVE

After downloading, read the README file.

Open a new terminal and print the file tree, excluding hidden files:

Command: tree -I '.*'

.
├── LICENSE
├── README.md
├── demo.py
├── demo_zero.py
├── main.py
├── material
│   ├── 458.jpg
│   ├── 7.jpg
│   ├── 7707.jpg
│   ├── __init__.py
│   ├── arch.png
│   └── qualitative.png
├── models
│   ├── __init__.py
│   ├── backbone.py
│   ├── box_prediction.py
│   ├── dave.py
│   ├── dave_tr.py
│   ├── feat_comparison.py
│   ├── positional_encoding.py
│   ├── regression_head.py
│   └── transformer.py
├── scripts
│   ├── fscd_0_test.sh
│   ├── fscd_0shot_clip.sh
│   ├── fscd_1_test.sh
│   ├── fscd_lvis_test.sh
│   ├── fscd_lvis_unseen_test.sh
│   ├── fscd_multicat.sh
│   ├── fscd_test.sh
│   ├── train_det.sh
│   └── train_sim.sh
├── train_det.py
├── train_similarity.py
└── utils
    ├── __init__.py
    ├── arg_parser.py
    ├── data.py
    ├── data_lvis.py
    ├── eval.py
    ├── helpers.py
    └── losses.py

5 directories, 38 files

(1) Create a new environment to avoid package conflicts

Select the installed dave environment

(2) Install the pip dependencies

(3) Download the data files and model files, and make sure the paths are correct

(4) Run main.py

Error 1: AttributeError: module 'torch._C' has no attribute '_cuda_setDevice'

Change every GPU reference to CPU.

Change 1

Before:

After:

Change 2

Remove the parallel computation.

So far, all of the changes are in the evaluate functions of main.py.

The first modified function:

def evaluate(args):
    device = torch.device("cpu")

The second modified function:

def eval_0shot(args):
    print("0shot")
    if args.skip_test:
        return
    args.zero_shot = True
    device = torch.device("cpu")

The third modified function:

def eval_0shot_multicat(args):
    args.zero_shot = True
    device = torch.device("cpu")

The fourth modified function:

def evaluate_LVIS(args):
    device = torch.device("cpu")

The fifth modified function:

def evaluate_multicat(args):
    device = torch.device("cpu")
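Instead of hard-coding the CPU in every function, a device-agnostic pattern (my sketch, not code from the repo) would run unchanged on both a GPU server and a CPU-only Mac:

```python
import torch

# Fall back to CPU automatically when no CUDA device is available,
# so the same code works with and without a GPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.zeros(3, device=device)
print(x.device.type)
```

With this pattern, none of the five functions above would need per-machine edits.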

All device references (GPU → CPU) and the parallel computation have now been changed.

Error 2: FileNotFoundError: [Errno 2] No such file or directory: 'material/.pth'

Code:

if __name__ == '__main__':
    print("DAVE")
    parser = argparse.ArgumentParser('DAVE', parents=[get_argparser()])
    args = parser.parse_args()
    print(args)

Based on this code and the file structure, set a breakpoint on the parser line.

from utils.arg_parser import get_argparser

Download it and copy it in.

Step into and single-step through the code, then fix the file paths.

After the change, use relative paths:

Contents of the data file:

With the paths fixed, try running again.

It still fails: FileNotFoundError: [Errno 2] No such file or directory: 'material/.pth'

Set a breakpoint.

Debug again; working through the conditional statements that follow, single-step and print each branch result to locate the error:

Add a breakpoint; F9 toggles the active breakpoint, and Continue runs to that breakpoint.

The console throws warnings:

(1) A warning that the 'pretrained' parameter is deprecated

/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.

(2)
  warnings.warn(
/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=ResNet18_Weights.IMAGENET1K_V1`. You can also use `weights=ResNet18_Weights.DEFAULT` to get the most up-to-date weights.

(3)
  warnings.warn(msg)
/Users/dearr/Downloads/DAVE-master 3/main.py:35: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.

(4)
  torch.load(os.path.join(args.model_path, args.model_name + '.pth'))['model'], strict=False

(5) The Namespace printed by print(args):
Namespace(aux_weight=0.3, backbone='resnet18', backbone_lr=0, batch_size=4, count_loss_weight=0, d_s=1.0, d_t=3, data_path='data', dataset='fsc147', det_model_name='DAVE', det_train=False, detection_loss_weight=0.01, dropout=0.1, egv=0.132, emb_dim=256, epochs=200, eval_multicat=False, fcos_pred_size=512, i_thr=0.55, image_size=512, kernel_dim=3, lr=0.0001, lr_drop=200, m_s=0.0, max_grad_norm=0.1, min_count_loss_weight=0, model_name='', model_path='material/', norm_s=False, normalized_l2=False, num_dec_layers=3, num_enc_layers=3, num_heads=8, num_objects=3, num_workers=12, orig_dmaps=False, pre_norm=False, prompt_shot=False, reduction=8, resume_training=False, s_t=0.008, skip_cars=False, skip_test=False, skip_train=False, swav_backbone=False, task='fscd147', tiling_p=0.5, unseen=False, use_appearance=False, use_objectness=False, use_query_pos_emb=False, weight_decay=0.0001, zero_shot=False)

Set breakpoints and step through:

Locate the failing line and print-debug: model_name='' is why it fails. But what exactly should it be changed to?

Look at the project's public homepage: the command given for the demo runs demo.py correctly, so copy its arguments verbatim.
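Why the empty model_name produces exactly 'material/.pth' can be checked in isolation (a sketch using the same path-building expression main.py uses):

```python
import os

model_path = 'material/'  # the model_path default from arg_parser.py
model_name = ''           # the model_name default from arg_parser.py

# With an empty model_name, the checkpoint path degenerates to 'material/.pth',
# which is exactly the file the FileNotFoundError complains about.
checkpoint = os.path.join(model_path, model_name + '.pth')
print(checkpoint)  # material/.pth
```

So the fix is not a path problem at all: the script needs --model_name passed on the command line.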

python demo.py --skip_train --model_name DAVE_3_shot --model_path material --backbone resnet50 --swav_backbone --reduction 8 --num_enc_layers 3 --num_dec_layers 3 --kernel_dim 3 --emb_dim 256 --num_objects 3 --num_workers 8 --use_query_pos_emb --use_objectness --use_appearance --batch_size 1 --pre_norm

第三个错误:RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

Ask Copilot; as expected, torch.load is what fails, and adding one parameter fixes it.

    # model.load_state_dict(
    #     torch.load(os.path.join(args.model_path, args.model_name + '.pth'))['model'], strict=False
    # )  # the path error was already fixed; for the device error, change to:
    model.load_state_dict(
        torch.load(os.path.join(args.model_path, args.model_name + '.pth'), map_location=torch.device('cpu'))['model'], strict=False
    )
    # pretrained_dict_feat = {k.split("feat_comp.")[1]: v for k, v in
    #                         torch.load(os.path.join(args.model_path, 'verification.pth'))[
    #                             'model'].items() if 'feat_comp' in k}
    # torch.load fails the same way here; add the same parameter:
    pretrained_dict_feat = {k.split("feat_comp.")[1]: v for k, v in
                            torch.load(os.path.join(args.model_path, 'verification.pth'), map_location=torch.device('cpu'))[
                                'model'].items() if 'feat_comp' in k}
    model.module.feat_comp.load_state_dict(pretrained_dict_feat)

Error 4: AttributeError: 'COTR' object has no attribute 'module'

This is a parallelization artifact; I am not running parallel computation, so drop .module:

    # model.module.feat_comp.load_state_dict(pretrained_dict_feat)  # change to:
    model.feat_comp.load_state_dict(pretrained_dict_feat)
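A defensive pattern (my sketch, not the repo's code) works whether or not the model is wrapped in DataParallel, so the same line survives on both GPU and CPU setups:

```python
import torch.nn as nn

model = nn.Linear(2, 2)  # stand-in for the COTR model; not wrapped here

# nn.DataParallel stores the real network under .module; a plain module
# does not, which is what raises the AttributeError. Unwrap conditionally:
core = model.module if isinstance(model, nn.DataParallel) else model
print(core is model)  # True, since nothing was wrapped
```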

Error 5: RuntimeError: stack expects each tensor to be equal size, but got [1, 400, 248] at entry 0 and [1, 512, 512] at entry 1, followed by: libc++abi: terminating due to uncaught exception of type std::__1::system_error: Broken pipe

Possible angles of attack:

(1) Ask Copilot

(2) Add breakpoints and debug

(3) Read the console output carefully

(4) Check the project's public discussion board for similar errors

GitHub - jerpelhan/DAVE

[Issue] roundup - CSDN blog

A write-up of the questions from the project's public discussion board

(1) Copilot says it is a tensor-size problem

(2) Add a breakpoint, but where? The console output gives a rough idea

Console output analysis: Console output analysis - CSDN blog

First, fix a warning:

    # parser.add_argument('--num_workers', default=12, type=int)  # change to:
    parser.add_argument('--num_workers', default=8, type=int)

Reason:

Warning: This DataLoader will create 12 worker processes in total. Our suggested max number of worker in current system is 8 (`cpuset` is not taken into account), which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.

Ignore all warnings:

import warnings
warnings.filterwarnings("ignore")

In-program breakpoint:

import pdb; pdb.set_trace()

The problem is definitely in the for loop, but I genuinely don't know how to fix it.

(dave) (base) dearr@dearrdeMacBook-Air DAVE-master 3 % /Users/dearr/anaconda3/envs/dave/bin/python "/Users/dearr/Downloads/DAVE-master 3/main.py"
val
1286
loading annotations into memory...
Done (t=0.17s)
creating index...
index created!
Traceback (most recent call last):
  File "/Users/dearr/Downloads/DAVE-master 3/main.py", line 147, in <module>
    evaluate(args)
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/Users/dearr/Downloads/DAVE-master 3/main.py", line 73, in evaluate
    test_loader_1 = [next(iter(test_loader)) for _ in range(2)]
  File "/Users/dearr/Downloads/DAVE-master 3/main.py", line 73, in <listcomp>
    test_loader_1 = [next(iter(test_loader)) for _ in range(2)]
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 630, in __next__
    data = self._next_data()
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1344, in _next_data
    return self._process_data(data)
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1370, in _process_data
    data.reraise()
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/_utils.py", line 706, in reraise
    raise exception
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 309, in _worker_loop
    data = fetcher.fetch(index)  # type: ignore[possibly-undefined]
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 55, in fetch
    return self.collate_fn(data)
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 317, in default_collate
    return collate(batch, collate_fn_map=default_collate_fn_map)
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 174, in collate
    return [collate(samples, collate_fn_map=collate_fn_map) for samples in transposed]  # Backwards compatibility.
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 174, in <listcomp>
    return [collate(samples, collate_fn_map=collate_fn_map) for samples in transposed]  # Backwards compatibility.
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 142, in collate
    return collate_fn_map[elem_type](batch, collate_fn_map=collate_fn_map)
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 214, in collate_tensor_fn
    return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [1, 400, 248] at entry 0 and [1, 512, 512] at entry 1

Error 6: RuntimeError: Caught RuntimeError in DataLoader worker process 0.

Try to fix this outer error first:

RuntimeError: Caught RuntimeError in DataLoader worker process 0.

Set the number of worker processes to 1:

    parser.add_argument('--num_workers', default=1, type=int)

Output:

(dave) (base) dearr@dearrdeMacBook-Air DAVE-master 3 % /Users/dearr/anaconda3/envs/dave/bin/python "/Users/dearr/Downloads/DAVE-master 3/main.py"
val
1286
loading annotations into memory...
Done (t=0.16s)
creating index...
index created!
Traceback (most recent call last):
  File "/Users/dearr/Downloads/DAVE-master 3/main.py", line 147, in <module>
    evaluate(args)
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/Users/dearr/Downloads/DAVE-master 3/main.py", line 73, in evaluate
    test_loader_1 = [next(iter(test_loader)) for _ in range(2)]
  File "/Users/dearr/Downloads/DAVE-master 3/main.py", line 73, in <listcomp>
    test_loader_1 = [next(iter(test_loader)) for _ in range(2)]
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 630, in __next__
    data = self._next_data()
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1344, in _next_data
    return self._process_data(data)
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1370, in _process_data
    data.reraise()
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/_utils.py", line 706, in reraise
    raise exception
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 309, in _worker_loop
    data = fetcher.fetch(index)  # type: ignore[possibly-undefined]
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 55, in fetch
    return self.collate_fn(data)
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 317, in default_collate
    return collate(batch, collate_fn_map=default_collate_fn_map)
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 174, in collate
    return [collate(samples, collate_fn_map=collate_fn_map) for samples in transposed]  # Backwards compatibility.
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 174, in <listcomp>
    return [collate(samples, collate_fn_map=collate_fn_map) for samples in transposed]  # Backwards compatibility.
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 142, in collate
    return collate_fn_map[elem_type](batch, collate_fn_map=collate_fn_map)
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 214, in collate_tensor_fn
    return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [1, 400, 248] at entry 0 and [1, 512, 512] at entry 1

Changed it; no effect. Ask around.

Setting it to 0 gives:

(dave) (base) dearr@dearrdeMacBook-Air DAVE-master 3 % /Users/dearr/anaconda3/envs/dave/bin/python "/Users/dearr/Downloads/DAVE-master 3/main.py"
val
1286
loading annotations into memory...
Done (t=0.16s)
creating index...
index created!
Traceback (most recent call last):
  File "/Users/dearr/Downloads/DAVE-master 3/main.py", line 147, in <module>
    evaluate(args)
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/Users/dearr/Downloads/DAVE-master 3/main.py", line 73, in evaluate
    test_loader_1 = [next(iter(test_loader)) for _ in range(2)]
  File "/Users/dearr/Downloads/DAVE-master 3/main.py", line 73, in <listcomp>
    test_loader_1 = [next(iter(test_loader)) for _ in range(2)]
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 630, in __next__
    data = self._next_data()
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 673, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 55, in fetch
    return self.collate_fn(data)
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 317, in default_collate
    return collate(batch, collate_fn_map=default_collate_fn_map)
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 174, in collate
    return [collate(samples, collate_fn_map=collate_fn_map) for samples in transposed]  # Backwards compatibility.
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 174, in <listcomp>
    return [collate(samples, collate_fn_map=collate_fn_map) for samples in transposed]  # Backwards compatibility.
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 142, in collate
    return collate_fn_map[elem_type](batch, collate_fn_map=collate_fn_map)
  File "/Users/dearr/anaconda3/envs/dave/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 214, in collate_tensor_fn
    return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [1, 400, 248] at entry 0 and [1, 512, 512] at entry 1

After setting it to 0, this particular error is indeed gone:

RuntimeError: Caught RuntimeError in DataLoader worker process 0.

Back to fixing Error 5.

I'm stuck: with breakpoints I can't step into the loop. I know the loop is the problem, but no matter where I set the breakpoints, I can't get inside the code.

Analyzing the console output: where does the 1286 come from? After asking, it turns out to be a print(len(...)) in data.py.
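The failure mode itself is easy to reproduce with a toy dataset (my sketch, not DAVE's actual data pipeline): the default collate function calls torch.stack, which requires every sample in a batch to have the same shape, and the val split contains images of different sizes.

```python
import torch
from torch.utils.data import DataLoader, Dataset

class ToySet(Dataset):
    """Mimics the val split: two 'images' with different spatial sizes."""
    def __init__(self):
        self.items = [torch.zeros(1, 400, 248), torch.zeros(1, 512, 512)]
    def __len__(self):
        return len(self.items)
    def __getitem__(self, idx):
        return self.items[idx]

# batch_size=2 raises the same "stack expects each tensor to be equal size"
# error; batch_size=1 never stacks across samples, so iteration succeeds.
loader = DataLoader(ToySet(), batch_size=1)
print([tuple(b.shape) for b in loader])
```

This suggests the fixes to try: run with --batch_size 1 (as the demo command does), or pad/resize samples to a common size in a custom collate_fn.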
