Introduction
This article shows how to print and inspect a YOLOv8 model's network-structure configuration information, the detailed information of each layer, and model statistics such as parameter count and computational cost (GFLOPs). The same method also works for inspecting the structure and parameters of modified models, and can be used to compare parameter counts, computational cost, and so on between different models.
Viewing the structure information in the configuration file
Before every YOLOv8 training run starts, the corresponding model structure information is printed to the console. But how can you print this network-structure configuration information directly yourself? Reading the source code, I found that this information is printed by the `DetectionModel` class (in `ultralytics/nn/tasks.py`). So we can use this class directly, pass in our own model configuration file, and run it. The code is as follows:
class DetectionModel(BaseModel):
    """YOLOv8 detection model."""

    def __init__(self, cfg="yolov8n.yaml", ch=3, nc=None, verbose=True):  # model, input channels, number of classes
        """Initialize the YOLOv8 detection model with the given config and parameters."""
        super().__init__()
        self.yaml = cfg if isinstance(cfg, dict) else yaml_model_load(cfg)  # cfg dict

        # Define model
        ch = self.yaml["ch"] = self.yaml.get("ch", ch)  # input channels
        if nc and nc != self.yaml["nc"]:
            LOGGER.info(f"Overriding model.yaml nc={self.yaml['nc']} with nc={nc}")
            self.yaml["nc"] = nc  # override YAML value
        self.model, self.save = parse_model(deepcopy(self.yaml), ch=ch, verbose=verbose)  # model, savelist
        self.names = {i: f"{i}" for i in range(self.yaml["nc"])}  # default names dict
        self.inplace = self.yaml.get("inplace", True)

        # Build strides
        m = self.model[-1]  # Detect()
        if isinstance(m, Detect):  # includes all Detect subclasses like Segment, Pose, OBB, WorldDetect
            s = 256  # 2x min stride
            m.inplace = self.inplace
            forward = lambda x: self.forward(x)[0] if isinstance(m, (Segment, Pose, OBB)) else self.forward(x)
            m.stride = torch.tensor([s / x.shape[-2] for x in forward(torch.zeros(1, ch, s, s))])  # forward
            self.stride = m.stride
            m.bias_init()  # only run once
        else:
            self.stride = torch.Tensor([32])  # default stride for i.e. RTDETR

        # Init weights, biases
        initialize_weights(self)
        if verbose:
            self.info()
            LOGGER.info("")
# The class lives in ultralytics/nn/tasks.py
from ultralytics.nn.tasks import DetectionModel

# Path to the model's network-structure configuration file
yaml_path = 'ultralytics/cfg/models/v8/yolov8n.yaml'
# Path to a modified model structure
# yaml_path = 'ultralytics/cfg/models/v8/yolov8n-CBAM.yaml'
# Pass the configuration file as cfg; nc is the number of detection classes
DetectionModel(cfg=yaml_path, nc=5)
Running this code prints the network-structure table (the same table you see before training starts).
Notes on the printed result: the model configuration file has 23 rows in total; `params` is the parameter count of each layer, `module` is the structure name of each layer, and `arguments` lists the arguments passed to that layer's structure. The last `summary` row gives the overall figures: the model has 225 layers in total, 3,157,200 parameters, and a computational cost of 8.9 GFLOPs.
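Since `DetectionModel` is an ordinary `torch.nn.Module`, you can also verify the parameter figure yourself. A minimal sketch, assuming the `yaml_path` and the `DetectionModel` import from the snippet above:
# Minimal sketch: verify the 'parameters' figure from the summary row.
# Assumes yaml_path and the DetectionModel import from the previous snippet.
model = DetectionModel(cfg=yaml_path, nc=5, verbose=False)  # verbose=False suppresses the printout
n_params = sum(p.numel() for p in model.parameters())
print(n_params)  # should match the parameter count in the summary row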
Viewing the detailed network structure
The above only prints the information for each row of the network configuration file. If you want to inspect each layer in finer detail, you can call `model.info()` directly. The code is as follows:
Load a trained model or a network-structure configuration file:
from ultralytics import YOLO
# Load a trained model or a network-structure configuration file
model = YOLO('best.pt')
# model = YOLO('ultralytics/cfg/models/v8/yolov8n.yaml')
Print the model's parameter information:
# Print the model's parameter information
print(model.info())
The result is the one-line model summary plus the returned tuple, the same as the last two lines of the detailed output shown below.
Print the structure information of every layer:
Simply add the `detailed` argument to the call above.
print(model.info(detailed=True))
The printed information is as follows:
layer name gradient parameters shape mu sigma dtype
0 model.0.conv.weight False 432 [16, 3, 3, 3] -0.00404 0.153 torch.float32
1 model.0.bn.weight False 16 [16] 2.97 1.87 torch.float32
2 model.0.bn.bias False 16 [16] 0.244 4.17 torch.float32
3 model.1.conv.weight False 4608 [32, 16, 3, 3] -3.74e-05 0.0646 torch.float32
4 model.1.bn.weight False 32 [32] 5.01 1.12 torch.float32
5 model.1.bn.bias False 32 [32] 0.936 1.51 torch.float32
6 model.2.cv1.conv.weight False 1024 [32, 32, 1, 1] -0.0103 0.0918 torch.float32
7 model.2.cv1.bn.weight False 32 [32] 2.21 1.39 torch.float32
8 model.2.cv1.bn.bias False 32 [32] 0.803 1.39 torch.float32
9 model.2.cv2.conv.weight False 1536 [32, 48, 1, 1] -0.00279 0.0831 torch.float32
10 model.2.cv2.bn.weight False 32 [32] 1.21 0.576 torch.float32
11 model.2.cv2.bn.bias False 32 [32] 0.542 1.13 torch.float32
12 model.2.m.0.cv1.conv.weight False 2304 [16, 16, 3, 3] -0.000871 0.0574 torch.float32
13 model.2.m.0.cv1.bn.weight False 16 [16] 2.36 0.713 torch.float32
14 model.2.m.0.cv1.bn.bias False 16 [16] 1.02 1.71 torch.float32
15 model.2.m.0.cv2.conv.weight False 2304 [16, 16, 3, 3] -0.00071 0.05 torch.float32
16 model.2.m.0.cv2.bn.weight False 16 [16] 2.11 0.519 torch.float32
17 model.2.m.0.cv2.bn.bias False 16 [16] 0.92 1.9 torch.float32
18 model.3.conv.weight False 18432 [64, 32, 3, 3] -0.00137 0.0347 torch.float32
19 model.3.bn.weight False 64 [64] 0.818 0.206 torch.float32
20 model.3.bn.bias False 64 [64] 0.249 0.936 torch.float32
21 model.4.cv1.conv.weight False 4096 [64, 64, 1, 1] -0.00235 0.0553 torch.float32
22 model.4.cv1.bn.weight False 64 [64] 0.824 0.359 torch.float32
23 model.4.cv1.bn.bias False 64 [64] 0.26 0.779 torch.float32
24 model.4.cv2.conv.weight False 8192 [64, 128, 1, 1] -0.0019 0.0474 torch.float32
25 model.4.cv2.bn.weight False 64 [64] 0.718 0.21 torch.float32
26 model.4.cv2.bn.bias False 64 [64] -0.05 0.754 torch.float32
27 model.4.m.0.cv1.conv.weight False 9216 [32, 32, 3, 3] -0.00178 0.0368 torch.float32
28 model.4.m.0.cv1.bn.weight False 32 [32] 0.751 0.154 torch.float32
29 model.4.m.0.cv1.bn.bias False 32 [32] -0.263 0.629 torch.float32
30 model.4.m.0.cv2.conv.weight False 9216 [32, 32, 3, 3] -0.00156 0.0346 torch.float32
31 model.4.m.0.cv2.bn.weight False 32 [32] 0.769 0.203 torch.float32
32 model.4.m.0.cv2.bn.bias False 32 [32] 0.183 0.694 torch.float32
33 model.4.m.1.cv1.conv.weight False 9216 [32, 32, 3, 3] -0.00209 0.0339 torch.float32
34 model.4.m.1.cv1.bn.weight False 32 [32] 0.686 0.106 torch.float32
35 model.4.m.1.cv1.bn.bias False 32 [32] -0.82 0.469 torch.float32
36 model.4.m.1.cv2.conv.weight False 9216 [32, 32, 3, 3] -0.00253 0.0309 torch.float32
37 model.4.m.1.cv2.bn.weight False 32 [32] 1.05 0.242 torch.float32
38 model.4.m.1.cv2.bn.bias False 32 [32] 0.523 0.868 torch.float32
39 model.5.conv.weight False 73728 [128, 64, 3, 3] -0.000612 0.0224 torch.float32
40 model.5.bn.weight False 128 [128] 0.821 0.231 torch.float32
41 model.5.bn.bias False 128 [128] -0.235 0.678 torch.float32
42 model.6.cv1.conv.weight False 16384 [128, 128, 1, 1] -0.00307 0.0347 torch.float32
43 model.6.cv1.bn.weight False 128 [128] 0.895 0.398 torch.float32
44 model.6.cv1.bn.bias False 128 [128] -0.134 0.746 torch.float32
45 model.6.cv2.conv.weight False 32768 [128, 256, 1, 1] -0.00204 0.0303 torch.float32
46 model.6.cv2.bn.weight False 128 [128] 0.773 0.205 torch.float32
47 model.6.cv2.bn.bias False 128 [128] -0.584 0.755 torch.float32
48 model.6.m.0.cv1.conv.weight False 36864 [64, 64, 3, 3] -0.00186 0.0232 torch.float32
49 model.6.m.0.cv1.bn.weight False 64 [64] 1.06 0.176 torch.float32
50 model.6.m.0.cv1.bn.bias False 64 [64] -0.915 0.598 torch.float32
51 model.6.m.0.cv2.conv.weight False 36864 [64, 64, 3, 3] -0.00196 0.0221 torch.float32
52 model.6.m.0.cv2.bn.weight False 64 [64] 0.833 0.25 torch.float32
53 model.6.m.0.cv2.bn.bias False 64 [64] -0.0806 0.526 torch.float32
54 model.6.m.1.cv1.conv.weight False 36864 [64, 64, 3, 3] -0.00194 0.0225 torch.float32
55 model.6.m.1.cv1.bn.weight False 64 [64] 0.916 0.178 torch.float32
56 model.6.m.1.cv1.bn.bias False 64 [64] -1.13 0.732 torch.float32
57 model.6.m.1.cv2.conv.weight False 36864 [64, 64, 3, 3] -0.00147 0.0213 torch.float32
58 model.6.m.1.cv2.bn.weight False 64 [64] 1.18 0.278 torch.float32
59 model.6.m.1.cv2.bn.bias False 64 [64] 0.206 0.767 torch.float32
60 model.7.conv.weight False 294912 [256, 128, 3, 3] -0.000707 0.0142 torch.float32
61 model.7.bn.weight False 256 [256] 0.961 0.197 torch.float32
62 model.7.bn.bias False 256 [256] -0.718 0.451 torch.float32
63 model.8.cv1.conv.weight False 65536 [256, 256, 1, 1] -0.00323 0.0224 torch.float32
64 model.8.cv1.bn.weight False 256 [256] 1.14 0.33 torch.float32
65 model.8.cv1.bn.bias False 256 [256] -0.734 0.573 torch.float32
66 model.8.cv2.conv.weight False 98304 [256, 384, 1, 1] -0.00194 0.0198 torch.float32
67 model.8.cv2.bn.weight False 256 [256] 1.19 0.223 torch.float32
68 model.8.cv2.bn.bias False 256 [256] -0.689 0.486 torch.float32
69 model.8.m.0.cv1.conv.weight False 147456 [128, 128, 3, 3] -0.0011 0.0152 torch.float32
70 model.8.m.0.cv1.bn.weight False 128 [128] 1.14 0.205 torch.float32
71 model.8.m.0.cv1.bn.bias False 128 [128] -0.821 0.72 torch.float32
72 model.8.m.0.cv2.conv.weight False 147456 [128, 128, 3, 3] -0.00112 0.0151 torch.float32
73 model.8.m.0.cv2.bn.weight False 128 [128] 1.65 0.369 torch.float32
74 model.8.m.0.cv2.bn.bias False 128 [128] -0.2 0.606 torch.float32
75 model.9.cv1.conv.weight False 32768 [128, 256, 1, 1] -0.00452 0.0257 torch.float32
76 model.9.cv1.bn.weight False 128 [128] 0.926 0.251 torch.float32
77 model.9.cv1.bn.bias False 128 [128] 1.43 0.654 torch.float32
78 model.9.cv2.conv.weight False 131072 [256, 512, 1, 1] -0.000201 0.018 torch.float32
79 model.9.cv2.bn.weight False 256 [256] 0.936 0.257 torch.float32
80 model.9.cv2.bn.bias False 256 [256] -1.27 0.828 torch.float32
81 model.12.cv1.conv.weight False 49152 [128, 384, 1, 1] -0.00243 0.0255 torch.float32
82 model.12.cv1.bn.weight False 128 [128] 0.87 0.224 torch.float32
83 model.12.cv1.bn.bias False 128 [128] -0.373 0.821 torch.float32
84 model.12.cv2.conv.weight False 24576 [128, 192, 1, 1] -0.00462 0.0284 torch.float32
85 model.12.cv2.bn.weight False 128 [128] 0.73 0.201 torch.float32
86 model.12.cv2.bn.bias False 128 [128] -0.281 0.661 torch.float32
87 model.12.m.0.cv1.conv.weight False 36864 [64, 64, 3, 3] -0.0022 0.0224 torch.float32
88 model.12.m.0.cv1.bn.weight False 64 [64] 0.832 0.134 torch.float32
89 model.12.m.0.cv1.bn.bias False 64 [64] -0.841 0.611 torch.float32
90 model.12.m.0.cv2.conv.weight False 36864 [64, 64, 3, 3] -0.000787 0.0213 torch.float32
91 model.12.m.0.cv2.bn.weight False 64 [64] 0.824 0.182 torch.float32
92 model.12.m.0.cv2.bn.bias False 64 [64] -0.107 0.636 torch.float32
93 model.15.cv1.conv.weight False 12288 [64, 192, 1, 1] -0.00189 0.0321 torch.float32
94 model.15.cv1.bn.weight False 64 [64] 0.536 0.215 torch.float32
95 model.15.cv1.bn.bias False 64 [64] 0.16 0.967 torch.float32
96 model.15.cv2.conv.weight False 6144 [64, 96, 1, 1] -0.00124 0.0364 torch.float32
97 model.15.cv2.bn.weight False 64 [64] 0.561 0.275 torch.float32
98 model.15.cv2.bn.bias False 64 [64] 0.127 0.942 torch.float32
99 model.15.m.0.cv1.conv.weight False 9216 [32, 32, 3, 3] -0.00236 0.0306 torch.float32
100 model.15.m.0.cv1.bn.weight False 32 [32] 0.663 0.14 torch.float32
101 model.15.m.0.cv1.bn.bias False 32 [32] -0.567 0.583 torch.float32
102 model.15.m.0.cv2.conv.weight False 9216 [32, 32, 3, 3] -0.00152 0.0289 torch.float32
103 model.15.m.0.cv2.bn.weight False 32 [32] 0.741 0.161 torch.float32
104 model.15.m.0.cv2.bn.bias False 32 [32] 0.21 0.786 torch.float32
105 model.16.conv.weight False 36864 [64, 64, 3, 3] -0.000977 0.0176 torch.float32
106 model.16.bn.weight False 64 [64] 0.842 0.216 torch.float32
107 model.16.bn.bias False 64 [64] -0.389 0.592 torch.float32
108 model.18.cv1.conv.weight False 24576 [128, 192, 1, 1] -0.00217 0.0241 torch.float32
109 model.18.cv1.bn.weight False 128 [128] 0.876 0.208 torch.float32
110 model.18.cv1.bn.bias False 128 [128] -0.307 0.611 torch.float32
111 model.18.cv2.conv.weight False 24576 [128, 192, 1, 1] -0.00254 0.0237 torch.float32
112 model.18.cv2.bn.weight False 128 [128] 0.726 0.297 torch.float32
113 model.18.cv2.bn.bias False 128 [128] -0.434 0.761 torch.float32
114 model.18.m.0.cv1.conv.weight False 36864 [64, 64, 3, 3] -0.00257 0.0203 torch.float32
115 model.18.m.0.cv1.bn.weight False 64 [64] 0.798 0.175 torch.float32
116 model.18.m.0.cv1.bn.bias False 64 [64] -0.773 0.493 torch.float32
117 model.18.m.0.cv2.conv.weight False 36864 [64, 64, 3, 3] -0.0014 0.0195 torch.float32
118 model.18.m.0.cv2.bn.weight False 64 [64] 1.19 0.285 torch.float32
119 model.18.m.0.cv2.bn.bias False 64 [64] -0.07 0.668 torch.float32
120 model.19.conv.weight False 147456 [128, 128, 3, 3] -0.000819 0.0118 torch.float32
121 model.19.bn.weight False 128 [128] 0.876 0.21 torch.float32
122 model.19.bn.bias False 128 [128] -0.508 0.365 torch.float32
123 model.21.cv1.conv.weight False 98304 [256, 384, 1, 1] -0.00166 0.0159 torch.float32
124 model.21.cv1.bn.weight False 256 [256] 1.06 0.225 torch.float32
125 model.21.cv1.bn.bias False 256 [256] -0.592 0.579 torch.float32
126 model.21.cv2.conv.weight False 98304 [256, 384, 1, 1] -0.00257 0.0148 torch.float32
127 model.21.cv2.bn.weight False 256 [256] 1.03 0.317 torch.float32
128 model.21.cv2.bn.bias False 256 [256] -0.893 0.491 torch.float32
129 model.21.m.0.cv1.conv.weight False 147456 [128, 128, 3, 3] -0.00142 0.0129 torch.float32
130 model.21.m.0.cv1.bn.weight False 128 [128] 1.03 0.223 torch.float32
131 model.21.m.0.cv1.bn.bias False 128 [128] -0.96 0.618 torch.float32
132 model.21.m.0.cv2.conv.weight False 147456 [128, 128, 3, 3] -0.00124 0.0128 torch.float32
133 model.21.m.0.cv2.bn.weight False 128 [128] 1.35 0.253 torch.float32
134 model.21.m.0.cv2.bn.bias False 128 [128] -0.553 0.516 torch.float32
135 model.22.cv2.0.0.conv.weight False 36864 [64, 64, 3, 3] -0.00193 0.0181 torch.float32
136 model.22.cv2.0.0.bn.weight False 64 [64] 0.88 0.351 torch.float32
137 model.22.cv2.0.0.bn.bias False 64 [64] -0.492 0.707 torch.float32
138 model.22.cv2.0.1.conv.weight False 36864 [64, 64, 3, 3] -0.00163 0.0178 torch.float32
139 model.22.cv2.0.1.bn.weight False 64 [64] 2.41 1.04 torch.float32
140 model.22.cv2.0.1.bn.bias False 64 [64] 0.922 0.757 torch.float32
141 model.22.cv2.0.2.weight False 4096 [64, 64, 1, 1] -0.00542 0.0553 torch.float32
142 model.22.cv2.0.2.bias False 64 [64] 0.997 1.39 torch.float32
143 model.22.cv2.1.0.conv.weight False 73728 [64, 128, 3, 3] -0.0017 0.014 torch.float32
144 model.22.cv2.1.0.bn.weight False 64 [64] 1.28 0.485 torch.float32
145 model.22.cv2.1.0.bn.bias False 64 [64] -0.389 0.68 torch.float32
146 model.22.cv2.1.1.conv.weight False 36864 [64, 64, 3, 3] -0.00128 0.0167 torch.float32
147 model.22.cv2.1.1.bn.weight False 64 [64] 2.56 1 torch.float32
148 model.22.cv2.1.1.bn.bias False 64 [64] 0.855 0.564 torch.float32
149 model.22.cv2.1.2.weight False 4096 [64, 64, 1, 1] -0.00756 0.0594 torch.float32
150 model.22.cv2.1.2.bias False 64 [64] 0.992 1.32 torch.float32
151 model.22.cv2.2.0.conv.weight False 147456 [64, 256, 3, 3] -0.000553 0.0115 torch.float32
152 model.22.cv2.2.0.bn.weight False 64 [64] 1.55 0.413 torch.float32
153 model.22.cv2.2.0.bn.bias False 64 [64] -0.259 0.62 torch.float32
154 model.22.cv2.2.1.conv.weight False 36864 [64, 64, 3, 3] -0.00065 0.0149 torch.float32
155 model.22.cv2.2.1.bn.weight False 64 [64] 2.95 0.852 torch.float32
156 model.22.cv2.2.1.bn.bias False 64 [64] 0.822 0.556 torch.float32
157 model.22.cv2.2.2.weight False 4096 [64, 64, 1, 1] -0.00674 0.0609 torch.float32
158 model.22.cv2.2.2.bias False 64 [64] 0.99 1.33 torch.float32
159 model.22.cv3.0.0.conv.weight False 36864 [64, 64, 3, 3] -0.00138 0.0275 torch.float32
160 model.22.cv3.0.0.bn.weight False 64 [64] 1 0.00984 torch.float32
161 model.22.cv3.0.0.bn.bias False 64 [64] 0.00709 0.0156 torch.float32
162 model.22.cv3.0.1.conv.weight False 36864 [64, 64, 3, 3] -0.00549 0.0281 torch.float32
163 model.22.cv3.0.1.bn.weight False 64 [64] 1.01 0.0248 torch.float32
164 model.22.cv3.0.1.bn.bias False 64 [64] 0.0356 0.0581 torch.float32
165 model.22.cv3.0.2.weight False 64 [1, 64, 1, 1] -0.0297 0.0872 torch.float32
166 model.22.cv3.0.2.bias False 1 [1] -7.2 nan torch.float32
167 model.22.cv3.1.0.conv.weight False 73728 [64, 128, 3, 3] -0.00178 0.0217 torch.float32
168 model.22.cv3.1.0.bn.weight False 64 [64] 1 0.0165 torch.float32
169 model.22.cv3.1.0.bn.bias False 64 [64] 0.00244 0.0185 torch.float32
170 model.22.cv3.1.1.conv.weight False 36864 [64, 64, 3, 3] -0.00481 0.0278 torch.float32
171 model.22.cv3.1.1.bn.weight False 64 [64] 1.02 0.0239 torch.float32
172 model.22.cv3.1.1.bn.bias False 64 [64] 0.0411 0.0545 torch.float32
173 model.22.cv3.1.2.weight False 64 [1, 64, 1, 1] -0.0327 0.104 torch.float32
174 model.22.cv3.1.2.bias False 1 [1] -5.8 nan torch.float32
175 model.22.cv3.2.0.conv.weight False 147456 [64, 256, 3, 3] -0.000784 0.0158 torch.float32
176 model.22.cv3.2.0.bn.weight False 64 [64] 1 0.00871 torch.float32
177 model.22.cv3.2.0.bn.bias False 64 [64] 0.00025 0.0116 torch.float32
178 model.22.cv3.2.1.conv.weight False 36864 [64, 64, 3, 3] -0.00326 0.0272 torch.float32
179 model.22.cv3.2.1.bn.weight False 64 [64] 1.06 0.0776 torch.float32
180 model.22.cv3.2.1.bn.bias False 64 [64] 0.048 0.0988 torch.float32
181 model.22.cv3.2.2.weight False 64 [1, 64, 1, 1] -0.0661 0.101 torch.float32
182 model.22.cv3.2.2.bias False 1 [1] -4.48 nan torch.float32
183 model.22.dfl.conv.weight False 16 [1, 16, 1, 1] 7.5 4.76 torch.float32
Model summary: 225 layers, 3011043 parameters, 0 gradients, 8.2 GFLOPs
(225, 3011043, 0, 8.1941504)
As you can see, this prints the name, parameter count, and shape of every layer in the network.
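Note that besides logging the table, `model.info()` also returns a tuple `(layers, parameters, gradients, GFLOPs)`, which is what the final line above shows. A minimal sketch of using that return value:
# model.info() returns (n_layers, n_params, n_gradients, GFLOPs) in addition to printing
n_layers, n_params, n_gradients, gflops = model.info()
print(f'{n_params} parameters, {gflops:.2f} GFLOPs')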
The method in this article works the same way for the other model structures in the `ultralytics` framework, and can be used to compare parameter counts, computational cost, and so on between different models.
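For example, a minimal comparison sketch looping over two stock configuration files (the `yolov8s.yaml` path is assumed to resolve the same way as `yolov8n.yaml`; swap in your own modified YAML as needed):
from ultralytics import YOLO

# Compare layer counts, parameter counts, and GFLOPs across model configs
for cfg in ('ultralytics/cfg/models/v8/yolov8n.yaml',
            'ultralytics/cfg/models/v8/yolov8s.yaml'):
    m = YOLO(cfg)
    n_layers, n_params, n_gradients, gflops = m.info()
    print(f'{cfg}: {n_layers} layers, {n_params:,} parameters, {gflops:.1f} GFLOPs')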