Object Detection and Keypoint Detection from Scratch (Part 2): Training an RTMDet Model for Glue Detection
- I. Config file walkthrough
- II. Start training
- III. Dataset analysis
- IV. ncnn deployment
Other posts in this series:
- Object Detection and Keypoint Detection from Scratch (Part 1): Annotating a Dataset with labelme
- Object Detection and Keypoint Detection from Scratch (Part 3): Training an RTMPose Model for Glue
In Part 1 (annotating your own dataset with labelme), the dataset was already labeled with keypoints and detection boxes, and the labelme2coco script merged all of the labelme JSON files into two COCO-format JSON files: train_coco.json and val_coco.json. To train an RTMDet model, the next step is to write a config file.
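Before writing the config it is worth sanity-checking those two annotation files. Below is a minimal sketch using pycocotools (assumed to be installed); the paths match the data_root used in the config later on.

# Quick sanity check of the COCO files produced by labelme2coco.
from pycocotools.coco import COCO

for ann_file in ('data/glue_134_Keypoint/train_coco.json',
                 'data/glue_134_Keypoint/val_coco.json'):
    coco = COCO(ann_file)
    cats = coco.loadCats(coco.getCatIds())
    print(ann_file)
    print('  categories :', [c['name'] for c in cats])   # expect ['glue']
    print('  images     :', len(coco.getImgIds()))
    print('  annotations:', len(coco.getAnnIds()))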
I. Config file walkthrough
1. Dataset type and path: the dataset is in COCO format. metainfo lists the box classes; since there is only one class, glue, NUM_CLASSES is 1. Note the trailing comma after the class name in metainfo, which keeps it a one-element tuple.
# Dataset type and paths
dataset_type = 'CocoDataset'
data_root = 'data/glue_134_Keypoint/'
metainfo = {'classes': ('glue',)}
NUM_CLASSES = len(metainfo['classes'])
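The trailing comma is not cosmetic: without it the parentheses are plain grouping, metainfo['classes'] becomes a string, and NUM_CLASSES would count characters instead of classes. A quick illustration:

# Why the trailing comma matters when there is a single class.
wrong = ('glue')     # just a string   -> len(wrong) == 4
right = ('glue',)    # 1-element tuple -> len(right) == 1
print(len(wrong), len(right))   # 4 1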
2. Load the backbone pretrained weights and the RTMDet-tiny pretrained weights, and set the tiny-variant scaling factors.
# RTMDet-tiny detector checkpoint and CSPNeXt-tiny ImageNet checkpoint for the backbone
load_from = 'https://download.openmmlab.com/mmdetection/v3.0/rtmdet/rtmdet_tiny_8xb32-300e_coco/rtmdet_tiny_8xb32-300e_coco_20220902_112414-78e30dcc.pth'
backbone_pretrain = 'https://download.openmmlab.com/mmdetection/v3.0/rtmdet/cspnext_rsb_pretrain/cspnext-tiny_imagenet_600e.pth'
deepen_factor = 0.167          # depth multiplier that shrinks CSPNeXt to the tiny variant
widen_factor = 0.375           # channel-width multiplier for the tiny variant
in_channels = [96, 192, 384]   # neck input channels after applying widen_factor
neck_out_channels = 96
num_csp_blocks = 1
exp_on_reg = False
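Where does in_channels = [96, 192, 384] come from? The CSPNeXt 'P5' backbone exports three feature maps whose base widths are 256, 512 and 1024 channels, and widen_factor scales them down for the tiny variant. A quick check (the base widths are the CSPNeXt P5 defaults):

# Derive the RTMDet-tiny neck input channels from the base CSPNeXt-P5 widths.
base_channels = (256, 512, 1024)   # C3/C4/C5 output widths of CSPNeXt arch='P5'
widen_factor = 0.375
print([int(c * widen_factor) for c in base_channels])   # [96, 192, 384]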
3. Training parameter settings, such as the number of epochs, batch size, learning rate, and validation interval.
MAX_EPOCHS = 200
TRAIN_BATCH_SIZE = 8
VAL_BATCH_SIZE = 4
stage2_num_epochs = 20   # during the last 20 epochs validation runs every epoch (see dynamic_intervals below)
base_lr = 0.004
VAL_INTERVAL = 5         # evaluate (and save checkpoints) every 5 epochs
4. default_runtime, i.e. the default runtime settings, can be found in default_runtime.py in the configs folder. The defaults differ between MM-series frameworks (e.g. default_scope = 'mmdet'); you can either inherit that .py file or copy its contents into this config.
default_scope = 'mmdet'
default_hooks = dict(
timer=dict(type='IterTimerHook'),
logger=dict(type='LoggerHook', interval=1),
param_scheduler=dict(type='ParamSchedulerHook'),
checkpoint=dict(type='CheckpointHook', interval=10, max_keep_ckpts=2, save_best='coco/bbox_mAP'),
    # other save_best options: 'auto', 'coco/bbox_mAP_50', 'coco/bbox_mAP_75', 'coco/bbox_mAP_s'
sampler_seed=dict(type='DistSamplerSeedHook'),
visualization=dict(type='DetVisualizationHook'))
env_cfg = dict(
cudnn_benchmark=False,
mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
dist_cfg=dict(backend='nccl'))
vis_backends = [dict(type='LocalVisBackend')]
visualizer = dict(
type='DetLocalVisualizer',
vis_backends=[dict(type='LocalVisBackend')],
name='visualizer')
log_processor = dict(type='LogProcessor', window_size=50, by_epoch=True)
log_level = 'INFO'
load_from = None   # note: this is the default_runtime default; make sure it does not override the RTMDet-tiny checkpoint URL set earlier in this config
resume = False
5. Training loop, learning-rate schedule, and optimizer configuration.
train_cfg = dict(
type='EpochBasedTrainLoop',
max_epochs=MAX_EPOCHS,
val_interval=VAL_INTERVAL,
dynamic_intervals=[(MAX_EPOCHS - stage2_num_epochs, 1)])
val_cfg = dict(type='ValLoop')
test_cfg = dict(type='TestLoop')
# Learning-rate schedule
param_scheduler = [
dict(
type='LinearLR', start_factor=1e-05, by_epoch=False, begin=0,
end=1000),
    dict(
        type='CosineAnnealingLR',
        eta_min=0.0002,
        begin=150,   # note: begin/end/T_max look like the 300-epoch defaults; with MAX_EPOCHS = 200
        end=300,     # the cosine phase is cut short - begin=MAX_EPOCHS // 2, end=MAX_EPOCHS,
        T_max=150,   # T_max=MAX_EPOCHS // 2 would keep the schedule consistent
        by_epoch=True,
        convert_to_iter_based=True)
]
# Optimizer
optim_wrapper = dict(
type='OptimWrapper',
optimizer=dict(type='AdamW', lr=base_lr, weight_decay=0.05),
paramwise_cfg=dict(
norm_decay_mult=0, bias_decay_mult=0, bypass_duplicate=True))
auto_scale_lr = dict(enable=False, base_batch_size=16)
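auto_scale_lr is disabled here. If enabled, MMEngine rescales the learning rate linearly with the actual total batch size relative to base_batch_size; the rule itself is just a ratio (a sketch of the arithmetic, not MMEngine's code):

# Linear LR scaling rule applied when auto_scale_lr.enable = True (arithmetic sketch).
base_lr = 0.004
base_batch_size = 16      # batch size the base_lr was tuned for
actual_batch_size = 8     # TRAIN_BATCH_SIZE * number of GPUs (single GPU here)
scaled_lr = base_lr * actual_batch_size / base_batch_size
print(scaled_lr)          # 0.002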
6. Data-processing pipelines: training-time augmentation and test-time preprocessing.
# DataLoader
backend_args = None
train_pipeline = [
dict(type='LoadImageFromFile', backend_args=None),
dict(type='LoadAnnotations', with_bbox=True),
dict(
type='CachedMosaic',
img_scale=(640, 640),
pad_val=114.0,
max_cached_images=20,
random_pop=False),
dict(
type='RandomResize',
scale=(1280, 1280),
ratio_range=(0.5, 2.0),
keep_ratio=True),
dict(type='RandomCrop', crop_size=(640, 640)),
dict(type='YOLOXHSVRandomAug'),
dict(type='RandomFlip', prob=0.5),
dict(type='Pad', size=(640, 640), pad_val=dict(img=(114, 114, 114))),
dict(
type='CachedMixUp',
img_scale=(640, 640),
ratio_range=(1.0, 1.0),
max_cached_images=10,
random_pop=False,
pad_val=(114, 114, 114),
prob=0.5),
dict(type='PackDetInputs')
]
test_pipeline = [
dict(type='LoadImageFromFile', backend_args=None),
dict(type='Resize', scale=(640, 640), keep_ratio=True),
dict(type='Pad', size=(640, 640), pad_val=dict(img=(114, 114, 114))),
dict(
type='PackDetInputs',
meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
'scale_factor'))
]
7. Dataloaders: load the images and annotations and preprocess them with the corresponding pipeline.
train_dataloader = dict(
batch_size=TRAIN_BATCH_SIZE,
num_workers=4,
persistent_workers=True,
sampler=dict(type='DefaultSampler', shuffle=True),
batch_sampler=None,
dataset=dict(
type='CocoDataset',
data_root=data_root,
metainfo=metainfo,
ann_file='train_coco.json',
data_prefix=dict(img='images/'),
filter_cfg=dict(filter_empty_gt=True, min_size=32),
pipeline=train_pipeline,
backend_args=None),
pin_memory=True)
val_dataloader = dict(
batch_size=VAL_BATCH_SIZE,
num_workers=2,
persistent_workers=True,
drop_last=False,
sampler=dict(type='DefaultSampler', shuffle=False),
dataset=dict(
type='CocoDataset',
data_root=data_root,
metainfo=metainfo,
ann_file='val_coco.json',
data_prefix=dict(img='images/'),
test_mode=True,
pipeline=test_pipeline,
backend_args=None))
test_dataloader = val_dataloader
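To confirm that the paths, annotation file names, and metainfo actually line up before launching a full run, you can build just the training dataset from the config. A minimal sketch, assuming MMDetection 3.x and that this config is saved as data/glue_134_Keypoint/rtmdet_tiny_glue.py:

# Build only the training dataset from the config to verify paths and annotations.
from mmengine.config import Config
from mmdet.registry import DATASETS
from mmdet.utils import register_all_modules

register_all_modules()   # register mmdet datasets and transforms
cfg = Config.fromfile('data/glue_134_Keypoint/rtmdet_tiny_glue.py')
dataset = DATASETS.build(cfg.train_dataloader.dataset)
print('samples:', len(dataset))
print('classes:', dataset.metainfo['classes'])   # expect ('glue',)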
8. Define the model architecture: backbone + neck + head.
# Model architecture
model = dict(
type='RTMDet',
data_preprocessor=dict(
type='DetDataPreprocessor',
mean=[103.53, 116.28, 123.675],
std=[57.375, 57.12, 58.395],
bgr_to_rgb=False,
batch_augments=None),
backbone=dict(
type='CSPNeXt',
arch='P5',
expand_ratio=0.5,
deepen_factor=deepen_factor,
widen_factor=widen_factor,
channel_attention=True,
norm_cfg=dict(type='SyncBN'),
act_cfg=dict(type='SiLU', inplace=True),
init_cfg=dict(
type='Pretrained',
prefix='backbone.',
checkpoint=backbone_pretrain
)),
neck=dict(
type='CSPNeXtPAFPN',
in_channels=in_channels,
out_channels=neck_out_channels,
num_csp_blocks=num_csp_blocks,
expand_ratio=0.5,
norm_cfg=dict(type='SyncBN'),
act_cfg=dict(type='SiLU', inplace=True)),
bbox_head=dict(
type='RTMDetSepBNHead',
num_classes=NUM_CLASSES,
in_channels=neck_out_channels,
stacked_convs=2,
feat_channels=neck_out_channels,
anchor_generator=dict(
type='MlvlPointGenerator', offset=0, strides=[8, 16, 32]),
bbox_coder=dict(type='DistancePointBBoxCoder'),
loss_cls=dict(
type='QualityFocalLoss',
use_sigmoid=True,
beta=2.0,
loss_weight=1.0),
loss_bbox=dict(type='GIoULoss', loss_weight=2.0),
with_objectness=False,
exp_on_reg=exp_on_reg,
share_conv=True,
pred_kernel_size=1,
norm_cfg=dict(type='SyncBN'),
act_cfg=dict(type='SiLU', inplace=True)),
train_cfg=dict(
assigner=dict(type='DynamicSoftLabelAssigner', topk=13),
allowed_border=-1,
pos_weight=-1,
debug=False),
test_cfg=dict(
nms_pre=30000,
min_bbox_size=0,
score_thr=0.001,
nms=dict(type='nms', iou_threshold=0.65),
max_per_img=300))
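It can also be worth building the model once to make sure the channel settings match and to see how small the tiny variant is. A rough sketch, again assuming MMDetection 3.x:

# Build the RTMDet-tiny model from the config and count its parameters.
from mmengine.config import Config
from mmdet.registry import MODELS
from mmdet.utils import register_all_modules

register_all_modules()
cfg = Config.fromfile('data/glue_134_Keypoint/rtmdet_tiny_glue.py')
model = MODELS.build(cfg.model)
n_params = sum(p.numel() for p in model.parameters())
print(f'parameters: {n_params / 1e6:.2f} M')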
II. Start training
1. Launch training:
python tools/train.py data/glue_134_Keypoint/rtmdet_tiny_glue.py
Training results (in the COCO evaluation, -1.000 means there are no ground-truth boxes in that area range):
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.719
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.483
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.766
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.766
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.766
2. Test the trained model on an image:
python demo/image_demo.py data/glue_134_Keypoint/test_image/test.png data/glue_134_Keypoint/rtmdet_tiny_glue.py --weights work_dirs/rtmdet_tiny_glue/best_coco_bbox_mAP_epoch_180.pth --device cpu
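The same check can also be done from Python with the high-level inference API instead of the demo script (a sketch assuming MMDetection 3.x; the config, checkpoint, and image paths are the ones used above):

# Run inference with the trained RTMDet model via the mmdet Python API.
from mmdet.apis import init_detector, inference_detector

config = 'data/glue_134_Keypoint/rtmdet_tiny_glue.py'
checkpoint = 'work_dirs/rtmdet_tiny_glue/best_coco_bbox_mAP_epoch_180.pth'
model = init_detector(config, checkpoint, device='cpu')

result = inference_detector(model, 'data/glue_134_Keypoint/test_image/test.png')
instances = result.pred_instances
keep = instances.scores > 0.3        # confidence threshold
print(instances.bboxes[keep])        # detected glue boxes as (x1, y1, x2, y2)
print(instances.scores[keep])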
3. Visualize the training process
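With the LocalVisBackend configured above, the training scalars are written to work_dirs/rtmdet_tiny_glue/&lt;timestamp&gt;/vis_data/scalars.json, one JSON record per line. A rough sketch for plotting the loss curve from that file (the timestamped folder name below is a placeholder; substitute the one from your own run):

# Plot the training loss from the scalars.json written by LocalVisBackend.
import json
import matplotlib.pyplot as plt

run_dir = 'work_dirs/rtmdet_tiny_glue/20231001_120000'   # placeholder timestamp; use your own run's folder
steps, losses = [], []
with open(f'{run_dir}/vis_data/scalars.json') as f:
    for line in f:
        record = json.loads(line)
        if 'loss' in record:              # skip validation-only records
            steps.append(record['step'])
            losses.append(record['loss'])

plt.plot(steps, losses)
plt.xlabel('iteration')
plt.ylabel('total loss')
plt.savefig('loss_curve.png')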
4. Because the glue instances in the annotated dataset are all small objects, large (close-up) targets are not detected, as shown below:
III. Dataset analysis
1. Visualize a sample of the images
Box annotations: distribution of box center positions
Box annotations: distribution of box widths and heights
Clearly this is all small-object detection.
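The two distribution plots above can be reproduced directly from train_coco.json; below is a rough sketch using pycocotools and matplotlib (figure styling is up to you):

# Plot box-center and box width/height distributions from the COCO annotations.
import matplotlib.pyplot as plt
from pycocotools.coco import COCO

coco = COCO('data/glue_134_Keypoint/train_coco.json')
anns = coco.loadAnns(coco.getAnnIds())

# COCO boxes are stored as (x, y, w, h) with (x, y) the top-left corner.
cx = [a['bbox'][0] + a['bbox'][2] / 2 for a in anns]
cy = [a['bbox'][1] + a['bbox'][3] / 2 for a in anns]
w = [a['bbox'][2] for a in anns]
h = [a['bbox'][3] for a in anns]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.scatter(cx, cy, s=4)
ax1.set_title('box center positions')
ax2.scatter(w, h, s=4)
ax2.set_title('box width vs. height')
fig.savefig('bbox_distributions.png')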
IV. ncnn deployment
Online model conversion: Deploee
Upload the files to complete the online conversion.