Sometimes we need to blur sensitive information in a video, such as faces or license plates.
Sometimes we also need to blur or mask the entire frame, as in this example.
Below are several ways to do this.
1. Via nvosd
deepstream-test1 is the simplest DeepStream sample. It runs an object-detection model, and its complete pipeline is "file-source -> h264-parser -> nvh264-decoder -> pgie -> nvvidconv -> nvosd -> video-renderer".
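The snippets in this section all go into osd_sink_pad_buffer_probe. For reference, deepstream-test1 attaches that probe to the sink pad of the nvdsosd element roughly as follows (a short sketch following the sample; the nvosd variable and the pad name come from the sample code):

/* Attach the buffer probe to the sink pad of nvdsosd (as in deepstream-test1) */
GstPad *osd_sink_pad = gst_element_get_static_pad (nvosd, "sink");
gst_pad_add_probe (osd_sink_pad, GST_PAD_PROBE_TYPE_BUFFER,
    osd_sink_pad_buffer_probe, NULL, NULL);
gst_object_unref (osd_sink_pad);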
If you want to blur all objects, add the following code in osd_sink_pad_buffer_probe.
for (l_obj = frame_meta->obj_meta_list; l_obj != NULL;
     l_obj = l_obj->next) {
    obj_meta = (NvDsObjectMeta *) (l_obj->data);
    if (obj_meta->class_id == PGIE_CLASS_ID_VEHICLE) {
        vehicle_count++;
        num_rects++;
    }
    if (obj_meta->class_id == PGIE_CLASS_ID_PERSON) {
        person_count++;
        num_rects++;
    }
    /* Newly added: draw a semi-transparent black background over every object */
    obj_meta->rect_params.has_bg_color = 1;
    obj_meta->rect_params.bg_color.red = 0.0;
    obj_meta->rect_params.bg_color.green = 0.0;
    obj_meta->rect_params.bg_color.blue = 0.0;
    obj_meta->rect_params.bg_color.alpha = 0.5;
    /* End of newly added code */
}
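For context, this loop sits inside the standard per-frame iteration of the probe; a minimal sketch of the enclosing structure (following deepstream-test1) looks like this:

NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (GST_BUFFER (info->data));
NvDsMetaList *l_frame = NULL, *l_obj = NULL;
for (l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (l_frame->data);
    /* ... the object loop shown above goes here ... */
}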
The result is shown below: all detected objects are masked in black. The mask color can be changed through red, green, and blue, and how strongly the object is hidden through alpha; setting alpha to 1 gives a fully opaque black box.
If you want to mask the whole frame, add the following code in osd_sink_pad_buffer_probe.
display_meta = nvds_acquire_display_meta_from_pool (batch_meta);
NvOSD_RectParams *rect_params = &display_meta->rect_params[0];
display_meta->num_rects = 1;
/* One rectangle covering the whole frame */
rect_params->left = 0;
rect_params->top = 0;
rect_params->width = frame_meta->pipeline_width;
rect_params->height = frame_meta->pipeline_height;
printf ("w:%f,h:%f\n", rect_params->width, rect_params->height);
/* Solid black background: alpha = 1.0 means fully opaque */
rect_params->has_bg_color = 1;
rect_params->bg_color.red = 0.0;
rect_params->bg_color.green = 0.0;
rect_params->bg_color.blue = 0.0;
rect_params->bg_color.alpha = 1.0;
/* Optional yellow border around the frame */
rect_params->border_width = 3;
rect_params->border_color = (NvOSD_ColorParams) {1.0, 1.0, 0.0, 1.0};
nvds_add_display_meta_to_frame (frame_meta, display_meta);
The result is shown below. There is a small issue here: the whole frame is filled black, but the object labels are still displayed. The osd plugin is open source, and its code shows why: the rectangles are drawn first, and the labels are drawn afterwards.
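If the labels should be hidden as well, one option is to clear each object's label text in the same probe. This is only a sketch (it is not part of the sample) and assumes no downstream element still needs the text; nvdsosd skips drawing a label whose display_text is NULL:

/* Sketch: hide the object labels so nothing is drawn over the black frame */
for (l_obj = frame_meta->obj_meta_list; l_obj != NULL; l_obj = l_obj->next) {
    obj_meta = (NvDsObjectMeta *) (l_obj->data);
    if (obj_meta->text_params.display_text) {
        g_free (obj_meta->text_params.display_text);
        obj_meta->text_params.display_text = NULL;
    }
    obj_meta->rect_params.border_width = 0;  /* also drop the bounding-box border */
}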
2. Via the dsexample plugin
First, the dsexample plugin has to be rebuilt, because it does not use OpenCV by default. Go to the /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-dsexample directory and rebuild the plugin following its README; the main steps are:
$ sudo apt-get install libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev \
libopencv-dev
# set WITH_OPENCV?=1 in the Makefile
$ make && make install
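Note: depending on the DeepStream release, the Makefile may also require the CUDA version to be exported before building, e.g. export CUDA_VER=11.4; check the plugin README for the exact value.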
No code changes are needed; test with the following commands:
$ cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test1
$ gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! nvinfer config-file-path=./dstest1_pgie_config.txt ! nvvideoconvert nvbuf-memory-type=1 ! 'video/x-raw(memory:NVMM),format=RGBA' ! dsexample full-frame=FALSE blur-objects=TRUE ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvdsosd ! nvvideoconvert ! nveglglessink
The result of running this is as follows:
The source code uses OpenCV's Gaussian blur; of course, it can also be changed to a solid fill. Dumping the input tensor of the sgie shows that it is filled as well, so the fill here is an in-place modification of the frame buffer.
// GaussianBlur (in_mat (crop_rect), in_mat (crop_rect), cv::Size (15, 15), 4);
/* Solid fill instead of blur; the color must be a cv::Scalar, and thickness -1 means filled */
cv::rectangle (in_mat, cv::Point (crop_rect_params->left, crop_rect_params->top),
    cv::Point (crop_rect_params->left + crop_rect_params->width,
        crop_rect_params->top + crop_rect_params->height),
    cv::Scalar (255, 0, 0), -1);
After recompiling, the new test result is as follows: