Complex Systems and Complexity Science, 2023, Vol. 20, Issue (3): 103-110    DOI: 10.13306/j.1672-3813.2023.03.014
Multi-task Sensing Algorithm for Driverless Vehicle Based on Feature Fusion
SUN Chuanlong1, ZHAO Hong1, CUI Xiangyu2, MU Liang1, XU Fuliang1, LU Laiwei1
1. College of Mechanical and Electrical Engineering, Qingdao University, Qingdao 266071, China;
2. Hisense Laoshan R&D Center, Qingdao 266104, China
Full text: PDF (4909 KB)
Abstract: To improve the utilization of hardware resources in a driverless vehicle's perception system, a multi-task perception algorithm based on feature fusion is constructed. An improved CSPDarknet53 serves as the backbone network of the model, and multi-scale features are extracted and fused through a feature fusion network and a feature fusion module. Taking the detection of 7 common road objects and pixel-level segmentation of the drivable area as example tasks, the multi-task model DaSNet (Detection and Segmentation Net) is designed for training and testing. To compare model performance, the BDD100K dataset is used to train YOLOv5s, Faster R-CNN, and U-Net, and the models are compared on mAP, Dice coefficient, detection speed, and other performance indicators. The results show that the mAP of the DaSNet multi-task model is 0.5% and 4.2% higher than that of YOLOv5s and Faster R-CNN, respectively, and a detection speed of 121 FPS is achieved on an RTX 2080Ti GPU. Compared with U-Net, the Dice values for segmenting priority and non-priority drivable areas are 4.4% and 6.8% higher, a clear improvement.
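The Dice coefficient reported for the drivable-area segmentation comparison measures the overlap between a predicted mask and the ground-truth mask. A minimal sketch of the metric, assuming binary NumPy masks; the 4×4 toy arrays are illustrative, not taken from the paper:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks standing in for drivable-area predictions.
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[1, 1, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(round(dice_coefficient(pred, target), 3))  # → 0.857 (i.e., 6/7)
```

A perfect prediction yields a Dice value of 1; the small `eps` term keeps the ratio defined when both masks are empty.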
Key words: driverless vehicle; multi-task; feature fusion; road object detection; drivable area segmentation
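The mAP figures used to compare the detectors rest on the intersection-over-union (IoU) between predicted and ground-truth bounding boxes. A minimal sketch of that underlying computation, assuming boxes in (x1, y1, x2, y2) corner format; the example boxes are hypothetical:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2) corners."""
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(box_iou((0.0, 0.0, 2.0, 2.0), (1.0, 1.0, 3.0, 3.0)))  # → 1/7 ≈ 0.143
```

In a typical mAP pipeline, a detection counts as a true positive when its IoU with an unmatched ground-truth box of the same class exceeds a threshold (commonly 0.5), and average precision is then computed per class from the resulting precision-recall curve.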
Received: 2021-11-07      Published: 2023-10-08
CLC number: TP391
Funding: Shandong Provincial Key R&D Program (2018GGX105004); Qingdao Science and Technology Plan for People's Livelihood (196188nsh)
Corresponding author: ZHAO Hong (b. 1973), female, from Nanyang, Henan; Ph.D., associate professor. Research interests: vehicle energy saving and emission reduction, and new-energy technology.
About the author: SUN Chuanlong (b. 1997), male, from Zibo, Shandong; master's student. Research interests: applications of deep learning and computer vision in autonomous driving.
Cite this article:
SUN Chuanlong, ZHAO Hong, CUI Xiangyu, MU Liang, XU Fuliang, LU Laiwei. Multi-task Sensing Algorithm for Driverless Vehicle Based on Feature Fusion. Complex Systems and Complexity Science, 2023, 20(3): 103-110.
Link to this article:
https://fzkx.qdu.edu.cn/CN/10.13306/j.1672-3813.2023.03.014      或      https://fzkx.qdu.edu.cn/CN/Y2023/V20/I3/103