Abstract: To improve the utilization of hardware resources in a driverless vehicle perception system, a multi-task perception algorithm based on feature fusion is constructed. An improved CSPDarknet53 serves as the backbone network of the model, and multi-scale features are extracted and fused by constructing a feature fusion network and a feature fusion module. Taking the detection of 7 common road objects and pixel-level segmentation of the drivable area as examples, a multi-task network, DaSNet (Detection and Segmentation Net), is designed, trained, and tested. To compare model performance, the BDD100K dataset is used to train the YOLOv5s, Faster R-CNN, and U-Net models, and a comparative analysis is carried out on mAP, Dice coefficient, detection speed, and other performance indicators. The results show that the mAP of the multi-task DaSNet model is 0.5% and 4.2% higher than that of YOLOv5s and Faster R-CNN, respectively, and a detection speed of 121 FPS is achieved on an RTX 2080Ti GPU. Compared with the U-Net network, the Dice values for segmentation of the priority and non-priority drivable areas are 4.4% and 6.8% higher, respectively, an obvious improvement.
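To make the shared-backbone idea and the evaluation metric concrete, the following is a minimal sketch, not the authors' DaSNet: a single backbone feeds both a detection head and a drivable-area segmentation head, and a Dice coefficient function of the form 2|A∩B|/(|A|+|B|) is used to score segmentation masks. All layer sizes, class counts, and names here are illustrative assumptions rather than the architecture reported in the paper.

```python
# Minimal multi-task sketch (illustrative only; not the authors' DaSNet).
import torch
import torch.nn as nn

class SharedBackboneMultiTask(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        # Stand-in backbone; the paper uses an improved CSPDarknet53.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Detection branch: per-cell class scores plus 4 box offsets (assumed layout).
        self.det_head = nn.Conv2d(64, num_classes + 4, 1)
        # Segmentation branch: per-pixel logits for priority / non-priority
        # drivable area and background (3 classes assumed), upsampled to input size.
        self.seg_head = nn.Sequential(
            nn.Conv2d(64, 3, 1),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        feats = self.backbone(x)  # shared features reused by both tasks
        return self.det_head(feats), self.seg_head(feats)

def dice_coefficient(pred_mask, true_mask, eps=1e-6):
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = pred_mask.float().flatten()
    true = true_mask.float().flatten()
    inter = (pred * true).sum()
    return (2 * inter + eps) / (pred.sum() + true.sum() + eps)

if __name__ == "__main__":
    model = SharedBackboneMultiTask()
    det_out, seg_out = model(torch.randn(1, 3, 256, 256))
    print(det_out.shape, seg_out.shape)  # detection grid and full-resolution mask logits
```

The point of the sketch is the resource-sharing argument from the abstract: both task heads consume the same backbone features, so a second task adds only a small head rather than a second full network.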
SUN Chuanlong, ZHAO Hong, CUI Xiangyu, MU Liang, XU Fuliang, LU Laiwei. Multi-task Sensing Algorithm for Driverless Vehicle Based on Feature Fusion. Complex Systems and Complexity Science, 2023, 20(3): 103-110.