Authors: Zhongyu Yang, Chen Shen, Wei Shao, Tengfei Xing, Runbo Hu, Pengfei Xu, Hua Chai, Ruini Xue
Affiliations: [1] School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; [2] Didi Chuxing, Beijing 100081, China
Source: Computational Visual Media, 2024, Issue 4, pp. 753-769 (17 pages)
Funding: Supported by the National Natural Science Foundation of China (No. U23A6007).
Abstract: Despite recent advances in lane detection methods, scenarios with limited or no visual clues of lanes due to factors such as lighting conditions and occlusion remain challenging and crucial for automated driving. Moreover, current lane representations require complex post-processing and struggle with specific instances. Inspired by the DETR architecture, we propose LDTR, a transformer-based model to address these issues. Lanes are modeled with a novel anchor-chain, regarding a lane as a whole from the beginning, which enables LDTR to handle special lanes inherently. To enhance lane instance perception, LDTR incorporates a novel multi-referenced deformable attention module to distribute attention around the object. Additionally, LDTR incorporates two line IoU algorithms to improve convergence efficiency and employs a Gaussian heatmap auxiliary branch to enhance model representation capability during training. To evaluate lane detection models, we rely on Fréchet distance, parameterized F1-score, and additional synthetic metrics. Experimental results demonstrate that LDTR achieves state-of-the-art performance on well-known datasets.
Keywords: transformer; lane detection; anchor-chain
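For reference, the abstract evaluates lane predictions with the Fréchet distance between polylines. The sketch below is a minimal, generic implementation of the standard discrete Fréchet distance (Eiter–Mannila dynamic programming), not code from the paper; the lane polylines in the usage example are illustrative placeholders.

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between two polylines P and Q,
    each an (N, 2) array-like of ordered 2D points."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    n, m = len(P), len(Q)
    # Pairwise Euclidean distances between points on the two curves.
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    ca = np.empty((n, m))
    ca[0, 0] = d[0, 0]
    for i in range(1, n):                      # first column
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):                      # first row
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, n):                      # interior cells
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           d[i, j])
    return ca[n - 1, m - 1]

# Example: compare a hypothetical predicted lane polyline with ground truth.
pred = [(0, 0.0), (1, 0.1), (2, 0.0), (3, -0.1)]
gt   = [(0, 0.0), (1, 0.0), (2, 0.1), (3,  0.0)]
print(discrete_frechet(pred, gt))  # small value -> curves are close
```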