Authors: Yahui Liu, Bin Tian, Yisheng Lv, Lingxi Li, Fei-Yue Wang
Affiliations: [1] The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190; [2] The School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100190; [3] IEEE; [4] The Transportation and Autonomous Systems Institute (TASI) and the Department of Electrical and Computer Engineering, Purdue School of Engineering and Technology, Indiana University-Purdue University Indianapolis (IUPUI), Indianapolis 46202, USA
Source: IEEE/CAA Journal of Automatica Sinica, 2024, No. 1, pp. 231-239 (9 pages)
Funding: Supported in part by the National Natural Science Foundation of China (61876011); the National Key Research and Development Program of China (2022YFB4703700); the Key Research and Development Program 2020 of Guangzhou (202007050002); and the Key-Area Research and Development Program of Guangdong Province (2020B090921003).
Abstract: Recently, there have been some attempts to apply Transformers to 3D point cloud classification. To reduce computation, most existing methods focus on local spatial attention, but they ignore point content and fail to establish relationships between distant but relevant points. To overcome the limitation of local spatial attention, we propose a point content-based Transformer architecture, called PointConT for short. It exploits the locality of points in the feature space (content-based): the sampled points with similar features are clustered into the same class and self-attention is computed within each class, enabling an effective trade-off between capturing long-range dependencies and computational complexity. We further introduce an inception feature aggregator for point cloud classification, which uses parallel structures to aggregate high-frequency and low-frequency information in each branch separately. Extensive experiments show that our PointConT model achieves remarkable performance on point cloud shape classification. In particular, our method achieves 90.3% Top-1 accuracy on the hardest setting of ScanObjectNN. Source code of this paper is available at https://github.com/yahuiliu99/PointConT.
Keywords: Content-based Transformer; deep learning; feature aggregator; local attention; point cloud classification
CLC Number: TP391.41 [Automation and Computer Technology - Computer Application Technology]
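The abstract describes content-based local attention: sampled points are clustered by feature similarity, and self-attention is computed only within each cluster, trading full pairwise attention for per-cluster attention. The snippet below is a minimal, hedged sketch of that idea in PyTorch; the clustering routine (a few k-means steps) and all names such as ContentBasedAttention and kmeans_assign are illustrative assumptions, not the authors' implementation, which is available in the linked repository.

```python
# Hedged sketch: cluster point features by content, then attend within clusters.
# The grouping strategy (simple k-means) and cluster handling are assumptions
# for illustration only; see the PointConT repository for the actual code.
import torch
import torch.nn as nn


def kmeans_assign(x, num_clusters, iters=5):
    """Assign each point feature to a cluster id via a few k-means steps.

    x: (N, C) point features for one sample.
    Returns: (N,) long tensor of cluster ids.
    """
    idx = torch.randperm(x.size(0))[:num_clusters]
    centers = x[idx].clone()                      # (K, C) initial centers
    for _ in range(iters):
        dist = torch.cdist(x, centers)            # (N, K) feature distances
        assign = dist.argmin(dim=1)               # (N,) nearest-center ids
        for k in range(num_clusters):
            mask = assign == k
            if mask.any():
                centers[k] = x[mask].mean(dim=0)  # update non-empty centers
    return assign


class ContentBasedAttention(nn.Module):
    """Self-attention restricted to feature-space (content-based) clusters."""

    def __init__(self, dim, num_heads=4, num_clusters=8):
        super().__init__()
        self.num_clusters = num_clusters
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):
        # x: (B, N, C) sampled point features.
        out = torch.empty_like(x)
        for b in range(x.size(0)):                # plain loops, kept simple for clarity
            assign = kmeans_assign(x[b].detach(), self.num_clusters)
            for k in range(self.num_clusters):
                mask = assign == k
                if not mask.any():
                    continue
                tokens = x[b, mask].unsqueeze(0)  # (1, n_k, C) one cluster
                attended, _ = self.attn(tokens, tokens, tokens)
                out[b, mask] = attended.squeeze(0)
        return out


if __name__ == "__main__":
    feats = torch.randn(2, 256, 64)               # toy batch of point features
    model = ContentBasedAttention(dim=64)
    print(model(feats).shape)                     # torch.Size([2, 256, 64])
```

Because each cluster contains only points with similar content, attention cost scales with cluster size rather than with the full point count, which is the trade-off between long-range dependencies and computational complexity mentioned in the abstract.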