Authors: Yifan Wu (吴佚凡), Kun Wang (王坤), Congqiao Li (李聪乔), Huilin Qu (曲慧麟), Jingya Zhu (朱经亚)
Affiliations: [1] College of Science, University of Shanghai for Science and Technology, Shanghai 200093, China; [2] School of Physics and State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, China; [3] CERN, EP Department, CH-1211 Geneva 23, Switzerland; [4] School of Physics and Electronics, Henan University, Kaifeng 475004, China
Source: Chinese Physics C, 2025, No. 1, pp. 164-176 (13 pages)
Fund: Supported by the National Natural Science Foundation of China (12275066, 11605123).
Abstract: In this paper, we introduce the More-Interaction Particle Transformer (MIParT), a novel deep-learning neural network designed for jet tagging. This framework incorporates our own design, the More-Interaction Attention (MIA) mechanism, which increases the dimensionality of particle interaction embeddings. We tested MIParT using the top tagging and quark-gluon datasets. Our results show that MIParT not only matches the accuracy and AUC of LorentzNet and a series of Lorentz-equivariant methods, but also significantly outperforms the ParT model in background rejection. Specifically, it improves background rejection by approximately 25% at a signal efficiency of 30% on the top tagging dataset and by 3% on the quark-gluon dataset. Additionally, MIParT requires only 30% of the parameters and 53% of the computational complexity needed by ParT, proving that high performance can be achieved with reduced model complexity. For very large datasets, we double the dimension of particle embeddings, referring to this variant as MIParT-Large (MIParT-L). We found that MIParT-L can further capitalize on the knowledge from large datasets. Starting from a model pre-trained on the 100M-jet JetClass dataset, the background rejection of fine-tuned MIParT-L improves by 39% on the top tagging dataset and by 6% on the quark-gluon dataset, surpassing that of fine-tuned ParT; specifically, the background rejection of fine-tuned MIParT-L improves by an additional 2% compared to that of fine-tuned ParT. These results suggest that MIParT has the potential to advance efficiency benchmarks for jet tagging and event identification in particle physics.
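To illustrate the general idea behind interaction-biased attention of the kind the abstract describes, the sketch below shows a minimal multi-head attention layer in which pairwise particle interaction features are embedded and added to the attention logits, in the style of ParT-like taggers. This is a hypothetical illustration only: the class name, the feature count of 4, the embedding sizes, and the single-MLP bias projection are assumptions for the example and are not taken from the MIParT paper.

```python
import torch
import torch.nn as nn

class MoreInteractionAttention(nn.Module):
    """Illustrative attention layer with a pairwise-interaction bias.

    Hypothetical sketch (not the authors' implementation): raw pairwise
    features u of shape [B, N, N, 4] are embedded into a higher-dimensional
    interaction space and projected to one bias value per attention head,
    then added to the attention logits before the softmax.
    """

    def __init__(self, d_model=128, n_heads=8, d_interaction=64):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # Embed raw pairwise features into a larger interaction space,
        # then project one bias channel per attention head.
        self.interaction_embed = nn.Sequential(
            nn.Linear(4, d_interaction), nn.GELU(),
            nn.Linear(d_interaction, n_heads),
        )

    def forward(self, x, u):
        # x: [B, N, d_model] per-particle embeddings
        # u: [B, N, N, 4] raw pairwise features (e.g. kinematic pair variables)
        B, N, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(B, N, self.n_heads, self.d_head).transpose(1, 2)
        k = k.view(B, N, self.n_heads, self.d_head).transpose(1, 2)
        v = v.view(B, N, self.n_heads, self.d_head).transpose(1, 2)
        attn = (q @ k.transpose(-2, -1)) / self.d_head ** 0.5   # [B, H, N, N]
        bias = self.interaction_embed(u).permute(0, 3, 1, 2)    # [B, H, N, N]
        attn = (attn + bias).softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, -1)
        return self.out(out)

# Toy usage with random tensors
x = torch.randn(2, 16, 128)    # 2 jets, 16 particles, 128-dim embeddings
u = torch.randn(2, 16, 16, 4)  # pairwise interaction features
y = MoreInteractionAttention()(x, u)
print(y.shape)                 # torch.Size([2, 16, 128])
```

In this sketch, enlarging `d_interaction` corresponds loosely to the abstract's point about increasing the dimensionality of the particle interaction embeddings; the actual MIA block structure should be taken from the paper itself.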
Keywords: jet tagging; collider physics; machine learning
Classification: O57 [Science: Particle Physics and Nuclear Physics]