Forget less, count better: a domain-incremental self-distillation learning benchmark for lifelong crowd counting


Authors: Jiaqi GAO, Jingqi LI, Hongming SHAN, Yanyun QU, James Z. WANG, Fei-Yue WANG, Junping ZHANG

Affiliations: [1] Shanghai Key Laboratory of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai 200433, China; [2] Institute of Science and Technology for Brain-inspired Intelligence, Fudan University, Shanghai 200433, China; [3] Shanghai Center for Brain Science and Brain-inspired Technology, Shanghai 201210, China; [4] School of Information Science and Technology, Xiamen University, Xiamen 361005, China; [5] College of Information Sciences and Technology, The Pennsylvania State University, University Park, PA 16802, USA; [6] State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China

Source: Frontiers of Information Technology & Electronic Engineering (信息与电子工程前沿(英文版)), 2023, No. 2, pp. 187-202 (16 pages)

Funding: Project supported by the National Natural Science Foundation of China (Nos. 62176059, 62101136, and U1811463); the Shanghai Municipal Science and Technology Major Project (No. 2018SHZDZX01); Zhangjiang Lab; the Shanghai Municipal Science and Technology Project (No. 20JC1419500); the Shanghai Sailing Program (No. 21YF1402800); the Natural Science Foundation of Shanghai (No. 21ZR1403600); and the Shanghai Center for Brain Science and Brain-inspired Technology.

Abstract: Crowd counting has important applications in public safety and pandemic control. A robust and practical crowd counting system must be capable of continuously learning from newly incoming domain data in real-world scenarios, rather than fitting a single domain only. Off-the-shelf methods have several drawbacks when handling multiple domains: (1) due to discrepancies in the intrinsic data distributions of different domains, a model's performance on old domains becomes limited (and may even drop dramatically) after it is trained on images from new domains, a phenomenon known as catastrophic forgetting; (2) a model well trained on a specific domain performs imperfectly on other, unseen domains because of domain shift; (3) storage overhead grows linearly, whether all data are mixed for training or dozens of separate models are trained for different domains as new ones become available. To overcome these issues, we investigate a new crowd counting task in an incremental-domain training setting, called lifelong crowd counting. Its goal is to alleviate catastrophic forgetting and improve generalization ability using a single model updated over the incremental domains. Specifically, we propose a self-distillation learning framework as a benchmark (forget less, count better, or FLCB) for lifelong crowd counting, which helps the model leverage previously learned knowledge in a sustainable manner for better crowd counting and mitigates forgetting when new data arrive. A new quantitative metric, normalized backward transfer (nBwT), is developed to evaluate the degree of forgetting of the model in the lifelong learning process. Extensive experimental results demonstrate the superiority of the proposed benchmark in achieving a low catastrophic forgetting degree and strong generalization ability.
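The abstract names two concrete mechanisms: a self-distillation loss that constrains the updated model with predictions from a frozen copy trained on earlier domains, and the nBwT metric for quantifying forgetting. The sketch below illustrates both ideas in PyTorch under stated assumptions; the function names, the use of MSE for both loss terms, the balancing weight alpha, and the exact normalization inside nBwT are illustrative guesses, not the authors' implementation.

import torch
import torch.nn.functional as F


def self_distillation_loss(model, old_model, images, gt_density, alpha=0.5):
    # Density regression on the new domain plus a distillation term that keeps
    # the updated model close to a frozen snapshot trained on earlier domains.
    # alpha (hypothetical) trades plasticity on new data against stability.
    pred = model(images)                       # predicted density maps
    loss_new = F.mse_loss(pred, gt_density)    # fit the incoming domain
    with torch.no_grad():
        old_pred = old_model(images)           # teacher: previous-domain model
    loss_distill = F.mse_loss(pred, old_pred)  # preserve earlier knowledge
    return loss_new + alpha * loss_distill


def normalized_backward_transfer(mae):
    # mae[t][i] is the counting MAE on domain i after training up to domain t.
    # Positive values mean the error on old domains grew (forgetting); each
    # term is normalized by the error measured right after that domain was
    # learned. The paper's exact normalization may differ from this sketch.
    T = len(mae)
    terms = [(mae[T - 1][i] - mae[i][i]) / mae[i][i] for i in range(T - 1)]
    return sum(terms) / len(terms)

In a typical lifelong training loop, one would snapshot the current model (e.g., with copy.deepcopy), freeze its parameters, and use it as old_model before fitting each new domain, so that the distillation target always reflects the knowledge accumulated so far.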

Keywords: Crowd counting; Knowledge distillation; Lifelong learning

CLC number: TP18 (Automation and Computer Technology: Control Theory and Control Engineering)

 
