Event reconstruction algorithm based on Transformer residual network


Authors: Wang Lixi, Liu Yunping [1], Tang Qinqin, Li Jiahao (School of Automation, Nanjing University of Information Science & Technology, Nanjing 210016, China; School of Rail Transportation, Wuxi University, Wuxi 214015, China)

Affiliations: [1] School of Automation, Nanjing University of Information Science & Technology, Nanjing 210016, Jiangsu, China; [2] School of Rail Transportation, Wuxi University, Wuxi 214015, Jiangsu, China

Source: Application of Electronic Technique (《电子技术应用》), 2024, Issue 11, pp. 28-34 (7 pages)

Abstract: Current artificial visual systems still struggle to handle real-world scenarios involving high-speed motion and high-dynamic-range scenes. Event cameras have the capability to address these challenges owing to their low latency and high dynamic range when capturing fast-moving objects. However, reconstructing events into video while preserving this speed remains challenging because of the highly sparse and dynamic nature of event data. This paper therefore proposes an event-stream reconstruction algorithm based on a Transformer residual network and optical flow estimation. By jointly training optical flow estimation and event reconstruction, a self-supervised reconstruction process is achieved, and deblurring preprocessing and sub-pixel upsampling modules are introduced to enhance reconstruction quality. Experimental results demonstrate that the proposed approach effectively improves the reconstruction quality of event streams on public datasets.
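The abstract names two architectural components, a Transformer residual network and a sub-pixel upsampling head. The sketch below is a minimal, hypothetical PyTorch illustration of those two ideas only; it is not the authors' released code, and the layer sizes, channel counts, and the voxel-grid input shape are assumptions made for the example.

```python
# Hypothetical sketch of the two modules named in the abstract: a residual
# Transformer block over event-feature tokens and a sub-pixel (PixelShuffle)
# upsampling head. All shapes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn


class ResidualTransformerBlock(nn.Module):
    """Self-attention block with residual (skip) connections."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim)
        )

    def forward(self, x):                       # x: (B, N, dim) token sequence
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]  # residual attention
        x = x + self.mlp(self.norm2(x))                     # residual MLP
        return x


class SubPixelUpsample(nn.Module):
    """Sub-pixel upsampling: conv to out_ch * scale^2 channels, then PixelShuffle."""

    def __init__(self, in_ch: int, out_ch: int, scale: int = 2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch * scale ** 2, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x):                       # x: (B, in_ch, H, W)
        return self.shuffle(self.conv(x))       # -> (B, out_ch, scale*H, scale*W)


if __name__ == "__main__":
    # Assumed input: event features embedded into 64 channels on a 32x32 grid.
    feat = torch.randn(1, 64, 32, 32)
    tokens = feat.flatten(2).transpose(1, 2)             # (1, 1024, 64)
    tokens = ResidualTransformerBlock(64)(tokens)
    feat = tokens.transpose(1, 2).reshape(1, 64, 32, 32)
    frame = SubPixelUpsample(64, 1, scale=2)(feat)       # reconstructed intensity
    print(frame.shape)                                   # torch.Size([1, 1, 64, 64])
```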

Keywords: event camera; video reconstruction; deep learning; optical flow estimation

Classification number: TP193.41 [Automation and Computer Technology / Control Theory and Control Engineering]

 
