Authors: Heng Wei; Yu Jian; Da Feipeng [1,2,3]
Affiliations: [1] School of Automation, Southeast University, Nanjing 210096, Jiangsu, China; [2] Key Laboratory of Measurement and Control of Complex Systems of Engineering, Ministry of Education, Southeast University, Nanjing 210096, Jiangsu, China; [3] Shenzhen Research Institute, Southeast University, Shenzhen 518063, Guangdong, China
Source: Acta Optica Sinica (光学学报), 2023, No. 14, pp. 173-183 (11 pages)
Funding: National Natural Science Foundation of China (51475092); Frontier Leading Technology Basic Research Project of Jiangsu Province (BK20192004C).
Abstract: To address the artifacts and defects caused by parallax when stitching in wide-baseline scenes, a real-time video stitching method based on dense viewpoint interpolation is proposed. The method supplements dense intermediate viewpoints along the baseline between the left and right cameras, synthesizing a smoothly transitioning interpolated view for the overlapping region of the stitch so that the multiple inputs are better aligned. To generate this interpolated view, a network is designed, drawing on the matching cost used in stereo matching, to predict a pixel displacement field for sampling the original views. Without ground-truth interpolated views, the method uses the spatial transformation relations between viewpoints to guide the network in learning the view-generation rules. Experimental results show that the proposed method improves the visual quality of stitched video frames and runs in real time, meeting the requirements of practical applications.

Objective A video stitching method based on dense viewpoint interpolation is proposed to solve the problem of artifacts and defects caused by parallax when stitching under wide baseline scenes. Video stitching technology can facilitate access to a broader field of view and plays a vital role in security surveillance, intelligent driving, virtual reality, and video conferencing. One of the biggest challenges of the stitching task is parallax. When the cameras' optical centers perfectly coincide, they are unaffected by parallax and can easily synthesize perfect images. However, achieving the complete coincidence of camera optical centers in practical applications is not easy. The cameras are also scattered in some scenes, such as vehicle-mounted panoramic systems and wide-field security surveillance systems. Therefore, it is important to study the problem of stitching in wide baseline scenes. A standard method uses a global homography matrix for alignment, but it has no parallax processing capability, which results in obvious flaws in wide baseline and large parallax scenes. To solve these problems, many researchers have proposed solutions from the perspectives of multiple homographies and mesh optimization. However, mesh deformation may introduce significant shape distortion. Some deep learning methods combine vision tasks of optical flow, semantic alignment, image fusion, and image reconstruction to help deal with the stitching problem. However, the parameter information of the cameras is not fully utilized, so the stitching results sometimes still show defects. Therefore, we wish to make full use of the parameter information of the cameras and synthesize a smooth interpolated view by supplementing intermediate viewpoints between the cameras to achieve better visual perception.

Methods The present study proposes a real-time video stitching method based on dense viewpoint interpolation. The method focuses on the overlapping regions of stitching and synthesizes the smooth interpolated view by supplementing dense intermediate viewpoints
Classification: TP391 [Automation and Computer Technology - Computer Application Technology]
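The abstract describes the core mechanism only at a high level: a network predicts a per-pixel displacement field (informed by a stereo matching cost), and intermediate viewpoints along the baseline are synthesized by sampling the original views with that field. The sketch below is a minimal illustration of that warping step under stated assumptions, not the paper's implementation: the `DisplacementNet` stand-in, the interpolation fraction `t`, and the use of PyTorch's `grid_sample` for differentiable bilinear sampling are all hypothetical choices made for illustration.

```python
# Hypothetical sketch: synthesize intermediate viewpoints between two cameras
# by predicting a horizontal pixel displacement field and warping one source
# view with differentiable sampling. Not the paper's network or training setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisplacementNet(nn.Module):
    """Toy stand-in for the displacement-field predictor (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),  # per-pixel horizontal displacement
        )

    def forward(self, left, right):
        return self.net(torch.cat([left, right], dim=1))

def warp_to_viewpoint(src, disp, t):
    """Warp `src` toward an intermediate viewpoint at fraction t of the baseline
    by sampling at x + t * disp (bilinear, differentiable)."""
    b, _, h, w = src.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=src.device, dtype=src.dtype),
        torch.arange(w, device=src.device, dtype=src.dtype),
        indexing="ij",
    )
    x_new = xs.unsqueeze(0) + t * disp.squeeze(1)      # shifted sampling positions
    grid = torch.stack(
        [2.0 * x_new / (w - 1) - 1.0,                  # normalize x to [-1, 1]
         (2.0 * ys / (h - 1) - 1.0).unsqueeze(0).expand_as(x_new)],
        dim=-1,
    )
    return F.grid_sample(src, grid, mode="bilinear", align_corners=True)

# Usage: densely sample a few intermediate viewpoints along the baseline.
left = torch.rand(1, 3, 128, 256)
right = torch.rand(1, 3, 128, 256)
model = DisplacementNet()
disp = model(left, right)
intermediates = [warp_to_viewpoint(left, disp, t) for t in (0.25, 0.5, 0.75)]
```

Training without ground-truth intermediate views would then rely on the spatial transformation relations between viewpoints mentioned in the abstract, for example by enforcing consistency between views warped to the same intermediate position from the left and right cameras; that loss is not sketched here.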