Affiliation: [1] College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
Source: Journal of Image and Graphics (《中国图象图形学报》), 2017, Issue 7, pp. 935-945 (11 pages)
Funding: National Natural Science Foundation of China (U1531110); Fundamental Research Funds for the Central Universities (NZ2015202)
Abstract: Objective: Current feature-trajectory video stabilization algorithms cannot simultaneously satisfy the requirements of trajectory length, robustness, and trajectory utilization rate, so their output is prone to distortion or local instability. To address this problem, a feature-trajectory stabilization algorithm based on trifocal-tensor reprojection is proposed. Method: Long virtual trajectories are constructed using the trifocal tensor, and a stabilized view is defined by smoothing these virtual trajectories. Real feature points are then reprojected into the stabilized view via the trifocal tensor, thereby smoothing the real feature trajectories, and stabilized frames are finally generated by mesh warping. Result: The algorithm was tested on a large set of videos of different types and compared with representative feature-trajectory stabilization algorithms and commercial software, namely a trajectory-growth-based algorithm, an algorithm based on epipolar-geometry point transfer, and the commercial tool Warp Stabilizer. The proposed algorithm requires shorter trajectories, achieves a higher trajectory utilization rate, and is more robust: for 92% of severely shaking videos, it outperforms the trajectory-growth-based algorithm; for 93% of videos lacking long trajectories and 71.4% of videos with rolling-shutter distortion, it outperforms Warp Stabilizer; and compared with the epipolar point-transfer-based algorithm, it degenerates in fewer cases, avoiding the failures caused by intermittently static cameras or pure camera rotation. Conclusion: The proposed algorithm places few constraints on camera motion patterns and scene depth. It handles common stabilization problems such as lack of parallax, nonplanar scene structure, and rolling-shutter distortion, and still performs well when long trajectories are scarce, e.g., under camera panning, motion blur, or severe shake; its running time, however, remains a weakness. Objective Video stabilization is one of the key research areas of computer vision. Currently, the three major categories of video stabilization algorithms are 2D global motion, 2D local motion, and feature trajectory stabilization. The 2D global and local motion stabilization algorithms usually cannot achieve a satisfactory stabilization result in scenes with nonplanar depth variations. By contrast, the feature trajectory stabilization algorithm handles nonplanar depth variations well in such scenes and outperforms the other two classes of algorithms. However, the feature trajectory stabilization algorithm normally suffers from distorted output and locally unstable results because of its drawbacks in trajectory length, robustness, and trajectory utilization rate. To solve this problem, this paper proposes a feature trajectory stabilization algorithm using the trifocal tensor. Method This algorithm extracts real feature point trajectories in the video with the KLT algorithm and leverages the RANSAC algorithm to eliminate mismatches among the tracked feature points. The algorithm then adaptively selects a segment of the real trajectories to initialize the virtual trajectories based on the length of the real trajectories. A long virtual trajectory is constructed by applying a trifocal tensor transfer to extend the initial virtual trajectory.
This virtual trajectory extension process stops when either the virtual trajectory exceeds half of the frame width or height, or the difference between the mean and median of the transferred points is larger than five pixels. When the number of virtual trajectories passing through a frame falls below 300, new initial virtual trajectories are added using the real trajectories on that frame. With the acquired long trajectories, the algorithm odd-extends the beginning of the virtual trajectories to the first frame and the ending of the virtual trajectories to the last frame. The stabilized view is defined by the smoothed virtual trajectories from the output of
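The two stopping criteria for extending a virtual trajectory (the mean/median disagreement of transferred points exceeding five pixels, and the trajectory span exceeding half the frame width or height) can be sketched as below. This is a minimal illustration under stated assumptions, not the paper's implementation: the function and variable names are hypothetical, and "exceeds half of the frame width or height" is interpreted here as the displacement from the trajectory's starting point.

```python
import numpy as np

def should_stop_extension(transferred_pts, start_pt, frame_w, frame_h):
    """Decide whether to stop extending one virtual trajectory.

    transferred_pts: (N, 2) array of candidate positions for the next frame,
        produced by trifocal-tensor transfer from N neighboring real trajectories.
    start_pt: (2,) position where this virtual trajectory was initialized.
    frame_w, frame_h: frame width and height in pixels.
    """
    transferred_pts = np.asarray(transferred_pts, dtype=float)
    mean_pt = transferred_pts.mean(axis=0)
    median_pt = np.median(transferred_pts, axis=0)

    # Criterion 1: the transferred points disagree; a mean-vs-median gap
    # larger than five pixels signals unreliable transfer.
    if np.linalg.norm(mean_pt - median_pt) > 5.0:
        return True

    # Criterion 2 (interpreted): the trajectory has moved more than half
    # the frame width or height away from where it started.
    dx, dy = np.abs(mean_pt - np.asarray(start_pt, dtype=float))
    if dx > frame_w / 2.0 or dy > frame_h / 2.0:
        return True

    return False
```

In a full pipeline, this check would run once per extension step; when it returns True, the virtual trajectory is frozen and, if the per-frame count drops below 300, a new one is seeded from a real trajectory on the same frame.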
Classification: TP391.41 [Automation and Computer Technology — Computer Application Technology]