Drone-Based Public Surveillance Using 3D Point Clouds and Neuro-Fuzzy Classifier  

Authors: Yawar Abbas, Aisha Ahmed Alarfaj, Ebtisam Abdullah Alabdulqader, Asaad Algarni, Ahmad Jalal, Hui Liu

Affiliations: [1] Faculty of Computing and AI, Air University, Islamabad, 44000, Pakistan; [2] Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, 11671, Saudi Arabia; [3] Department of Information Technology, College of Computer and Information Sciences, King Saud University, Riyadh, 12372, Saudi Arabia; [4] Department of Computer Sciences, Faculty of Computing and Information Technology, Northern Border University, Rafha, 91911, Saudi Arabia; [5] Department of Computer Science and Engineering, College of Informatics, Korea University, Seoul, 02841, Republic of Korea; [6] Cognitive Systems Lab, University of Bremen, Bremen, 28359, Germany

Source: Computers, Materials & Continua, 2025, Issue 3, pp. 4759-4776 (18 pages)

Funding: Funded by the Open Access Initiative of the University of Bremen and the DFG via SuUB Bremen, and by the Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2024R348), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Abstract: Human Activity Recognition (HAR) in drone-captured videos has become popular because of interest in fields such as video surveillance, sports analysis, and human-robot interaction. However, recognizing actions from such videos poses several challenges: variations in human motion, complex backdrops, motion blur, occlusions, and restricted camera angles. This research presents a human activity recognition system that addresses these challenges using drones' red-green-blue (RGB) videos. The proposed system first partitions videos into frames and applies bilateral filtering to enhance object foregrounds while reducing background interference, before converting the RGB frames to grayscale. The YOLO (You Only Look Once) algorithm then detects and extracts humans from each frame, and their skeletons are obtained for further processing. The extracted features include joint angles, displacement and velocity, histogram of oriented gradients (HOG), 3D points, and geodesic distance. These features are optimized using Quadratic Discriminant Analysis (QDA) and fed to a Neuro-Fuzzy Classifier (NFC) for activity classification. Evaluations on the Drone-Action, Unmanned Aerial Vehicle (UAV)-Gesture, and Okutama-Action datasets substantiate the proposed system's superior accuracy over existing methods. In particular, the system achieves recognition rates of 93% on Drone-Action, 97% on UAV-Gesture, and 81% on Okutama-Action, demonstrating its reliability and ability to recognize human activity in drone videos.
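The following is a minimal sketch of the frame preprocessing and person detection steps described in the abstract (bilateral filtering, grayscale conversion, YOLO-based human extraction). It assumes OpenCV and the ultralytics YOLO package; the model weights, video path, filter parameters, and function names are illustrative assumptions and are not taken from the paper.

```python
# Hedged sketch of the preprocessing/detection stage; not the authors' implementation.
import cv2
from ultralytics import YOLO  # assumed YOLO implementation (ultralytics package)


def preprocess_frame(frame_bgr):
    """Bilateral filtering to suppress background clutter, then RGB-to-grayscale conversion."""
    # Bilateral filter smooths homogeneous regions while preserving object edges.
    filtered = cv2.bilateralFilter(frame_bgr, d=9, sigmaColor=75, sigmaSpace=75)
    return cv2.cvtColor(filtered, cv2.COLOR_BGR2GRAY)


def detect_humans(frame_bgr, model):
    """Run YOLO detection and return bounding boxes for people (COCO class 0)."""
    results = model(frame_bgr, verbose=False)[0]
    return [box.xyxy[0].tolist() for box in results.boxes if int(box.cls) == 0]


if __name__ == "__main__":
    model = YOLO("yolov8n.pt")                 # hypothetical model weights
    cap = cv2.VideoCapture("drone_clip.mp4")   # hypothetical input video
    ok, frame = cap.read()
    while ok:
        gray = preprocess_frame(frame)
        people = detect_humans(frame, model)
        # Skeleton extraction and feature computation (joint angles, HOG,
        # 3D points, geodesic distance) would follow here.
        ok, frame = cap.read()
    cap.release()
```

The downstream steps (QDA-based feature optimization and the Neuro-Fuzzy Classifier) are omitted here, as the paper's abstract does not specify their configuration.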

Keywords: activity recognition; geodesic distance; pattern recognition; neuro-fuzzy classifier

Classification: TP3 [Automation and Computer Technology - Computer Science and Technology]

 
