Authors: Sameema Tariq, Ata-Ur-Rehman, Maria Abubakar, Waseem Iqbal, Hatoon S. Alsagri, Yousef A. Alduraywish, Haya Abdullah Alhakbani
Affiliations: [1] Department of Electrical Engineering, University of Engineering and Technology, Lahore 54890, Pakistan [2] Department of Electrical Engineering, National University of Sciences and Technology, Islamabad 24090, Pakistan [3] Department of Business and Computing, Ravensbourne University London, London SE10 0EW, England [4] Electrical and Computer Engineering Department, College of Engineering, Sultan Qaboos University, Muscat 123, Oman [5] College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11673, Saudi Arabia
Source: Computers, Materials & Continua, 2024, No. 11, pp. 2493-2515 (23 pages)
Funding: Supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU), grant number IMSIU-RG23148.
Abstract: In video surveillance, anomaly detection requires training machine learning models on spatio-temporal video sequences. However, video-only data is sometimes insufficient to accurately detect all abnormal activities. We therefore propose a novel audio-visual spatio-temporal autoencoder specifically designed to detect anomalies in video surveillance by utilizing audio data alongside video data. This paper presents a competitive multi-modal recurrent neural network approach to anomaly detection that combines separate spatial and temporal autoencoders to leverage both spatial and temporal features in audio-visual data. The proposed model is trained to produce low reconstruction error for normal data and high reconstruction error for abnormal data, effectively distinguishing between the two and assigning an anomaly score. Training is conducted on normal datasets, while testing is performed on both normal and anomalous datasets. The anomaly scores from the individual models are combined using a late fusion technique, and a deep dense-layer model is trained to produce decisive scores indicating whether a sequence is normal or anomalous. The model's performance is evaluated on the University of California, San Diego Pedestrian 2 (UCSD PED 2), University of Minnesota (UMN), and Tampere University of Technology (TUT) Rare Sound Events datasets using six evaluation metrics. Compared with state-of-the-art methods, it attains a high Area Under the Curve (AUC) and a low Equal Error Rate (EER), achieving an AUC of 93.1 and an EER of 8.1 on the UCSD dataset, and an AUC of 94.9 and an EER of 5.9 on the UMN dataset. The evaluations demonstrate that the joint results from the combined audio-visual model outperform those from the separate models, highlighting the competitive advantage of the proposed multi-modal approach.
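For intuition, the sketch below illustrates the general idea summarized in the abstract: per-modality autoencoders whose reconstruction error acts as an anomaly score, followed by a small dense late-fusion head. It is a minimal toy example with hypothetical shapes and plain convolutional autoencoders standing in for the paper's spatio-temporal recurrent models; it is not the authors' actual architecture.

```python
# Minimal sketch (assumptions: toy 64x64 inputs, simple conv autoencoders as stand-ins
# for the paper's spatio-temporal models; not the published implementation).
import torch
import torch.nn as nn


class ConvAutoencoder(nn.Module):
    """Per-modality autoencoder; reconstruction error serves as the anomaly signal."""

    def __init__(self, channels: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, channels, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def reconstruction_score(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Per-sample mean squared reconstruction error: low for normal, high for anomalous data."""
    with torch.no_grad():
        recon = model(x)
    return ((recon - x) ** 2).flatten(1).mean(dim=1)


# Late-fusion head: a small dense model mapping the per-modality scores to a final decision.
fusion_head = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())

if __name__ == "__main__":
    video_ae = ConvAutoencoder(channels=3)   # video frames (hypothetical 64x64 RGB)
    audio_ae = ConvAutoencoder(channels=1)   # audio spectrograms (hypothetical 64x64 patches)

    frames = torch.rand(4, 3, 64, 64)
    spectrograms = torch.rand(4, 1, 64, 64)

    video_scores = reconstruction_score(video_ae, frames)
    audio_scores = reconstruction_score(audio_ae, spectrograms)

    # Late fusion: stack the two anomaly scores and let the dense head decide.
    fused = fusion_head(torch.stack([video_scores, audio_scores], dim=1))
    print(fused.squeeze(1))  # after training, values near 1 would indicate an anomalous sequence
```

In this sketch the autoencoders would be trained only on normal sequences, so anomalous inputs reconstruct poorly and receive higher scores, which the fusion head then combines across modalities.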
Keywords: acoustic-visual anomaly detection; sequence-to-sequence autoencoder; reconstruction error; late fusion; regularity score
Classification: TP183 [Automation and Computer Technology / Control Theory and Control Engineering]