A Recurrent Neural Network for Multimodal Anomaly Detection by Using Spatio-Temporal Audio-Visual Data  


Authors: Sameema Tariq, Ata-Ur-Rehman, Maria Abubakar, Waseem Iqbal, Hatoon S. Alsagri, Yousef A. Alduraywish, Haya Abdullah A. Alhakbani

Affiliations: [1] Department of Electrical Engineering, University of Engineering and Technology, Lahore 54890, Pakistan; [2] Department of Electrical Engineering, National University of Sciences and Technology, Islamabad 24090, Pakistan; [3] Department of Business and Computing, Ravensbourne University London, London SE10 0EW, England; [4] Electrical and Computer Engineering Department, College of Engineering, Sultan Qaboos University, Muscat 123, Oman; [5] College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11673, Saudi Arabia

Source: Computers, Materials & Continua, 2024, No. 11, pp. 2493-2515 (23 pages)

Funding: Supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU), grant number IMSIU-RG23148.

Abstract: In video surveillance, anomaly detection requires training machine learning models on spatio-temporal video sequences. However, video-only data is sometimes insufficient to accurately detect all abnormal activities. We therefore propose a novel audio-visual spatio-temporal autoencoder designed specifically to detect anomalies in video surveillance by exploiting audio data alongside video data. This paper presents a competitive multi-modal recurrent neural network approach to anomaly detection that combines separate spatial and temporal autoencoders to leverage both spatial and temporal features in audio-visual data. The proposed model is trained to produce a low reconstruction error for normal data and a high error for abnormal data, effectively distinguishing between the two and assigning an anomaly score. Training is conducted on normal datasets, while testing is performed on both normal and anomalous datasets. The anomaly scores from the individual models are combined using a late fusion technique, and a deep dense-layer model is trained to produce a decisive score indicating whether a sequence is normal or anomalous. The model's performance is evaluated on the University of California, San Diego Pedestrian 2 (UCSD PED 2), University of Minnesota (UMN), and Tampere University of Technology (TUT) Rare Sound Events datasets using six evaluation metrics. Compared with state-of-the-art methods, it shows a high Area Under the Curve (AUC) and a low Equal Error Rate (EER), achieving an AUC of 93.1 and an EER of 8.1 on the UCSD PED 2 dataset, and an AUC of 94.9 and an EER of 5.9 on the UMN dataset. The evaluations demonstrate that the joint results of the combined audio-visual model outperform those of the separate models, highlighting the competitive advantage of the proposed multi-modal approach.
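
Illustrative sketch: To make the two-stage scheme in the abstract concrete, the minimal PyTorch sketch below shows per-modality sequence autoencoders whose reconstruction error serves as an anomaly score, followed by a small dense late-fusion network that maps the per-modality scores to a single decision. The LSTM-based autoencoder, feature dimensions, layer sizes, and all names here are illustrative assumptions for exposition, not the architecture or hyperparameters reported in the paper.

    # Minimal sketch of reconstruction-error scoring plus late fusion.
    # All sizes and names are illustrative assumptions; training loops omitted.
    import torch
    import torch.nn as nn

    class SeqAutoencoder(nn.Module):
        """Toy sequence-to-sequence autoencoder standing in for the paper's
        per-modality spatio-temporal models (one instance per modality)."""
        def __init__(self, feat_dim: int, hidden: int = 64):
            super().__init__()
            self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)
            self.decoder = nn.LSTM(hidden, feat_dim, batch_first=True)

        def forward(self, x):                  # x: (batch, time, feat_dim)
            z, _ = self.encoder(x)
            recon, _ = self.decoder(z)
            return recon

    def anomaly_score(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
        """Per-sequence reconstruction error: low on normal data (which the
        autoencoder was trained on), high on unseen anomalous data."""
        with torch.no_grad():
            recon = model(x)
        return ((recon - x) ** 2).mean(dim=(1, 2))   # shape: (batch,)

    class LateFusion(nn.Module):
        """Dense layers that combine the per-modality anomaly scores into a
        single normal/anomalous decision score (late fusion)."""
        def __init__(self, n_modalities: int = 2):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_modalities, 16), nn.ReLU(),
                nn.Linear(16, 1), nn.Sigmoid(),
            )

        def forward(self, scores):             # scores: (batch, n_modalities)
            return self.net(scores).squeeze(-1)

    if __name__ == "__main__":
        video_ae = SeqAutoencoder(feat_dim=128)   # e.g. frame features
        audio_ae = SeqAutoencoder(feat_dim=40)    # e.g. mel-band features
        video_clip = torch.randn(4, 10, 128)      # (batch, frames, features)
        audio_clip = torch.randn(4, 10, 40)
        scores = torch.stack(
            [anomaly_score(video_ae, video_clip),
             anomaly_score(audio_ae, audio_clip)], dim=1)
        decision = LateFusion()(scores)           # near 1 => anomalous
        print(decision.shape)                     # torch.Size([4])

Fusing scalar scores rather than raw features keeps the modalities decoupled, which is the defining trait of a late-fusion design: each autoencoder can be trained and thresholded independently before the fusion network is fit.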

Keywords: acoustic-visual anomaly detection; sequence-to-sequence autoencoder; reconstruction error; late fusion; regularity score

Classification: TP183 [Automation and Computer Technology - Control Theory and Control Engineering]
