Enhancing Communication Accessibility: UrSL-CNN Approach to Urdu Sign Language Translation for Hearing-Impaired Individuals


Authors: Khushal Das, Fazeel Abid, Jawad Rasheed, Kamlish, Tunc Asuroglu, Shtwai Alsubai, Safeeullah Soomro

Affiliations:
[1] Department of Computer Engineering, Modeling Electronics and Systems Engineering, University of Calabria, Rende Cosenza, 87036, Italy
[2] Department of Information Systems, University of Management and Technology, Lahore, 54770, Pakistan
[3] Department of Computer Engineering, Istanbul Sabahattin Zaim University, Istanbul, 34303, Turkey
[4] Department of Software Engineering, Istanbul Nisantasi University, Istanbul, 34398, Turkey
[5] Department of Computer Science, COMSATS University Islamabad, Lahore Campus, Lahore, 54700, Pakistan
[6] Faculty of Medicine and Health Technology, Tampere University, Tampere, 33720, Finland
[7] Department of Computer Science, College of Computer Engineering and Sciences in Al-Kharj, Prince Sattam Bin Abdulaziz University, P.O. Box 151, Al-Kharj, 11942, Saudi Arabia
[8] Department of Computer Science, College of Engineering and Computing, George Mason University, Fairfax, VA 4418, USA

Source: Computer Modeling in Engineering & Sciences, 2024, No. 10, pp. 689-711 (23 pages)

Abstract: Deaf people and people with hearing impairments can communicate using sign language (SL), a visual language. Many works based on resource-rich languages have been proposed; however, work on low-resource languages is still lacking. Unlike other SLs, the visuals of Urdu sign language are distinct. This study presents a novel approach to translating Urdu sign language (UrSL) using the UrSL-CNN model, a convolutional neural network (CNN) architecture designed specifically for this purpose. Unlike existing works that primarily focus on languages with rich resources, this study addresses the challenge of translating a sign language with limited resources. We conducted experiments using two datasets containing 1500 and 78,000 images, employing a methodology comprising four modules: data collection, pre-processing, categorization, and prediction. To enhance prediction accuracy, each sign image was transformed into a greyscale image and underwent noise filtering. Comparative analysis with machine learning baseline methods (support vector machine, Gaussian Naive Bayes, random forest, and the k-nearest neighbors algorithm) on the UrSL alphabets dataset demonstrated the superiority of UrSL-CNN, which achieved an accuracy of 0.95. Additionally, our model exhibited superior performance in Precision, Recall, and F1-score evaluations. This work not only contributes to advancing sign language translation but also holds promise for improving communication accessibility for individuals with hearing impairments.
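The pre-processing step described in the abstract (greyscale conversion followed by noise filtering before the image enters the CNN) can be sketched roughly as follows. The paper does not specify which filter or image size was used, so the 3x3 median filter, the BT.601 greyscale weights, and the function names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 RGB image to an HxW greyscale image (ITU-R BT.601 weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])


def median_filter3(img: np.ndarray) -> np.ndarray:
    """Apply a 3x3 median filter (edge-padded), a common choice for suppressing noise."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    # Stack the nine 3x3-neighbourhood shifts and take the per-pixel median.
    windows = np.stack(
        [padded[i:i + h, j:j + w] for i in range(3) for j in range(3)],
        axis=-1,
    )
    return np.median(windows, axis=-1)


def preprocess(rgb: np.ndarray) -> np.ndarray:
    """Greyscale, denoise, and scale to [0, 1] as input for a sign-classification CNN."""
    gray = median_filter3(to_grayscale(rgb.astype(np.float64)))
    return gray / 255.0
```

A batch of such preprocessed images would then be fed to the categorization and prediction modules; the baseline classifiers named in the abstract (SVM, Gaussian Naive Bayes, random forest, k-NN) operate on the same inputs, typically flattened to feature vectors.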

Keywords: Convolutional neural networks; Pakistan sign language; visual language

Classification: TP391.1 (Automation and Computer Technology: Computer Application Technology)

 
