ALCTS—An Assistive Learning and Communicative Tool for Speech and Hearing Impaired Students  


Authors: Shabana Ziyad Puthu Vedu, Wafaa A. Ghonaim, Naglaa M. Mostafa, Pradeep Kumar Singh

Affiliations: [1] Computer Science Department, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al Kharj, 11942, Saudi Arabia; [2] Faculty of Science, Al-Azhar University, Cairo, 12111, Egypt; [3] Department of Mathematics, Faculty of Science, Al-Azhar University (Girls' Branch), Cairo, 12111, Egypt; [4] Department of Computer Science and Engineering, Central University of Jammu, Jammu and Kashmir, 181143, India

Source: Computers, Materials & Continua, 2025, No. 5, pp. 2599-2617 (19 pages)

Funding: Sponsored by Prince Sattam Bin Abdulaziz University (PSAU) as part of funding for its SDG Roadmap Research Funding Programme, project number PSAU-2023-SDG-2023/SDG/31.

Abstract: Hearing and speech impairment can be congenital or acquired. Hearing and speech-impaired students often hesitate to pursue higher education in reputable institutions because of the challenges they face. However, the development of automated assistive learning tools within the educational field has empowered disabled students to pursue higher education in any field of study. Assistive learning devices enable students to access institutional resources and facilities fully. The proposed assistive learning and communication tool allows hearing and speech-impaired students to interact productively with their teachers and classmates. The tool converts audio signals into sign language videos for speech and hearing-impaired students to follow, and converts sign language into text for teachers to follow. It is implemented with customized deep learning models: convolutional neural networks (CNN), residual neural networks (ResNet), and stacked long short-term memory (LSTM) networks. The tool is a novel framework that interprets both static and dynamic gesture actions in American Sign Language (ASL). Such communicative tools empower the speech and hearing impaired to communicate effectively in a classroom environment and foster inclusivity. The customized deep learning models were developed and experimentally evaluated with standard performance metrics. The model achieves an accuracy of 99.7% for static gesture classification and 99% for a specific vocabulary of gesture action words. This two-way communicative and educational tool encourages social inclusion and a promising career for disabled students.
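The abstract names stacked LSTM networks for classifying dynamic ASL gestures from key-point sequences but gives no architectural detail. As a rough illustration of that idea only, here is a minimal PyTorch sketch; the feature size, hidden size, layer count, class count, and clip length are all assumptions for demonstration, not the authors' configuration:

```python
import torch
import torch.nn as nn

class StackedLSTMGestureClassifier(nn.Module):
    """Classify a dynamic gesture from a sequence of per-frame
    hand/facial key-point vectors (all sizes are illustrative)."""
    def __init__(self, num_features=126, hidden_size=128,
                 num_layers=2, num_classes=10):
        super().__init__()
        # num_layers > 1 stacks recurrent layers, i.e. a "stacked" LSTM.
        self.lstm = nn.LSTM(num_features, hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # x: (batch, frames, features) -> final hidden state of top layer
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])  # class logits, shape (batch, num_classes)

model = StackedLSTMGestureClassifier()
clips = torch.randn(4, 30, 126)   # 4 clips, 30 frames, 126 key-point features
logits = model(clips)
print(logits.shape)               # torch.Size([4, 10])
```

In this sketch the last hidden state of the top LSTM layer summarizes the whole gesture clip before classification; per-frame outputs could be pooled instead, and the paper does not specify which variant is used.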

Keywords: sign language recognition system; ASL; dynamic gestures; facial key points; CNN; LSTM; ResNet

Classification: TP391.4 (Automation and Computer Technology: Computer Application Technology)

 
