Authors: Yuewei Wu, Ruiling Fu, Tongtong Xing, Fulian Yin
Affiliations: [1] College of Information and Communication Engineering, Communication University of China, Beijing 100024, China; [2] State Key Laboratory of Media Convergence and Communication, Communication University of China, Beijing 100024, China
Source: Computers, Materials & Continua, 2025, Issue 1, pp. 751-775 (25 pages)
Funding: Supported by the National Key Research and Development Program of China (Grant numbers: 2021YFF0901705, 2021YFF0901700); the State Key Laboratory of Media Convergence and Communication, Communication University of China; the Fundamental Research Funds for the Central Universities; and the High-Quality and Cutting-Edge Disciplines Construction Project for Universities in Beijing (Internet Information, Communication University of China).
Abstract: In the Internet era, recommendation systems play a crucial role in helping users find relevant information in large datasets. Class imbalance is known to severely degrade data quality and therefore the performance of recommendation systems. Under imbalance, machine learning algorithms tend to classify every input into the positive (majority) class in order to achieve high prediction accuracy. Imbalance can be categorized by type, such as feature imbalance and class imbalance, but most studies consider only class imbalance. In this paper, we propose a recommendation system that integrates multiple networks to adapt to a large number of imbalanced features and handles highly skewed, imbalanced datasets through a loss function. To address feature imbalance, we propose a loss-aware feature attention mechanism (LAFAM). The network incorporates an attention mechanism and uses multiple sub-networks to classify and learn features. For better results, the network learns the weights of the sub-networks and assigns higher weights to important features. To address class imbalance, we propose a suppression loss, which favors the negative-class loss by penalizing the positive-class loss and pays more attention to sample points near the decision boundary. Experiments on two large-scale datasets verify that the proposed system substantially outperforms baseline methods.
Keywords: imbalanced data; deep learning; e-commerce recommendation; loss function; big data analysis