Affiliations: [1] School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044, China; [2] Beijing Key Lab of Traffic Data Analysis and Mining, Beijing Jiaotong University, Beijing 100044, China
Source: Science China (Information Sciences), 2024, Issue 8, pp. 111-125 (15 pages)
Funding: Partly supported by the National Natural Science Foundation of China (Grant No. 62176020); the National Key Research and Development Program of China (Grant No. 2020AAA0106800); the Joint Foundation of the Ministry of Education (Grant No. 8091B042235); the Beijing Natural Science Foundation (Grant No. L211016); the Fundamental Research Funds for the Central Universities (Grant No. 2019JBZ110); and the Chinese Academy of Sciences (Grant No. OEIP-O-202004).
Abstract: Counterfactual explanations explain a decision by exploring how changes in a cause alter its effect. They have attracted significant attention in recommender system research as a way to probe how changes in certain properties affect the recommendation mechanism. Among counterfactual recommendation methods, item-based counterfactual explanation methods have drawn considerable interest because of their flexibility. Their core idea is to find a minimal subset of interacted items (i.e., short length) such that the recommended item would topple out of the top-K recommendation list once these items were removed from the user's interactions (i.e., good quality). Usually, explanations are generated by ranking a precomputed importance of items, which fails to characterize the true importance of interacted items because the importance computation is separated from explanation generation. Additionally, the final explanations are produced by a fixed search strategy over the precomputed importance, so the quality and length of counterfactual explanations are deterministic; therefore, they cannot be balanced once the search strategy is fixed. To overcome these obstacles, this study proposes learning-based counterfactual explanations for recommendation (LCER), which provides counterfactual explanations for personalized recommendations by jointly modeling the factual and counterfactual preferences. To achieve consistency between the computation of importance and the generation of counterfactual explanations, LCER endows each interacted item with an optimizable importance, which is supervised by the counterfactual-explanation objective to guarantee its credibility. Owing to the model's flexibility, the trade-off between quality and length can be customized by setting different proportions.
The experimental results on four real-world datasets demonstrate the effectiveness of the proposed LCER over several state-of-the-art baselines, both quantitatively and qualitatively.
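The objective described in the abstract, learning a per-item removal importance that flips the recommendation while staying sparse, can be illustrated with a minimal sketch. This is not the paper's actual model: the similarity scores, threshold, margin, loss, and gradient update below are all illustrative assumptions, using a toy linear scorer where each interacted item contributes a fixed amount to the target item's score.

```python
import numpy as np

# Hypothetical toy setup (not the paper's model): sim[j] is the contribution
# of interacted item j to the recommended (target) item's score, and
# "threshold" stands in for the score of the K-th ranked candidate.
sim = np.array([0.9, 0.1, 0.6, 0.05, 0.3])
threshold = 1.2    # target must score below this to leave the top-K list
margin = 0.05      # small margin so the counterfactual flip is decisive
lam = 0.1          # quality/length trade-off: larger lam -> shorter explanation

delta = np.zeros_like(sim)  # learnable removal mask, one importance per item
lr = 0.05
for _ in range(500):
    score = np.sum((1.0 - delta) * sim)  # target score after soft removal
    # hinge-style counterfactual loss + L1 sparsity on the mask:
    # push the score below (threshold - margin), but remove as little as possible
    grad = (-sim if score > threshold - margin else 0.0) + lam
    delta = np.clip(delta - lr * grad, 0.0, 1.0)

explanation = np.where(delta > 0.5)[0]  # binarize the learned importance
cf_score = np.sum(sim) - np.sum(sim[explanation])
```

Because the importance weights are optimized directly against the counterfactual goal, rather than precomputed and then searched over, adjusting `lam` trades explanation length against flip quality, which is the flexibility the abstract attributes to LCER.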
Keywords: recommender system; explainable recommendation; item-based explanation; counterfactual inference; counterfactual explanation