Source: Frontiers of Computer Science, 2014, Issue 5, pp. 785-792 (8 pages). Frontiers of Computer Science in China (English edition).
Funding: This work was supported in part by the National Natural Science Foundation of China (Grant No. 61170151) and the Natural Science Foundation of Jiangsu Province (BK2011728), and was sponsored by the QingLan Project and the Fundamental Research Funds for the Central Universities (NZ2013306).
Abstract: Linear discriminant analysis (LDA) is one of the most popular supervised dimensionality reduction (DR) techniques and obtains discriminant projections by maximizing the ratio of average-case between-class scatter to average-case within-class scatter. Two recent discriminant analysis algorithms (DAs), minimal distance maximization (MDM) and worst-case LDA (WLDA), get projections by optimizing worst-case scatters. In this paper, we develop a new LDA framework called LDA with worst between-class separation and average within-class compactness (WSAC) by maximizing the ratio of worst-case between-class scatter to average-case within-class scatter. This can be achieved by relaxing the trace ratio optimization to a distance metric learning problem. Comparative experiments demonstrate its effectiveness. In addition, DA counterparts using the local geometry of data and the kernel trick can likewise be embedded into our framework and be solved in the same way.
Keywords: dimensionality reduction; linear discriminant analysis; the worst separation; the average compactness
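For context, the classical average-case trace-ratio criterion that the abstract builds on can be sketched as below. This is a minimal NumPy illustration of standard LDA (scatter matrices plus a generalized eigenproblem), not the paper's WSAC relaxation; the function name and solver choice are assumptions for illustration.

```python
import numpy as np

def lda_projection(X, y, n_components=1):
    """Classical LDA: find W maximizing between-class scatter
    relative to within-class scatter (average-case trace ratio)."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # average-case within-class scatter
    Sb = np.zeros((d, d))  # average-case between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all).reshape(-1, 1)
        Sb += Xc.shape[0] * (diff @ diff.T)
    # Solve the generalized eigenproblem Sb w = lambda Sw w
    # (pinv guards against a singular within-class scatter).
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs.real[:, order[:n_components]]

# Two well-separated 2-D classes; the learned direction
# separates their projected means.
X = np.array([[0, 0], [1, 0], [0, 1], [1, 1],
              [5, 5], [6, 5], [5, 6], [6, 6]], dtype=float)
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
W = lda_projection(X, y)
```

WSAC replaces the between-class term above with the worst (smallest) pairwise between-class scatter and relaxes the resulting trace ratio into a distance metric learning problem, as described in the abstract.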