Author affiliations: [1] College of Computer Science, Beijing University of Technology; [2] China Electronics Standardization Institute; [3] CSIP Guangxi Section, Guilin University of Electronic Technology
Source: Chinese Journal of Electronics (Acta Electronica Sinica, English Edition), 2014, Issue 2, pp. 315-321 (7 pages)
Funding: Supported by the National Natural Science Foundation of China (No. 61001178, No. 61172053, No. 61202266); the National Soft Science Research Program (No. 2010GXQ5D317); the Beijing Natural Science Foundation (No. 4102012, No. 4112009); the Scientific Research Common Program of Beijing Municipal Commission of Education (No. KM201210005024); and the National High Technology Research and Development Program of China (863 Program) (No. 2012AA011706)
Abstract: With the rapid development of information technology, short texts arising from socialized human interaction are gradually becoming predominant in network information streams. Accelerating demand requires the industry to provide more effective classification of these brief texts. However, faced with short text documents, each of which contains only a few words, traditional document classification models run into difficulty. Aggressive document expansion works remarkably well in many cases but suffers from the assumption of independent, identically distributed observations. We formalize a view of classification using Bayesian decision theory, treat each short text as observations from a probabilistic model, called a statistical language model, and encode classification preferences with a loss function defined by the language models and the external reference document. Following Vapnik's method of structural risk minimization (SRM), the optimal classification action is the one that minimizes the structural risk, which allows one to trade off errors on the training sample against improved generalization performance. We conduct experiments on several corpora of microblog-like data and analyze the results. With respect to established baselines, the experiments show that applying our proposed document expansion method offers a better chance of achieving improved classification performance.
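For orientation only, the decision-theoretic view described in the abstract can be sketched with the standard Bayes decision rule over class language models; the loss function $L$, the interpolation weight $\lambda$, and the reference-document model $\theta_r$ below are illustrative notation under common language-modeling assumptions, not necessarily the paper's exact definitions:

\[
c^{*} = \arg\min_{a \in \mathcal{C}} \sum_{c \in \mathcal{C}} L(a, c)\, P(c \mid d),
\qquad
P(c \mid d) \propto P(c) \prod_{w \in d} P(w \mid \theta_c),
\]

where document expansion with an external reference document $r$ can be read as interpolating the short text's language model with the reference model:

\[
P(w \mid \theta_{d'}) = \lambda\, P(w \mid \theta_d) + (1 - \lambda)\, P(w \mid \theta_r),
\qquad 0 \le \lambda \le 1 .
\]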
Keywords: Text classification; Short texts; Language model; Document expansion; External reference
Classification code: TP391.1 [Automation and Computer Technology: Computer Application Technology]