Authors: Dongjie Zhu, Haiwen Du, Yundong Sun, Xiaofang Li, Rongning Qu, Hao Hu, Shuangshuang Dong, Helen Min Zhou, Ning Cao
Affiliations: [1] School of Computer Science and Technology, Harbin Institute of Technology, Weihai 264209, China; [2] School of Science, Harbin Institute of Technology, Weihai 264209, China; [3] School of Engineering, Manukau Institute of Technology, Auckland 2241, New Zealand; [4] College of Mathematics and Computer Science, Xinyu University, Xinyu 338004, China; [5] College of Information Engineering, Sanming University, Sanming 365004, China; [6] School of Astronautics, Harbin Institute of Technology, Harbin 150001, China
Source: Computers, Materials & Continua, 2020, No. 5, pp. 979-993 (15 pages)
Funding: This work was supported by the Fundamental Research Funds for the Central Universities (Grant No. HIT.NSRIF.201714), the Weihai Science and Technology Development Program (2016DXGJMS15), and the Key Research and Development Program of Shandong Province (2017GGX90103).
Abstract: In distributed storage systems, file access efficiency has an important impact on the real-time nature of information forensics. As a popular approach to improving file access efficiency, a prefetching model fetches data before it is needed, according to the file access pattern, which reduces I/O waiting time and increases system concurrency. However, a prefetching model needs to mine the degree of association between files to ensure prefetching accuracy. When the system holds massive numbers of small files, the sheer volume of files poses a challenge to the efficiency and accuracy of relevance mining. In this paper, we propose a prefetching model for massive files, based on an LSTM neural network with a cache transaction strategy, to improve file access efficiency. First, we propose a file clustering algorithm based on temporal locality and spatial locality to reduce computational complexity. Second, we define cache transactions according to the occurrence of files in the cache, instead of using time-offset-distance-based methods, so that file block features can be extracted accurately. Last, we propose a file access prediction algorithm based on an LSTM neural network, which predicts the files that are most likely to be accessed. Experiments show that, compared with the traditional LRU and plain grouping methods, the proposed model notably increases the cache hit rate and effectively reduces I/O wait time.
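This record is abstract-only; as a concrete illustration of the prediction step it describes, the following is a minimal sketch (not the authors' code) of an LSTM network that scores which file is likely to be accessed next, given a window of recent accesses treated as a cache transaction. The class name FileAccessLSTM, the choice of PyTorch, and all hyperparameters (num_files, embed_dim, hidden_dim, window length, top-k) are illustrative assumptions.

```python
# Minimal, hypothetical sketch of the LSTM-based file access prediction
# step from the abstract (PyTorch). Not the authors' implementation; all
# names and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

class FileAccessLSTM(nn.Module):
    """Scores every file as the candidate next access, given a window of
    recent file-access IDs (one 'cache transaction' in the paper's terms)."""
    def __init__(self, num_files: int, embed_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(num_files, embed_dim)   # file ID -> vector
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_files)      # per-file scores

    def forward(self, access_seq: torch.Tensor) -> torch.Tensor:
        # access_seq: (batch, window) integer file IDs
        x = self.embed(access_seq)        # (batch, window, embed_dim)
        out, _ = self.lstm(x)             # (batch, window, hidden_dim)
        return self.head(out[:, -1, :])   # score from the last time step

# Usage sketch: rank files by predicted score and prefetch the top-k.
model = FileAccessLSTM(num_files=10_000)
recent = torch.randint(0, 10_000, (1, 8))   # last 8 accesses in the cache
with torch.no_grad():
    scores = model(recent)
prefetch_candidates = scores.topk(k=4).indices  # file IDs to prefetch
```

A prefetcher would then load the top-scored files into the cache ahead of demand, which is how a higher hit rate translates into the lower I/O wait time reported in the abstract.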
Keywords: Massive files prefetching model; cache transaction; distributed storage systems; LSTM neural network
Classification: TP3 [Automation and Computer Technology - Computer Science and Technology]