DATA DE-REDUNDANCY SCHEME IN FOG STORAGE BASED ON DATA DE-DUPLICATION TECHNOLOGY (Cited by: 4)


Authors: Chen Sijia; Wen Mi; Chen Shan (College of Computer Science and Technology, Shanghai University of Electric Power, Shanghai 200090, China)


Source: Computer Applications and Software, 2020, No. 2, pp. 18-24, 98 (8 pages)

Funding: National Natural Science Foundation of China (61872230, 61572311).

Abstract: As an extension of the cloud data center at the edge of the network, fog computing directly stores and processes data that does not need to be placed in the cloud, so as to respond quickly to the needs of bottom-level devices. To reduce the frequent disk input/output (I/O) of existing schemes and address the redundancy of data stored on fog nodes, a de-duplication scheme (DeFog) is proposed. Using the fast lookup mechanism of the red-black tree, a data fingerprint table is built in memory, and an index table is obtained through a secondary hash. The in-memory fingerprint table is flushed to disk at fixed intervals, and a log file records every data update; when the system crashes and the machine restarts, the fingerprint table on disk is merged with the log file to reconstruct the updated fingerprint table. Experiments on standard datasets and comparisons with other schemes show that DeFog improves query efficiency by 54.1% and reduces running time by 42.1%.
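The mechanism the abstract describes (an in-memory fingerprint table consulted before storing a block, an append-only log of updates, a periodic flush to disk, and a snapshot-plus-log merge on restart) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: a plain dict stands in for the red-black tree (Python has no built-in one), SHA-1 is assumed as the fingerprint function, and the file names and `put`/`flush` API are hypothetical.

```python
import hashlib
import json
import os


class FingerprintTable:
    """Sketch of a DeFog-style de-duplication index.

    The paper keeps fingerprints in an in-memory red-black tree; here a
    plain dict stands in. New entries are appended to a log file, and the
    whole table is periodically snapshotted to disk. After a crash, the
    snapshot is merged with the log to rebuild the up-to-date table.
    """

    def __init__(self, snapshot_path="fp_snapshot.json", log_path="fp_log.jsonl"):
        self.snapshot_path = snapshot_path  # hypothetical file names
        self.log_path = log_path
        self.table = {}  # fingerprint -> stored-block id
        self._recover()

    def _recover(self):
        # Crash recovery: load the last snapshot, then replay the log
        # so updates made since the last flush are not lost.
        if os.path.exists(self.snapshot_path):
            with open(self.snapshot_path) as f:
                self.table = json.load(f)
        if os.path.exists(self.log_path):
            with open(self.log_path) as f:
                for line in f:
                    fp, block_id = json.loads(line)
                    self.table[fp] = block_id

    def put(self, data: bytes) -> tuple[str, bool]:
        """Return (fingerprint, is_duplicate); log each new entry."""
        fp = hashlib.sha1(data).hexdigest()
        if fp in self.table:
            return fp, True  # duplicate block: do not store it again
        block_id = f"blk-{len(self.table)}"
        self.table[fp] = block_id
        with open(self.log_path, "a") as f:
            f.write(json.dumps([fp, block_id]) + "\n")
        return fp, False

    def flush(self):
        # Fixed-interval flush: persist the table and truncate the log,
        # trading one sequential write for many small random I/Os.
        with open(self.snapshot_path, "w") as f:
            json.dump(self.table, f)
        open(self.log_path, "w").close()
```

A caller would invoke `put()` per incoming block and skip storing the payload when `is_duplicate` is true; batching index persistence into `flush()` is what cuts the frequent disk I/O the paper targets.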

Keywords: data de-duplication; fog computing; data redundancy; red-black tree; index table; memory; I/O optimization

Classification: TP3 (Automation and Computer Technology: Computer Science and Technology)

 
