Design of log cleaning system under Spark platform  (Cited by: 2)


Authors: LI Guang-ming [1]; LI Yao-zhou; LI Qi [1] (College of Electronic Information and Artificial Intelligence, Shaanxi University of Science and Technology, Xi'an 710021, China)

Affiliation: [1] College of Electronic Information and Artificial Intelligence, Shaanxi University of Science and Technology, Xi'an 710021, China

Source: Computer Engineering and Design, 2020, Issue 12, pp. 3580-3587 (8 pages)

Funding: Agricultural Science and Technology Research Project of the Science and Technology Department of Shaanxi Province (2015NY028).

Abstract: To solve the problems of slow computation, excessive disk I/O consumption, incomplete cleaning, and data skew that traditional log cleaning systems exhibit as data volume grows, a log cleaning system design based on Spark is proposed. The system is built with big data components including Hadoop, Flume, Kafka, Spark Streaming, and HBase. A decision object recognition algorithm quickly filters and deduplicates repeated records in the logs, and the join operation is optimized to avoid data skew. A cleaning module is implemented to improve data cleaning efficiency and thereby optimize the system. Experimental results show that, compared with a traditional cleaning system, the Spark-based log cleaning system substantially improves log cleaning speed and accuracy, and its performance is more stable.
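The paper's join optimization itself is not reproduced in this record. A common Spark technique for avoiding join skew, which the abstract's claim is consistent with, is key salting: a hot key on the large table is split into several salted sub-keys, and the small table is replicated once per salt value, so the hot key's records spread across multiple partitions. A minimal sketch of the idea in plain Python follows; the function names, the salt factor, and the sample data are illustrative assumptions, not the paper's code.

```python
import random
from collections import defaultdict

SALT_FACTOR = 4  # illustrative; real Spark jobs tune this per hot key


def salt_large_side(records, salt_factor=SALT_FACTOR):
    """Append a random salt to each key of the large (skewed) table,
    so one hot key becomes up to salt_factor distinct composite keys."""
    return [((key, random.randrange(salt_factor)), value)
            for key, value in records]


def replicate_small_side(records, salt_factor=SALT_FACTOR):
    """Replicate every row of the small table once per salt value,
    so each salted key on the large side still finds its match."""
    return [((key, s), value)
            for key, value in records
            for s in range(salt_factor)]


def hash_join(left, right):
    """Plain hash join on the (key, salt) composite keys;
    emits (original_key, (left_value, right_value)) pairs."""
    table = defaultdict(list)
    for k, v in right:
        table[k].append(v)
    return [(k[0], (lv, rv)) for k, lv in left for rv in table[k]]


# A skewed log table: 'user_1' dominates the key distribution.
logs = [("user_1", f"event_{i}") for i in range(6)] + [("user_2", "event_x")]
users = [("user_1", "Alice"), ("user_2", "Bob")]

joined = hash_join(salt_large_side(logs), replicate_small_side(users))
```

Each log row matches its user exactly once, so the join result is identical to an unsalted join; the gain in a distributed setting is that the six `user_1` rows no longer hash to a single partition.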

Keywords: data cleaning; data skew; decision object recognition algorithm; big data components; Spark

Classification code: TP311 [Automation and Computer Technology — Computer Software and Theory]

 
