Efficient Dictionary-Based Compression/Decompression Techniques Using GPU


Authors: 覃子姗, 顾璠 [1,2], 秦晓科, 陈铭松 [1,2]

Affiliations: [1] Software Engineering Institute, East China Normal University, Shanghai 200062, China; [2] Shanghai Key Laboratory of Trustworthy Computing, East China Normal University, Shanghai 200062, China; [3] NVIDIA, Orlando, FL 32826, USA

Source: Journal of Frontiers of Computer Science and Technology (《计算机科学与探索》), 2014, No. 5, pp. 525-536 (12 pages)

Funding: National Natural Science Foundation of China Young Scientists Fund under Grant No. 61202103; Specialized Research Fund for the Doctoral Program of Higher Education of China under Grant No. 20110076120025; Open Project of the MoE Engineering Research Center of Software/Hardware Co-Design Technology and Application under Grant No. 2013001

Abstract: Compression techniques are widely used in data storage and transmission. However, due to their inherently sequential nature, most existing dictionary-based compression/decompression algorithms are designed for sequential execution on CPUs. To explore the potential performance gains of running compression and decompression on a graphics processing unit (GPU), this paper combines coalesced memory access with parallel assembling and studies two parallel dictionary-based methods on the CUDA (compute unified device architecture) platform: dictionary-based stateless compression/decompression and dictionary-based LZW compression/decompression. Experimental results show that, compared with traditional single-core sequential implementations, the two proposed approaches significantly improve the performance of existing sequential dictionary-based compression/decompression algorithms.
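Note: the abstract provides no source listing; the following is a minimal CUDA sketch, added purely for illustration, of the kind of stateless dictionary-based scheme it describes. The 256-entry dictionary, the 4-bit code width, and all identifiers (statelessCompress, d_dict, and so on) are assumptions made for this example, not the authors' implementation.

// Minimal sketch (not the paper's code): stateless dictionary-based compression on the GPU.
// Assumes a fixed 256-entry dictionary mapping each input byte to a 4-bit code, so every
// thread can compute its output position independently.
#include <cstdio>
#include <cstdint>
#include <cuda_runtime.h>

__constant__ uint8_t d_dict[256];   // byte -> 4-bit code, kept in fast constant memory

// Each thread compresses two adjacent input bytes into one packed output byte.
// Adjacent threads read adjacent addresses, so the global loads coalesce.
__global__ void statelessCompress(const uint8_t *in, uint8_t *out, size_t nPairs)
{
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i >= nPairs) return;

    uint8_t hi = d_dict[in[2 * i]];      // code for the first byte of the pair
    uint8_t lo = d_dict[in[2 * i + 1]];  // code for the second byte of the pair
    out[i] = (uint8_t)((hi << 4) | lo);  // parallel assembling of the packed output
}

int main()
{
    const size_t N = 1 << 20;            // 1 MiB of demo input (even length)
    const size_t nPairs = N / 2;

    // Toy dictionary: keep only the low nibble of each byte. A real dictionary
    // would be built from the symbol statistics of the data set.
    uint8_t h_dict[256];
    for (int s = 0; s < 256; ++s) h_dict[s] = (uint8_t)(s & 0x0F);

    uint8_t *h_in = new uint8_t[N];
    for (size_t i = 0; i < N; ++i) h_in[i] = (uint8_t)(i % 16);

    uint8_t *d_in = nullptr, *d_out = nullptr;
    cudaMalloc((void **)&d_in, N);
    cudaMalloc((void **)&d_out, nPairs);
    cudaMemcpyToSymbol(d_dict, h_dict, sizeof(h_dict));
    cudaMemcpy(d_in, h_in, N, cudaMemcpyHostToDevice);

    const int block = 256;
    const int grid = (int)((nPairs + block - 1) / block);
    statelessCompress<<<grid, block>>>(d_in, d_out, nPairs);
    cudaDeviceSynchronize();

    printf("compressed %zu input bytes into %zu output bytes\n", N, nPairs);

    cudaFree(d_in);
    cudaFree(d_out);
    delete[] h_in;
    return 0;
}

Because a stateless scheme uses fixed-width codes, every thread can compute its output offset independently and neighbouring threads touch neighbouring bytes, which is what makes coalesced loads and parallel assembling straightforward; LZW, the second method studied in the paper, builds its dictionary incrementally while encoding and therefore requires a different parallelization strategy.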

Keywords: graphics processing unit (GPU); compute unified device architecture (CUDA); dictionary-based compression/decompression

Classification: TP39 [Automation and Computer Technology: Computer Application Technology]

 
