GPU-Oriented Memory Management and its Application (cited by: 1)

GPU-Oriented Memory Management and its Application


Authors: Xu Yandong [1], Hua Bei [1]

Affiliation: [1] School of Computer Science and Technology, University of Science and Technology of China, Hefei, Anhui 230027, China

Source: Electronic Technology (Shanghai), 2017, No. 7, pp. 86-90, 83 (6 pages)

Abstract: As GPUs continue to improve in computing power, memory bandwidth, and device memory capacity, it is becoming feasible to use the GPU as an independent data storage node. Dynamic memory management is an essential function of a data storage node, but the large number of concurrent threads and the Single-Instruction Multiple-Data (SIMD) execution model cause dynamic memory allocation on the GPU to suffer from high contention and severe thread blocking. Targeting the architectural features of the GPU and the device-memory-management requirements of data storage applications, this paper designs and implements a GPU device memory allocator on NVIDIA GPUs that effectively reduces allocation contention and speeds up allocation. Based on this allocator, the paper also ports a CPU lock-free hash table implementation to the GPU to accelerate index operations on the GPU. Experiments show that the proposed GPU memory allocator and lock-free hash table achieve good performance.
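The abstract does not detail the allocator's design, but a common way to reduce allocation contention among many concurrent threads is to partition the heap into per-thread sub-pools: each thread bump-allocates from a private chunk and only touches a shared atomic counter when refilling. The sketch below is a minimal CPU-side illustration of that general technique (class names, chunk sizes, and the bump-pointer scheme are illustrative assumptions, not the paper's actual design):

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Shared arena: a single bump pointer advanced with an atomic fetch_add.
// Contention is reduced by handing out large chunks to per-thread pools,
// so the shared atomic is touched once per chunk, not once per allocation.
class Arena {
public:
    explicit Arena(size_t bytes) : buf_(bytes), next_(0) {}
    // Reserve a contiguous chunk; returns nullptr when the arena is exhausted.
    void* reserve(size_t bytes) {
        size_t off = next_.fetch_add(bytes, std::memory_order_relaxed);
        return off + bytes <= buf_.size() ? buf_.data() + off : nullptr;
    }
private:
    std::vector<uint8_t> buf_;
    std::atomic<size_t> next_;
};

// Per-thread sub-pool: bump allocation with no atomics at all; it refills
// from the shared arena only when its local chunk runs out.
class LocalPool {
public:
    LocalPool(Arena& arena, size_t chunk) : arena_(arena), chunk_(chunk) {}
    void* alloc(size_t bytes) {
        if (cur_ == nullptr || cur_ + bytes > end_) {   // local chunk exhausted
            cur_ = static_cast<uint8_t*>(arena_.reserve(chunk_));
            if (!cur_) return nullptr;                  // arena exhausted too
            end_ = cur_ + chunk_;
        }
        void* p = cur_;
        cur_ += bytes;
        return p;
    }
private:
    Arena& arena_;
    size_t chunk_;
    uint8_t* cur_ = nullptr;
    uint8_t* end_ = nullptr;
};
```

On a GPU the same idea is typically applied per warp or per thread block rather than per thread, with the shared counter updated via atomicAdd in device memory.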
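The lock-free hash table the paper ports to the GPU is not specified in the abstract, but the core lock-free technique in such tables is claiming a slot with compare-and-swap instead of taking a lock. The following is a minimal CPU-side sketch of that idea using an open-addressing table with linear probing (the capacity, hash function, and zero-as-empty key encoding are illustrative assumptions); a CUDA version would replace the CAS with atomicCAS on device memory:

```cpp
#include <array>
#include <atomic>
#include <cassert>
#include <cstddef>
#include <cstdint>

// Fixed-size open-addressing hash table with lock-free insert/find.
// A slot is claimed by CAS-ing its key from EMPTY to the new key; a thread
// that loses the race re-reads the slot and keeps probing. Keys must be
// non-zero because 0 marks an empty slot.
constexpr uint64_t EMPTY = 0;
constexpr size_t CAPACITY = 1024;   // power of two, illustrative

struct Slot {
    std::atomic<uint64_t> key{EMPTY};
    std::atomic<uint64_t> value{0};
};

std::array<Slot, CAPACITY> table;

size_t hash_of(uint64_t k) { return (k * 0x9E3779B97F4A7C15ull) & (CAPACITY - 1); }

// Insert (or update) key -> value; returns false if the table is full.
bool insert(uint64_t key, uint64_t value) {
    size_t idx = hash_of(key);
    for (size_t i = 0; i < CAPACITY; ++i, idx = (idx + 1) & (CAPACITY - 1)) {
        uint64_t seen = table[idx].key.load(std::memory_order_acquire);
        if (seen == EMPTY) {
            // Try to claim the empty slot; if the CAS fails another thread
            // won the race, so check whether it claimed our key instead.
            if (table[idx].key.compare_exchange_strong(
                    seen, key, std::memory_order_acq_rel))
                seen = key;
        }
        if (seen == key) {
            table[idx].value.store(value, std::memory_order_release);
            return true;
        }
    }
    return false;   // probed every slot: table full
}

// Look up key; returns false if absent.
bool find(uint64_t key, uint64_t& out) {
    size_t idx = hash_of(key);
    for (size_t i = 0; i < CAPACITY; ++i, idx = (idx + 1) & (CAPACITY - 1)) {
        uint64_t seen = table[idx].key.load(std::memory_order_acquire);
        if (seen == EMPTY) return false;   // probe chain ends: never inserted
        if (seen == key) {
            out = table[idx].value.load(std::memory_order_acquire);
            return true;
        }
    }
    return false;
}
```

On a GPU, threads in a warp executing this loop in SIMD lockstep would all issue their CAS attempts together, which is exactly the contention scenario the paper's allocator and table design aim to mitigate.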

Keywords: general-purpose GPU computing (GPGPU); memory management; lock-free programming

CLC Classification: TP311.56 [Automation & Computer Technology — Computer Software and Theory]

 
