Affiliation: [1] National Research Center of Parallel Computer Engineering and Technology, Beijing 100080, China
Source: Computer Science (《计算机科学》), 2013, No. 3, pp. 104-106, 120 (4 pages)
Abstract: InfiniBand is one of the mainstream interconnect networks for HPC systems. Its reliable connection (RC) transport service supports RDMA, atomic operations, and related features, and is therefore widely used by MPI and other parallel programming models. However, the memory required for the message queues and buffers that back reliable connections grows sharply as the scale of parallelism increases, which in turn limits application scale. To address this memory-driven scalability problem, this paper first introduces two InfiniBand transport optimizations, the shared receive queue (SRQ) and the extended reliable connection (XRC), and then proposes a group connection technique based on the parallel communication model. Together these techniques reduce per-node memory overhead by about two orders of magnitude, and the overhead no longer grows noticeably as the scale of parallelism increases.
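The memory-scaling argument in the abstract comes down to how receive buffers are provisioned: with plain reliable connections, every peer QP posts its own receive buffers, so per-node memory grows with the number of peers, whereas a shared receive queue lets all QPs draw from one common buffer pool. The listing below is a minimal sketch of that idea using the standard libibverbs API, not the paper's own implementation; the device choice, queue depths, and the NUM_PEERS count are illustrative assumptions, and the QPs would still need the usual connection setup (state transitions and address exchange) before they could carry traffic.

```c
/* Sketch: many RC queue pairs sharing one SRQ so receive-buffer memory
 * does not scale with the number of peer connections. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
    if (!dev_list || num_devices == 0) {
        fprintf(stderr, "no InfiniBand devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(dev_list[0]);
    struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;
    if (!pd) {
        fprintf(stderr, "failed to open device or allocate PD\n");
        return 1;
    }

    /* One SRQ holds the receive buffers for all connections on this node. */
    struct ibv_srq_init_attr srq_attr = {
        .attr = { .max_wr = 4096, .max_sge = 1 }   /* illustrative depth */
    };
    struct ibv_srq *srq = ibv_create_srq(pd, &srq_attr);

    struct ibv_cq *cq = ibv_create_cq(ctx, 4096, NULL, NULL, 0);

    /* Each peer still gets its own RC queue pair, but qp_init_attr.srq points
     * every QP at the shared receive queue, so per-connection receive
     * buffering no longer grows with the number of peers. */
    enum { NUM_PEERS = 8 };                        /* illustrative count */
    struct ibv_qp *qps[NUM_PEERS];
    for (int i = 0; i < NUM_PEERS; i++) {
        struct ibv_qp_init_attr qp_attr = {
            .send_cq = cq,
            .recv_cq = cq,
            .srq     = srq,                        /* share receive buffers */
            .cap     = { .max_send_wr = 64, .max_send_sge = 1 },
            .qp_type = IBV_QPT_RC
        };
        qps[i] = ibv_create_qp(pd, &qp_attr);
    }

    printf("created %d RC QPs sharing one SRQ\n", NUM_PEERS);

    for (int i = 0; i < NUM_PEERS; i++)
        ibv_destroy_qp(qps[i]);
    ibv_destroy_cq(cq);
    ibv_destroy_srq(srq);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(dev_list);
    return 0;
}
```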
Keywords: scalability; shared receive queue; group connection; InfiniBand
Classification: TP393 [Automation and Computer Technology / Computer Application Technology]