Project supported by the National Natural Science Foundation of China (Nos. 61272141, 61120106005, and 61303068) and the National High-Tech R&D Program of China (No. 2012AA01A301)
As the scale of supercomputers grows rapidly, reliability increasingly dominates system availability. Existing fault tolerance mechanisms, such as periodic checkpointing and process redundancy, cannot effectively f...
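As an illustration of the first mechanism mentioned above, the sketch below shows application-level periodic checkpointing in its simplest form: the program saves its state every fixed number of iterations and resumes from the last checkpoint after a restart. The checkpoint file name, interval, and loop body are assumptions for illustration, not details taken from the paper.

```python
import os
import pickle

CKPT_PATH = "state.ckpt"   # hypothetical checkpoint file
CKPT_INTERVAL = 100        # assumed checkpoint period, in iterations

def load_checkpoint():
    """Resume from the last saved state, or start fresh."""
    if os.path.exists(CKPT_PATH):
        with open(CKPT_PATH, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "result": 0.0}

def save_checkpoint(state):
    """Write the state atomically so a crash cannot leave a corrupt checkpoint."""
    tmp = CKPT_PATH + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CKPT_PATH)

state = load_checkpoint()
for step in range(state["step"], 1000):
    state["result"] += step * 0.5      # stand-in for the real computation
    state["step"] = step + 1
    if state["step"] % CKPT_INTERVAL == 0:
        save_checkpoint(state)         # periodic checkpoint
```

The cost of this scheme is the periodic I/O of writing the full state, which is exactly what makes naive checkpointing hard to scale on very large systems.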
Project supported by the National Natural Science Foundation of China (No. 61170083) and the Specialized Research Fund for the Doctoral Program of Higher Education, China (No. 20114307110001)
As we approach the exascale era in supercomputing, designing a balanced computer system with powerful computing capability and low power requirements has become increasingly important. The graphics processing unit (...
Efficient data management is a key issue in large-scale distributed environments such as the data cloud. One way to address it is to replicate the data. Data replication reduces the time of servic...
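As a minimal sketch of how replication can shorten service time, the snippet below selects the lowest-latency replica for a read and applies a naive threshold rule for placing new replicas. The site names, latencies, and threshold are hypothetical and do not reflect the paper's actual strategy.

```python
# Hypothetical replica catalogue: site name -> estimated access latency (ms).
replicas = {
    "site-A": 12.5,
    "site-B": 48.0,
    "site-C": 7.3,
}

def select_replica(catalogue):
    """Serve a read from the replica with the lowest estimated latency."""
    return min(catalogue, key=catalogue.get)

def place_new_replica(catalogue, access_counts, threshold=100):
    """Naive placement rule: replicate to any site whose local access count
    exceeds a threshold and that does not yet hold a copy."""
    return [site for site, count in access_counts.items()
            if count > threshold and site not in catalogue]

print(select_replica(replicas))                        # -> site-C
print(place_new_replica(replicas, {"site-D": 250}))    # -> ['site-D']
```

Real replication strategies also weigh storage cost and consistency overhead against the latency gain; the sketch only captures the read-time benefit.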
In this exabyte-scale era, data grows at an exponential rate, which in turn generates a massive amount of metadata in the file system. Hadoop is the most widely used framework for dealing with big data. Due to thi...
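To give a sense of why file-system metadata becomes a bottleneck at this scale, the back-of-the-envelope estimate below applies the commonly cited rule of thumb of roughly 150 bytes of HDFS NameNode heap per namespace object (file, directory, or block). The file count and blocks-per-file figures are assumptions for illustration.

```python
# Rough rule of thumb: ~150 bytes of NameNode heap per namespace object.
BYTES_PER_OBJECT = 150

def namenode_heap_estimate(num_files, avg_blocks_per_file=1.5):
    """Rough heap needed to hold the HDFS namespace in memory, in GiB."""
    objects = num_files * (1 + avg_blocks_per_file)  # file entries + block entries
    return objects * BYTES_PER_OBJECT / 2**30

# e.g. 100 million files -> roughly 35 GiB of metadata kept in NameNode RAM
print(f"{namenode_heap_estimate(100_000_000):.1f} GiB")
```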