Author Affiliations: [1] Computer School, Wuhan University, Wuhan 430072, China [2] State Key Laboratory of Software Engineering, Wuhan University, Wuhan 430072, China [3] School of Computer and Information, Hefei University of Technology, Hefei 230000, China
Source: Chinese Journal of Electronics (电子学报英文版), 2017, Issue 3, pp. 460-467 (8 pages)
Funding: Supported by the National Natural Science Foundation of China (No. 91118003, No. 61170022, No. 61373039, No. 61402145, No. 61502346); the Natural Science Foundation of Hubei Province (No. 2015CFB338); the Natural Science Foundation of Anhui Province (No. 1508085QF138); the Science and Technology Project of Jiangxi Province Education Department (No. GJJ150605)
Abstract: Spin-torque transfer RAM (STT-RAM) is a promising candidate to replace SRAM for larger last-level caches (LLC). However, its long write latency and high write energy diminish the benefit of adopting STT-RAM caches. A common observation for the LLC is that a large number of cache blocks are never referenced again before they are evicted. The write operations for these blocks, which we call dead writes, can be eliminated without incurring subsequent cache misses. To address this issue, a quantitative scheme called Feedback learning based dead write termination (FLDWT) is proposed to improve the energy efficiency and performance of STT-RAM based LLCs. FLDWT dynamically learns block access behavior using data reuse distance and data access frequency, and then classifies blocks into dead blocks and live blocks. FLDWT terminates dead write block requests and improves estimation accuracy via feedback information. Compared with the STT-RAM baseline in the last-level cache, experimental results show that our scheme achieves an energy reduction of 44.6% and a performance improvement of 12% on average with negligible overhead.
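The abstract describes the mechanism only at a high level. The Python sketch below illustrates one plausible shape of a dead-write predictor driven by reuse distance and access frequency with a simple feedback rule; the class name, thresholds, and feedback adjustment are illustrative assumptions and do not reproduce the paper's actual FLDWT design.

```python
# Hypothetical sketch of the dead-write idea from the abstract: blocks whose
# observed reuse distance is large and whose access frequency is low are
# predicted "dead", so their LLC write requests can be skipped. Thresholds,
# names, and the feedback rule are assumptions, not the paper's FLDWT design.

from dataclasses import dataclass


@dataclass
class BlockStats:
    last_access: int = 0   # logical time of the most recent access
    accesses: int = 0      # how many times the block has been touched


class DeadWritePredictor:
    def __init__(self, reuse_threshold=256, freq_threshold=2):
        self.reuse_threshold = reuse_threshold
        self.freq_threshold = freq_threshold
        self.stats = {}    # block address -> BlockStats
        self.clock = 0     # logical access counter

    def on_access(self, addr):
        """Record an access to a block and update its statistics."""
        self.clock += 1
        s = self.stats.setdefault(addr, BlockStats())
        s.last_access = self.clock
        s.accesses += 1

    def predict_dead(self, addr):
        """Return True if a pending write for this block should be terminated."""
        s = self.stats.get(addr)
        if s is None:
            return False
        reuse_distance = self.clock - s.last_access
        return (reuse_distance > self.reuse_threshold
                and s.accesses < self.freq_threshold)

    def feedback(self, addr, was_reused):
        """Adjust the reuse threshold based on a past prediction's outcome
        (a crude stand-in for the paper's feedback learning)."""
        if was_reused:
            self.reuse_threshold += 32                      # predicted dead too eagerly
        else:
            self.reuse_threshold = max(32, self.reuse_threshold - 8)


# Example: feed a toy access trace, then query the predictor before write-back.
if __name__ == "__main__":
    p = DeadWritePredictor()
    trace = [0x100, 0x140, 0x100, 0x180] + [0x1000 + 64 * i for i in range(300)]
    for addr in trace:
        p.on_access(addr)
    print("0x180 predicted dead:", p.predict_dead(0x180))  # touched once, long ago
    print("0x100 predicted dead:", p.predict_dead(0x100))  # touched twice, kept live
```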
Keywords: Dead blocks; Energy efficiency; Spin-transfer torque RAM (STT-RAM); Last level cache (LLC)
CLC Number: TP333 [Automation and Computer Technology: Computer System Architecture]