System Developed for Optimizing Use of Caches


Guest_Jim_*


A fundamental part of every processor is its caches, where the cores keep the information they need to perform the operations assigned to them. Because these caches are on-chip, accessing them is very fast, but that does not mean there are not better ways to allocate the cache space among the cores. Researchers at MIT have developed a system called Jenga that finds the optimal distribution of not only the local caches but also DRAM for a CPU.

Two advantages to having caches built into a chip are that accessing the data in them is very fast and, compared to off-chip memory, takes little energy. This makes optimizing the use of a chip's caches rather valuable, but modern chips must be designed as a compromise between the capacity and latency needs of different programs. What Jenga does is measure the latency between each processor core and each cache it can access, and use that information to build a custom cache hierarchy. This hierarchy accounts for the differences between the levels of cache, such as the on-chip L1, L2, and L3 caches, and Jenga also makes measurements for using DRAM. Once the optimal cache level is determined, algorithms from an older system called Jigsaw are used to allocate the caches across the entire chip. Jenga builds on Jigsaw by considering cache level in its algorithms, while Jigsaw's algorithms for optimizing along the latency-capacity curve remain valid once the level is determined.
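To make the capacity-latency trade-off concrete, here is a minimal Python sketch of the general idea, not the researchers' actual implementation: for one core, given measured latencies to nearby cache banks and a toy miss-rate curve, greedily pick the set of banks that minimizes expected access latency before falling back to DRAM. All names and numbers below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Bank:
    name: str
    capacity_kb: int
    latency_cycles: float  # measured latency from this core to this bank

def miss_rate(capacity_kb: float, working_set_kb: float) -> float:
    """Toy miss-rate curve: misses fall as allocated capacity approaches the working set."""
    if capacity_kb <= 0:
        return 1.0
    return min(1.0, working_set_kb / (working_set_kb + capacity_kb))

def expected_latency(banks, working_set_kb, dram_latency):
    """Average access latency for a virtual cache built from `banks`, missing to DRAM."""
    if not banks:
        return dram_latency
    total_cap = sum(b.capacity_kb for b in banks)
    # Hits are served at the capacity-weighted average bank latency; misses go to DRAM.
    hit_latency = sum(b.capacity_kb * b.latency_cycles for b in banks) / total_cap
    m = miss_rate(total_cap, working_set_kb)
    return (1 - m) * hit_latency + m * dram_latency

def build_virtual_hierarchy(banks, working_set_kb, dram_latency):
    """Greedily add the closest banks while doing so lowers expected latency."""
    chosen = []
    best = expected_latency(chosen, working_set_kb, dram_latency)
    for bank in sorted(banks, key=lambda b: b.latency_cycles):
        candidate = chosen + [bank]
        lat = expected_latency(candidate, working_set_kb, dram_latency)
        if lat < best:
            chosen, best = candidate, lat
        else:
            break  # farther banks only add latency once capacity is sufficient
    return chosen

if __name__ == "__main__":
    banks = [Bank("L3-slice-0", 2048, 20),
             Bank("L3-slice-1", 2048, 34),
             Bank("L3-slice-2", 2048, 52)]
    chosen = build_virtual_hierarchy(banks, working_set_kb=3000, dram_latency=200)
    print([b.name for b in chosen])
```

In this toy version a far-away bank is only worth including if the extra capacity it provides saves more DRAM misses than its higher latency costs, which is the same kind of trade-off Jenga evaluates, per application, across real cache banks and DRAM.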

The researchers tested Jenga by simulating a 36-core system and found it could increase processing speed by 20-30% while reducing energy consumption by 30-85%. That is a fairly significant improvement for something that only changes how the hardware is used, and as more and more cores are added to CPUs, a system like this could become even more valuable.

Source: MIT


