LRU Caching

What Is LRU Caching? 

In computing, a caching algorithm is a way to optimize how a software program or hardware device stores instructions or data, so that the same instruction or data can be saved in a repository called a “cache” and reused. Caching improves performance by keeping recent or frequently used data in storage locations that are faster to access, or are less computationally expensive. When the cache is full, the algorithm must choose which items to discard to make room for new items.

Least recently used (LRU) is a caching algorithm. A basic caching mechanism discards the oldest items when the cache runs out of space, but this can be inefficient in some cases. LRU is a more sophisticated approach that discards the items that have gone the longest without being accessed.

LRU is typically implemented by keeping “age bits” on each item in the cache and using them to track which item is the least recently used. In this implementation, every time a cache item is accessed, the ages of all the other items change.
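In practice, many implementations track access order directly instead of maintaining per-item age counters. As a rough illustration (not taken from this article), Java's LinkedHashMap can be configured to order entries by access and evict the least recently used one; the SimpleLruCache name below is just an example:

import java.util.LinkedHashMap;
import java.util.Map;

// A minimal LRU cache built on java.util.LinkedHashMap. With accessOrder=true,
// the map keeps entries ordered from least to most recently accessed, and
// removeEldestEntry() evicts the least recently used entry once the cache
// grows past maxSize.
class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    SimpleLruCache(int maxSize) {
        super(16, 0.75f, true);   // true = order entries by access, not insertion
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize;  // evict once the capacity is exceeded
    }
}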

This is part of a series of articles about application security.

How LRU Caching Works 

An LRU cache uses a least recently used caching policy. A cache has a fixed size by design, limiting the number of elements it can store. Thus, developers must set up cache policies that decide how to delete items from the cache when it fills up.

An LRU policy deletes the least recently used objects in the cache once it reaches the specified size limit. LRU policies are an alternative to cache policies such as least frequently used (LFU), which remove objects from the cache based on other aspects of the usage pattern.
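As a quick usage sketch (building on the illustrative SimpleLruCache class shown earlier, with an arbitrary demo class name), accessing an entry refreshes it, so the entry that has gone longest without being touched is the one evicted when the size limit is reached:

// Usage sketch: a cache of capacity 2. Reading "a" makes it recently used,
// so "b" becomes the least recently used entry and is evicted when "c" arrives.
public class LruPolicyDemo {
    public static void main(String[] args) {
        SimpleLruCache<String, Integer> cache = new SimpleLruCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");                       // "a" is now the most recently used entry
        cache.put("c", 3);                    // over capacity: "b", the LRU entry, is evicted
        System.out.println(cache.keySet());   // prints [a, c]
    }
}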

Least Recently Used (LRU) vs. Least Frequently Used (LFU) 

LFU is a caching algorithm that removes the least frequently used items from the cache as soon as it reaches its capacity limit. This means the cache must keep track of how often each item is used. When the capacity of the cache is exceeded, an eviction algorithm goes into action and entries in the cache are selected for deletion.

This approach to cache eviction is very effective when the access patterns of cached objects do not change frequently. The LRU algorithm removes assets that have not been accessed recently, whereas the LFU algorithm removes assets that have been accessed least often.

The LFU approach may seem intuitive for memory management, but it has its drawbacks: 

  • Consider cached items that are referenced repeatedly for a short period of time and then not accessed again for a long period. Because those accesses happen quickly, the usage counter climbs significantly during the burst, so the blocks remain in the cache even though they are no longer needed.
  • New items that have just entered the cache can be purged again soon, because they start with a low usage counter, even though they may be used often at a later stage. The sketch after this list illustrates both effects.
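To make these effects concrete, here is a minimal, illustrative sketch of LFU-style bookkeeping in Java (the class and method names are hypothetical, not part of any standard library). Each access increments a counter, and eviction picks the key with the lowest count, so a briefly hot item keeps a high counter while a newly added item is the first candidate for removal:

import java.util.HashMap;
import java.util.Map;

// Illustrative LFU bookkeeping: a usage counter per key, and eviction of the
// key with the smallest counter. An item that was hot for a short burst keeps
// its high count and stays cached, while a freshly added item starts at 1 and
// is evicted first - the two drawbacks listed above.
class LfuCounters<K> {
    private final Map<K, Integer> counts = new HashMap<>();

    void recordAccess(K key) {
        counts.merge(key, 1, Integer::sum);   // increment the usage counter
    }

    K pickVictim() {                          // key with the smallest counter
        K victim = null;
        int min = Integer.MAX_VALUE;
        for (Map.Entry<K, Integer> e : counts.entrySet()) {
            if (e.getValue() < min) {
                min = e.getValue();
                victim = e.getKey();
            }
        }
        return victim;
    }
}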

Implementing LRU Cache in Java 

Here is a tutorial on how to implement an LRU cache in Java:

Step 1: Create a new class for the cache. 

This class should have fields for the maximum size of the cache, the current size of the cache, and a map for storing the cache data. For each key, the map should store the cached data together with a timestamp of when that data was last accessed.

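A minimal sketch of such a class might look like the following (the Entry helper and field names are illustrative, not prescribed by the article):

import java.util.HashMap;
import java.util.Map;

// Sketch of the cache class: a capacity limit and a map that stores, for each
// key, the cached value together with the timestamp of its last access.
public class LRUCache<K, V> {

    // Holder for a cached value and its last-access timestamp.
    private static class Entry<T> {
        T value;
        long lastAccess;

        Entry(T value, long lastAccess) {
            this.value = value;
            this.lastAccess = lastAccess;
        }
    }

    private final int maxSize;                               // maximum size of the cache
    private final Map<K, Entry<V>> store = new HashMap<>();  // cached data plus timestamps

    public LRUCache(int maxSize) {
        this.maxSize = maxSize;
    }

    public int size() {
        return store.size();                                 // current size of the cache
    }
}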

Step 2: Implement the get() method, which retrieves data from the cache. 

If the data is not in the cache, it should return null. If the data is in the cache, it should update the timestamp for the data in the map and return the data.

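A possible sketch of this method, added to the LRUCache class from Step 1, might look like this:

// get(): returns null on a miss; on a hit, refreshes the entry's
// last-access timestamp and returns the cached value.
public V get(K key) {
    Entry<V> entry = store.get(key);
    if (entry == null) {
        return null;                           // data is not in the cache
    }
    entry.lastAccess = System.nanoTime();      // mark as most recently used
    return entry.value;
}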

Step 3: Implement the put() method, which adds data to the cache. 

If the cache is full, it should remove the least recently used data from the cache before adding the new data.

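A possible sketch of this method, also added to the LRUCache class, might look like this:

// put(): if the cache is full and the key is new, evict the entry with the
// oldest last-access timestamp (the least recently used), then store the data.
public void put(K key, V value) {
    if (!store.containsKey(key) && store.size() >= maxSize) {
        K lruKey = null;
        long oldest = Long.MAX_VALUE;
        for (Map.Entry<K, Entry<V>> e : store.entrySet()) {
            if (e.getValue().lastAccess < oldest) {
                oldest = e.getValue().lastAccess;
                lruKey = e.getKey();
            }
        }
        store.remove(lruKey);                  // discard the least recently used entry
    }
    store.put(key, new Entry<>(value, System.nanoTime()));
}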

That’s it! You now have a working LRU cache in Java. You can test it by creating a new instance of the LRUCache class and calling the put() and get() methods.
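For example, a short, illustrative test of the class sketched above might look like this (the LRUCacheDemo name is arbitrary):

// Usage sketch: a cache of capacity 2. Reading "a" keeps it fresh, so "b"
// is the least recently used entry and is evicted when "c" is added.
public class LRUCacheDemo {
    public static void main(String[] args) {
        LRUCache<String, String> cache = new LRUCache<>(2);
        cache.put("a", "alpha");
        cache.put("b", "beta");
        cache.get("a");                          // "a" becomes the most recently used entry
        cache.put("c", "gamma");                 // evicts "b"
        System.out.println(cache.get("b"));      // null - "b" was evicted
        System.out.println(cache.get("a"));      // alpha
        System.out.println(cache.get("c"));      // gamma
    }
}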