Tuesday, 20 December 2011

Visualisation of my implementation of a multi-level pseudo-LRU cache

I've just finished my Java implementation of a multi-level pseudo-LRU cache.

The abstract code is just 10KB, and a simple direct implementation of the abstract classes is a kilobyte and a half.
Initially, I thought of implementing an LFU cache, but then I realised that for caches that are large enough, the pseudo-LRU algorithm would work nicely.
The four main actions on a cache are:
  • adding an item to the cache
  • getting an item from the cache
  • invalidating an item in the cache
  • trashing the whole cache
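The four actions map naturally onto a small interface. Here is a minimal sketch of what that could look like, assuming hypothetical names of my own choosing (the actual classes in the project may differ):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical interface mirroring the four cache operations above.
// Method names are illustrative, not the project's actual API.
interface SimpleCache<K, V> {
    void add(K key, V value);   // adding an item to the cache
    V get(K key);               // getting an item (null if absent)
    void invalidate(K key);     // invalidating a single item
    void trash();               // trashing the whole cache
}

// A minimal map-backed implementation with no eviction policy,
// just to show the shape of the contract.
class MapCache<K, V> implements SimpleCache<K, V> {
    private final Map<K, V> store = new HashMap<K, V>();
    public void add(K key, V value) { store.put(key, value); }
    public V get(K key)             { return store.get(key); }
    public void invalidate(K key)   { store.remove(key); }
    public void trash()             { store.clear(); }
}
```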
Added items should implement the Cacheable interface, which requires implementations to provide a `getPhysicalSize()` method returning the expected physical size of the item in bytes. This approach has its liabilities, of course, but I think it is good enough for many practical use cases.
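An implementation of that contract might look like the following. The `Cacheable` name and the `getPhysicalSize()` method come from the post; the return type and the implementing class are my own assumptions for illustration:

```java
// The Cacheable contract as described: items report their expected
// physical size in bytes so the cache can track its total footprint.
interface Cacheable {
    long getPhysicalSize();
}

// Hypothetical example: a cached text blob estimating two bytes per
// char (Java strings are UTF-16 internally). The estimate ignores
// object headers and references, which is one of the liabilities
// mentioned above.
class CachedText implements Cacheable {
    private final String text;
    CachedText(String text) { this.text = text; }
    public long getPhysicalSize() {
        return 2L * text.length();
    }
}
```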

What's special about my cache is that:
  • It can chain multiple caches, so that removed items from a front cache are not immediately lost, but moved to a back cache.
This is useful in situations where the back cache is slower but considerably larger than the front cache (like a file-based cache), yet using it is still faster than recreating the cached item.
On the other hand, one can have multiple smaller specific front caches, all backed by a single larger back cache. This would help if there's a tendency to have items of the same type at a given time, effectively expanding their local caches.
  • When there's not enough space in the cache, items need to be removed. Often, only a few items need to be removed at a time. Removing a very small number of items would mean that the next time one needs to add an item, there might again be not enough space in the cache. To remedy this inefficiency, one can choose a minimum limit for the amount of space to be freed whenever cleanup is triggered.
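The two ideas above can be sketched together: a front cache that, when full, frees at least a minimum number of bytes and demotes the evicted items to an optional back cache instead of dropping them. This is a hypothetical illustration, not the project's actual code; in particular, `LinkedHashMap`'s access order stands in here for a real pseudo-LRU recency approximation, and plain `byte[]` values stand in for `Cacheable` items:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical two-level cache sketch: evicts in batches of at least
// `minFree` bytes and demotes evicted items to a back cache.
class ChainedCache {
    private final LinkedHashMap<String, byte[]> store =
            new LinkedHashMap<String, byte[]>(16, 0.75f, true); // access-order
    private final long capacity;      // total budget in bytes
    private final long minFree;       // minimum bytes to free per cleanup
    private final ChainedCache back;  // null for the last level
    private long used = 0;

    ChainedCache(long capacity, long minFree, ChainedCache back) {
        this.capacity = capacity;
        this.minFree = minFree;
        this.back = back;
    }

    void add(String key, byte[] value) {
        long needed = used + value.length - capacity;
        if (needed > 0) {
            // Free at least minFree bytes so the very next add is
            // unlikely to trigger another round of eviction.
            long target = Math.max(minFree, needed);
            long freed = 0;
            Iterator<Map.Entry<String, byte[]>> it = store.entrySet().iterator();
            while (freed < target && it.hasNext()) {
                Map.Entry<String, byte[]> oldest = it.next();
                if (back != null) {
                    back.add(oldest.getKey(), oldest.getValue()); // demote, don't drop
                }
                freed += oldest.getValue().length;
                it.remove();
            }
            used -= freed;
        }
        store.put(key, value);
        used += value.length;
    }

    byte[] get(String key) {
        byte[] v = store.get(key);     // touches recency on a hit
        if (v == null && back != null) {
            v = back.get(key);         // fall through to the back cache
        }
        return v;
    }
}
```

With a 100-byte front cache and `minFree` of 50, adding a third 40-byte item demotes the two least recently used items to the back cache in one pass, so they can still be served without being recreated.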
The code is available on GitHub, as usual.
