Leetcode 146. LRU Cache: Multiple Solutions in Python Code

“Design an LRU cache” is a classic interview problem at FAANG companies. It tests a candidate's design skills and how well they choose data structures while programming.

Caches are an integral component of any computing system. They are small, fast stores that hold fragments of data so it can be accessed quickly, reducing computing time.

A Least Recently Used (LRU) cache has a finite size; when a new item arrives and the cache is already full, it evicts the least recently used item to make room. This rule is known as the eviction policy.

Although there are multiple ways to implement an LRU cache, two are discussed below.

One thing to keep in mind for both methods below: we add an element with a put() command and retrieve an element with a get() command.


Method 1: Using Doubly Linked List and Hash Table

An LRU cache primarily needs two things: fast lookup and fast data removal. Since we want O(1) time for both, we use the following data structures:

  1. Look-up: Hash table
  2. Removal: Doubly linked list

A hash table looks up a key with a constant number of instructions, which is why it returns in O(1) time. Another reason to use a hash table is that it stores each key alongside its assigned value.

The hash table also makes a get() command safe when we don't know whether the data is in the cache at all: the lookup immediately tells us whether the key exists.
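As a minimal, illustrative sketch (the names here are made up, not taken from the solution below), a plain Python dict already gives this O(1) membership check:

```python
# A plain dict models the hash-table side of the cache.
cache = {"a": 1, "b": 2}

# dict.get with a default mirrors the LeetCode convention of
# returning -1 when a key is missing.
print(cache.get("a", -1))  # 1: key found
print(cache.get("z", -1))  # -1: key absent
```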

Secondly, the doubly linked list is used as the actual cache: each node is linked directly to the nodes in front of and behind it.

Whenever the least recently used element needs to be evicted, we only rewire the pointers around its node; this is a constant number of operations, so eviction also takes O(1) time.
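A minimal sketch of that constant-time unlink, assuming sentinel head and tail nodes (the names here are illustrative, not the solution code below):

```python
class _Node:
    def __init__(self, key=None):
        self.key = key
        self.pre = None
        self.next = None

def unlink(node):
    # Rewiring two pointers removes the node in O(1),
    # no matter how long the list is.
    node.pre.next = node.next
    node.next.pre = node.pre

# Sentinel head/tail with one real node between them.
head, tail, mid = _Node(), _Node(), _Node(key="lru")
head.next, mid.pre = mid, head
mid.next, tail.pre = tail, mid

unlink(mid)
print(head.next is tail)  # True: the node is gone
```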

Using these two data structures, an LRU cache can be implemented.

A put() command inserts an element at the most-recently-used end of the list; if the cache's capacity is exceeded, the element at the least-recently-used end is evicted from the cache.

Similarly, a get() command looks the key up in the hash table and returns the stored value if the key exists in the cache, or -1 if it does not.

Accordingly, the whole operation of the LRU cache works like this. The put() command assigns a key to the value, adds it to the hash table, and adds it to the doubly linked list as the Most Recently Used element.

Suppose the cache's capacity is exceeded after adding. In that case, the node just before the tail sentinel, the Least Recently Used element, is unlinked from the doubly linked list and its key is deleted from the hash table.

class Node:
    def __init__(self, key=0, value=0, next=None, pre=None):
        self.key = key        # stored so eviction can delete the hash-table entry
        self.value = value
        self.next = next
        self.pre = pre
        
class LRUCache:

    def __init__(self, capacity: int):
        self.cache = {}
        self.size = 0
        self.capacity = capacity
        self.head, self.tail = Node(), Node()  # sentinel nodes

        self.head.next = self.tail
        self.tail.pre = self.head

    def get(self, key: int) -> int:
        node = self.cache.get(key, None)
        if not node:
            return -1

        self.move_to_head(node)

        return node.value

    def put(self, key: int, value: int) -> None:
        node = self.cache.get(key)

        if not node:
            newNode = Node(key, value)

            self.cache[key] = newNode
            self.add_node(newNode)

            self.size += 1

            if self.size > self.capacity:
                tail = self.pop_tail()
                del self.cache[tail.key]
                self.size -= 1
        else:
            node.value = value
            self.move_to_head(node)
        
    def add_node(self, node):
        node.next = self.head.next
        node.pre = self.head
        
        self.head.next.pre = node
        self.head.next = node
    
    def remove_node(self, node):
        pre = node.pre
        new = node.next

        pre.next = new
        new.pre = pre

    def move_to_head(self, node):
        self.remove_node(node)
        self.add_node(node)

    def pop_tail(self):
        res = self.tail.pre
        self.remove_node(res)
        return res
        


# Your LRUCache object will be instantiated and called as such:
# obj = LRUCache(capacity)
# param_1 = obj.get(key)
# obj.put(key,value)
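For comparison only, and not part of the solution above: Python's collections.OrderedDict can express the same eviction behavior compactly, since move_to_end() and popitem(last=False) both run in O(1). A sketch (the class name LRUCacheOD is invented here):

```python
from collections import OrderedDict

class LRUCacheOD:
    """Illustrative alternative, not the doubly-linked-list solution above."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key: int) -> int:
        if key not in self.data:
            return -1
        self.data.move_to_end(key)      # mark as most recently used
        return self.data[key]

    def put(self, key: int, value: int) -> None:
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the LRU entry

cache = LRUCacheOD(2)
cache.put(1, 1)
cache.put(2, 2)
print(cache.get(1))   # 1
cache.put(3, 3)       # capacity exceeded: evicts key 2
print(cache.get(2))   # -1
```

Because OrderedDict preserves insertion order, its left end naturally plays the LRU role, just like the tail of the linked list above.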

Method 2: Using Deque and Hash Table

An LRU cache always needs the same basic operations; we only need to pick data structures to match. The second method uses data structures that are easier to work with:

  1. Look-up: Hash map
  2. Removal: Deque

One caveat: while hash-map lookups are O(1), moving an existing key inside the deque uses deque.remove(), which scans the deque in O(n) time. This version trades the strict O(1) bound of Method 1 for much simpler code.

The hash map keeps a record of the keys and their assigned values. The cache's usage order is represented by the double-ended queue (deque).

A deque is used because it can pop an element from either end in O(1) time. A list could also work, but popping from the front of a list is O(n), so a deque is preferred.
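A quick illustrative sketch of that end-of-queue behavior:

```python
from collections import deque

order = deque([1, 2, 3])
order.popleft()      # O(1): evict from the LRU (left) end
order.append(4)      # O(1): add at the MRU (right) end
print(list(order))   # [2, 3, 4]

# The list equivalent, lst.pop(0), shifts every remaining
# element over, which is why it costs O(n).
```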

Whenever a get() command is run, the hash map is traversed to see if the key is present in the Deque or not.

If the key is absent, -1 is returned. If it is present, the key is moved to the right end of the deque, making it the Most Recently Used (MRU) element, and its value is returned.


When a put() pushes the cache past its capacity, the leftmost element is evicted from both the deque and the hash map.

This is the operating procedure of the LRU cache using hash maps and deques. When you want to put an element in the cache, the command assigns a key to the value and records it in the hash map.

Next, the key is placed at the MRU end of the deque. This continues until the cache's capacity is exhausted; the next time an element is added, the left-most (LRU) element is evicted from the cache.

Similarly, while getting an element from the cache, if that element's key is found in the hash map, the key is removed from its current position in the deque and appended at the right-most position, making it the most recently used element.

A note to remember while dealing with caches is that an LRU cache always has a finite capacity and there can be numerous ways the cache can be implemented.

The part we care about most is time and space complexity: given O(n) space, each operation should run in or near O(1) time. This is what makes the program efficient.


from collections import deque
class LRUCache:

    def __init__(self, capacity: int):
        self.cache = deque()
        self.dict = {}
        self.capacity = capacity
    

    def get(self, key: int) -> int:
        if key in self.dict:
            # Re-putting the key moves it to the MRU end of the deque.
            self.put(key, self.dict[key])
            return self.dict[key]
        return -1

    def put(self, key: int, value: int) -> None:
        if key in self.dict:
            self.dict[key] = value
            self.cache.remove(key)    # O(n) scan of the deque
            self.cache.append(key)    # re-append at the MRU end
        else:
            if len(self.dict) == self.capacity:
                # Evict from the left end, the least recently used key.
                keytoremove = self.cache.popleft()
                del self.dict[keytoremove]

            self.cache.append(key)
            self.dict[key] = value