Implement LRU Cache

Advanced · 18 min read

Why LRU Cache Shows Up in Every FAANG Interview

The LRU Cache is one of the most commonly asked data structure questions at Google, Meta, Amazon, and Microsoft. Not because caches are exotic, but because it tests whether you can combine two data structures to achieve O(1) time complexity for both reads and writes. That's the real challenge.

LRU stands for Least Recently Used. When the cache is full and a new entry needs space, you evict the entry that hasn't been accessed for the longest time. Simple idea, tricky implementation.

Mental Model

Imagine a bookshelf that only fits 5 books. Every time you read a book, you place it on the right end. When you buy a new book and the shelf is full, you remove the book on the far left — that's the one you haven't touched in the longest time. The shelf is your cache, "reading" is accessing, and "buying a new book" is inserting.

The Interface

Every LRU cache needs exactly two operations:

  • get(key) — Return the value if the key exists, otherwise return -1. This also marks the key as "recently used."
  • put(key, value) — Insert or update the key-value pair. If the cache is at capacity, evict the least recently used entry first.

Both operations must be O(1) time. That's the constraint that makes this interesting.

const cache = new LRUCache(3); // capacity = 3

cache.put(1, "a");
cache.put(2, "b");
cache.put(3, "c");
// Cache: 1→a, 2→b, 3→c (1 is least recent)

cache.get(1);       // "a" — now 1 is most recent
// Cache: 2→b, 3→c, 1→a

cache.put(4, "d");  // Evicts key 2 (least recently used)
// Cache: 3→c, 1→a, 4→d

cache.get(2);       // -1 (evicted)
Quiz
After these operations on a capacity-3 LRU cache: put(1,'x'), put(2,'y'), put(3,'z'), get(2), put(4,'w') — which key gets evicted?

Approach 1: The Classic — Doubly Linked List + HashMap

This is the textbook approach and what most interviewers expect. Two data structures working together:

  • A HashMap (plain object or Map) gives O(1) lookup by key
  • A doubly linked list maintains the access order, with O(1) insertion and removal

The linked list has two sentinel nodes — head and tail — that act as boundaries. The least recently used node sits right after head, and the most recently used sits right before tail.

class Node {
  constructor(key, value) {
    this.key = key;
    this.value = value;
    this.prev = null;
    this.next = null;
  }
}

Why store the key on the node? When evicting, you need to remove the node from the linked list and delete its entry from the HashMap. The node needs to know its own key so you can do that deletion.

Building the Cache

class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map();

    this.head = new Node(0, 0);
    this.tail = new Node(0, 0);
    this.head.next = this.tail;
    this.tail.prev = this.head;
  }

  _remove(node) {
    node.prev.next = node.next;
    node.next.prev = node.prev;
  }

  _addToEnd(node) {
    node.prev = this.tail.prev;
    node.next = this.tail;
    this.tail.prev.next = node;
    this.tail.prev = node;
  }

  get(key) {
    if (!this.map.has(key)) return -1;

    const node = this.map.get(key);
    this._remove(node);
    this._addToEnd(node);
    return node.value;
  }

  put(key, value) {
    if (this.map.has(key)) {
      this._remove(this.map.get(key));
    }

    const node = new Node(key, value);
    this._addToEnd(node);
    this.map.set(key, node);

    if (this.map.size > this.capacity) {
      const lru = this.head.next;
      this._remove(lru);
      this.map.delete(lru.key);
    }
  }
}

Why Sentinel Nodes?

Without sentinel nodes (head and tail), every _remove and _addToEnd operation needs null checks: "Is this the first node? Is this the last node?" Sentinels eliminate all edge cases. The real data always lives between the sentinels, so you never have to worry about updating the list's head or tail pointers.
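To make the contrast concrete, here is a sketch of what removal looks like with and without sentinels. The `removeNoSentinels` version and its `list` shape are hypothetical — they exist only to show the boundary checks that sentinels make unnecessary:

```javascript
// Hypothetical removal WITHOUT sentinels — every boundary needs a check:
function removeNoSentinels(list, node) {
  if (node.prev) node.prev.next = node.next;
  else list.head = node.next;        // node was the first real node
  if (node.next) node.next.prev = node.prev;
  else list.tail = node.prev;        // node was the last real node
}

// WITH sentinels, node.prev and node.next are guaranteed to exist:
function removeWithSentinels(node) {
  node.prev.next = node.next;
  node.next.prev = node.prev;
}

// Demo: head <-> [1] <-> tail, then remove the node
const head = { key: null }, tail = { key: null }, a = { key: 1 };
head.next = a; a.prev = head; a.next = tail; tail.prev = a;
removeWithSentinels(a);
console.log(head.next === tail); // true — sentinels still bound the empty list
```

Two unconditional pointer assignments instead of four branches — that is the entire payoff of the sentinel pattern.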

Execution Trace
put(1,'a')
head <-> [1:a] <-> tail | Map: {1: node}
First entry — added before tail
put(2,'b')
head <-> [1:a] <-> [2:b] <-> tail | Map: {1, 2}
New entries always go to the end (most recent)
put(3,'c')
head <-> [1:a] <-> [2:b] <-> [3:c] <-> tail | Map: {1, 2, 3}
Cache is now at capacity (3)
get(1)
head <-> [2:b] <-> [3:c] <-> [1:a] <-> tail | Map: {1, 2, 3}
Node 1 removed from position, re-added at end
put(4,'d')
head <-> [3:c] <-> [1:a] <-> [4:d] <-> tail | Map: {1, 3, 4}
Over capacity — evict head.next (key 2), add key 4
get(2)
Returns -1 | Cache unchanged
Key 2 was evicted, no longer in Map
Quiz
Why does each node in the doubly linked list store its own key?

Approach 2: The Modern Way — JavaScript Map

Here is something most candidates miss: JavaScript's Map already maintains insertion order. When you iterate a Map, entries come out in the order they were inserted. This means Map can serve as both the lookup table and the ordering mechanism.

The trick: when you access an existing key, delete it and re-insert it. This moves it to the end of the insertion order. The first key in the Map is always the least recently used.
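You can watch this behavior with a bare Map before wiring it into a class — a quick sketch of the delete-and-re-insert move:

```javascript
const m = new Map();
m.set('a', 1);
m.set('b', 2);
m.set('c', 3);

// "Access" key 'a': delete it and re-insert it with the same value
const v = m.get('a');
m.delete('a');
m.set('a', v);

console.log([...m.keys()]);         // ['b', 'c', 'a'] — 'a' moved to the end
console.log(m.keys().next().value); // 'b' — the least recently used key
```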

class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map();
  }

  get(key) {
    if (!this.map.has(key)) return -1;

    const value = this.map.get(key);
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  put(key, value) {
    if (this.map.has(key)) {
      this.map.delete(key);
    }

    this.map.set(key, value);

    if (this.map.size > this.capacity) {
      const lruKey = this.map.keys().next().value;
      this.map.delete(lruKey);
    }
  }
}

That's it. The entire LRU cache in about 20 lines.

Is Map delete-and-re-insert actually O(1)?

In V8, Map is implemented as a hash table with an ordered list threaded through the entries (the spec requires insertion-order iteration). Both delete and set are amortized O(1); deleted slots linger as tombstones until an occasional rehash compacts them. The keys().next() call to get the first key is amortized O(1) as well — the iterator starts at the head of the internal order and skips any tombstones. So this approach genuinely achieves amortized O(1) for both get and put, because ordered iteration is a spec requirement for Map and Set that engines are obliged to implement efficiently.
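A quick sanity check that the ordered semantics survive heavy churn on the delete-and-re-insert path — after re-inserting the first 500 keys, the oldest surviving insertion moves accordingly:

```javascript
// Stress the delete/re-insert path and confirm insertion order holds up
const m = new Map();
for (let i = 0; i < 1000; i++) m.set(i, i);

// Churn: delete and immediately re-insert the first half
for (let i = 0; i < 500; i++) {
  m.delete(i);
  m.set(i, i); // re-inserted keys move to the end of the order
}

console.log(m.keys().next().value); // 500 — the oldest key not re-inserted
console.log(m.size);                // 1000 — nothing lost
```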

Execution Trace
put(1,'a')
Map: {1:'a'} | Size: 1
Simple insertion
put(2,'b')
Map: {1:'a', 2:'b'} | Size: 2
Insertion order: 1, then 2
put(3,'c')
Map: {1:'a', 2:'b', 3:'c'} | Size: 3
At capacity
get(1)
Map: {2:'b', 3:'c', 1:'a'} | Size: 3
Delete key 1, re-insert — now it is last (most recent)
put(4,'d')
Map: {3:'c', 1:'a', 4:'d'} | Size: 3
Insert key 4, over capacity — map.keys().next().value gives key 2, evicted
get(2)
Returns -1 | Map unchanged
Key 2 was evicted

Comparing the Two Approaches

| Aspect | Doubly Linked List + HashMap | Map-based |
|---|---|---|
| Time complexity | O(1) get/put | O(1) amortized get/put |
| Lines of code | ~50 | ~20 |
| Memory overhead | Extra prev/next pointers per node | Minimal (Map handles it internally) |
| Interview preference | Most interviewers expect this | Shows deep JS knowledge |
| Production readiness | Explicit control over internals | Relies on V8 Map implementation details |
Tip

In a real interview, implement the doubly linked list version first — it shows you understand the underlying data structures. Then mention the Map-based approach as a "did you know" bonus. Interviewers love candidates who know both approaches.

Quiz
What built-in behavior of JavaScript Map makes the Map-based LRU cache possible?

Capacity Management and Edge Cases

A production-ready LRU cache needs to handle several edge cases that interviewers love to probe:

Updating an Existing Key

When put is called with a key that already exists, you update the value and move it to the most recent position. Both implementations handle this by removing first, then re-inserting.
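A sketch of that remove-then-re-insert move with a raw Map — the same step both implementations perform when updating an existing key:

```javascript
const cache = new Map([[1, 'a'], [2, 'b']]);

// "put(1, 'A')" on an existing key: delete first, then re-insert
cache.delete(1);
cache.set(1, 'A');

console.log(cache.get(1));      // 'A' — value updated
console.log([...cache.keys()]); // [2, 1] — key 1 is now the most recent
```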

Capacity of 1

A cache with capacity 1 should work correctly. Every put evicts the previous entry. This is actually a great test case to verify your sentinel node logic is correct.

Capacity of 0

Depending on the problem constraints, you might need to handle this. If capacity is 0, every get returns -1 and put should be a no-op. Conveniently, both implementations above already behave this way: because they insert first and then evict, a put into a zero-capacity cache immediately evicts the entry it just added.
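Here is a quick check of the capacity-0 case using the Map-based implementation from Approach 2 (repeated here so the snippet runs on its own):

```javascript
// Map-based LRU from Approach 2, restated so this snippet is self-contained
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return -1;
    const value = this.map.get(key);
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }
  put(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      this.map.delete(this.map.keys().next().value);
    }
  }
}

const zero = new LRUCache(0);
zero.put(1, 'a');         // inserted, then immediately evicted (size 1 > 0)
console.log(zero.get(1)); // -1 — put was effectively a no-op
```

The insert-then-evict ordering is what makes this fall out for free; an evict-then-insert version would need an explicit capacity check.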

// Test your implementation with edge cases
const cache = new LRUCache(1);
cache.put(1, "a");
cache.put(2, "b"); // Evicts key 1
cache.get(1);      // -1
cache.get(2);      // "b"
cache.put(3, "c"); // Evicts key 2
cache.get(2);      // -1
cache.get(3);      // "c"

Real-World Use Cases

LRU caches are everywhere in production systems. Here are patterns you will actually encounter:

API Response Caching

const apiCache = new LRUCache(100);

async function fetchUser(id) {
  const cached = apiCache.get(id);
  if (cached !== -1) return cached;

  const response = await fetch(`/api/users/${id}`);
  const user = await response.json();
  apiCache.put(id, user);
  return user;
}

Memoization with Bounded Memory

Standard memoization grows unboundedly. An LRU cache keeps memory under control:

function memoizeWithLRU(fn, capacity = 50) {
  const cache = new LRUCache(capacity);

  return function (...args) {
    const key = JSON.stringify(args);
    const cached = cache.get(key);
    if (cached !== -1) return cached;

    const result = fn.apply(this, args);
    cache.put(key, result);
    return result;
  };
}

Search Suggestion Caching

const suggestionCache = new LRUCache(20);

async function getSuggestions(query) {
  const cached = suggestionCache.get(query);
  if (cached !== -1) return cached;

  const results = await fetchSuggestions(query);
  suggestionCache.put(query, results);
  return results;
}

// "jav" -> cached
// "java" -> cached
// "javas" -> cached
// User clears and types "pyth" -> new fetch
// If cache is full, oldest query results get evicted

Browser Caches

The browser itself uses LRU-like strategies for its HTTP cache, DNS cache, and back-forward cache, and Chrome applies similar eviction heuristics to its compiled-code caches. When you clear "browsing data," you are essentially resetting these caches.

Quiz
Why is an LRU cache better than unlimited memoization for a function called with many unique arguments?

The Interview Gameplan

When you get this question in an interview, here is the playbook:

  1. Clarify the interface — Ask about return types (null vs -1 for misses), key/value types, thread safety requirements
  2. State the constraint — "Both get and put need to be O(1), so I will combine a HashMap for O(1) lookup with a doubly linked list for O(1) ordering"
  3. Draw the sentinel pattern — Sketch head <-> data nodes <-> tail and explain why sentinels simplify the code
  4. Implement Node class first — Show the building block
  5. Implement helper methods — write _remove and _addToEnd before get and put
  6. Walk through an example — Trace through 4-5 operations, showing the list state at each step
  7. Discuss the Map alternative — Bonus points for knowing JavaScript-specific optimizations
| What developers do | Why it fails | What they should do |
|---|---|---|
| Forgetting to update the ordering on get (treating get as read-only) | LRU tracks access recency, not just insertion time. A get counts as an access. | get must move the accessed node to the most recent position |
| Not storing the key on the linked list node | Without the key on the node, you cannot remove the HashMap entry in O(1) during eviction. | Each node needs both key and value so eviction can delete from the HashMap |
| Using an array and shifting elements on access | Array.shift() and Array.splice() are O(n) operations. A linked list gives true O(1) node removal with pointer manipulation. | Use a doubly linked list for O(1) removal and insertion |
| Checking capacity before inserting instead of after | If you evict before inserting an update to an existing key, you might evict a valid entry unnecessarily. | Insert/update first, then check if size exceeds capacity and evict |
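To see why the array approach is a trap, here is a hypothetical SlowLRU that is behaviorally correct but pays O(n) on every operation — the `findIndex` scan, the `splice`, and the `shift` each walk the array:

```javascript
// The array anti-pattern: easy to write, O(n) on every access
class SlowLRU {
  constructor(capacity) {
    this.capacity = capacity;
    this.entries = []; // oldest at index 0, most recent at the end
  }
  get(key) {
    const i = this.entries.findIndex(e => e.key === key); // O(n) scan
    if (i === -1) return -1;
    const [entry] = this.entries.splice(i, 1);            // O(n) removal
    this.entries.push(entry);                             // mark most recent
    return entry.value;
  }
  put(key, value) {
    const i = this.entries.findIndex(e => e.key === key); // O(n) scan
    if (i !== -1) this.entries.splice(i, 1);              // O(n) removal
    this.entries.push({ key, value });
    if (this.entries.length > this.capacity) this.entries.shift(); // O(n)
  }
}
```

The outputs match the O(1) implementations — only the per-operation cost differs, which is exactly why this mistake survives small test cases and dies under load.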
Key Rules
  1. LRU evicts the entry that has not been accessed for the longest time — both get and put count as access
  2. O(1) get and put requires combining a HashMap (fast lookup) with a doubly linked list (fast ordering)
  3. Sentinel nodes (dummy head and tail) eliminate all null-check edge cases in the linked list
  4. Each linked list node must store its key so the eviction step can delete the HashMap entry
  5. JavaScript Map maintains insertion order — delete and re-insert moves a key to the end, enabling a 20-line LRU cache
Quiz
What happens if you call put(key, newValue) with a key that already exists in the LRU cache?