Implement Memoize, Curry, and Compose
The Functional Programming Trio
Three functions show up in nearly every FAANG frontend interview: memoize, curry, and compose. Not because interviewers love abstract math, but because implementing them tests everything at once — closures, recursion, variadic arguments, cache strategies, and deep understanding of how functions work in JavaScript.
Here's the thing most people miss: these aren't just interview tricks. You use memoized selectors in Redux. React's useMemo is memoization. Middleware pipelines in Express and Redux are composition. Partial application (currying's cousin) is everywhere in event handlers. Understanding how to build them changes how you think about functions forever.
Think of these three as a toolkit for function manipulation. Memoize gives a function a memory — it remembers previous answers. Curry gives a function patience — it waits until it has all the arguments before executing. Compose gives functions a conveyor belt — output from one feeds into the next. Together, they let you build complex behavior from small, reusable pieces.
Part 1: Memoize
Memoization caches function results so that calling the function again with the same arguments returns the cached value instead of recomputing. It trades memory for speed.
The Basic Version
Start with the simplest case — a function that takes a single primitive argument:
function memoize(fn) {
const cache = new Map();
return function (...args) {
const key = args[0];
if (cache.has(key)) {
return cache.get(key);
}
const result = fn.apply(this, args);
cache.set(key, result);
return result;
};
}
This works for fibonacci(10) or factorial(5), but it falls apart the moment you pass multiple arguments or objects. The key is just args[0] — completely wrong for multi-argument functions.
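To make that failure concrete, here is a minimal sketch of the basic version in action (`square` and `add` are hypothetical example functions, not from the original):

```javascript
// Basic single-argument memoize from above.
function memoize(fn) {
  const cache = new Map();
  return function (...args) {
    const key = args[0];
    if (cache.has(key)) return cache.get(key);
    const result = fn.apply(this, args);
    cache.set(key, result);
    return result;
  };
}

let calls = 0;
const square = memoize((n) => {
  calls++;
  return n * n;
});

square(4); // computes 16
square(4); // returns cached 16; the underlying function is not called again
// calls === 1

// Failure mode: multi-argument functions collide on args[0].
const add = memoize((a, b) => a + b);
add(1, 2);  // 3, cached under key 1
add(1, 99); // still 3 (!) because key 1 hits the stale cache entry
```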
Multi-Argument Key Resolution
The first real challenge: how do you turn multiple arguments into a cache key?
function memoize(fn, resolver) {
const cache = new Map();
return function (...args) {
const key = resolver
? resolver.apply(this, args)
: args.length === 1
? args[0]
: JSON.stringify(args);
if (cache.has(key)) {
return cache.get(key);
}
const result = fn.apply(this, args);
cache.set(key, result);
return result;
};
}
The resolver parameter is how Lodash does it — let the caller decide how to compute the cache key. Without a resolver, we fall back to JSON.stringify, which works for primitives and plain objects but has real limitations.
JSON.stringify is a surprisingly bad default key resolver. It drops undefined values and functions from objects (and serializes them as null inside arrays), turns NaN and Infinity into null, serializes keys in insertion order (so { a: 1, b: 2 } and { b: 2, a: 1 } produce different keys despite holding the same data), and costs O(n) in the size of the arguments on every call. For hot paths, a custom resolver that returns a string or number is significantly faster.
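A few of these collisions, sketched directly with JSON.stringify:

```javascript
// Three distinct inputs that stringify to the same cache key:
JSON.stringify([undefined]); // '[null]'
JSON.stringify([NaN]);       // '[null]'
JSON.stringify([null]);      // '[null]'

// The same logical object with different insertion order
// produces different keys, so equal data causes a cache miss:
JSON.stringify({ a: 1, b: 2 }); // '{"a":1,"b":2}'
JSON.stringify({ b: 2, a: 1 }); // '{"b":2,"a":1}'
```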
WeakMap for Object Arguments
When memoizing functions that take objects, a regular Map creates a memory leak: it holds strong references to its keys, so even after every other reference to the original object is gone, the Map keeps it alive and it can never be garbage collected. WeakMap fixes this:
function memoize(fn) {
const primitiveCache = new Map();
const objectCache = new WeakMap();
return function (...args) {
if (args.length === 1) {
const arg = args[0];
const isObj = arg !== null && (typeof arg === "object" || typeof arg === "function");
const cache = isObj ? objectCache : primitiveCache;
if (cache.has(arg)) {
return cache.get(arg);
}
const result = fn.call(this, arg);
cache.set(arg, result);
return result;
}
const key = JSON.stringify(args);
if (primitiveCache.has(key)) {
return primitiveCache.get(key);
}
const result = fn.apply(this, args);
primitiveCache.set(key, result);
return result;
};
}
Why WeakMap cannot replace Map entirely
WeakMap only accepts objects as keys — no primitives. And you cannot iterate over a WeakMap or check its size, because its entries are subject to garbage collection at any time. This means WeakMap is perfect for "cache this object-related computation without preventing GC" but useless for "show me all cached results." For a memoize function that handles both objects and primitives, you need both.
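A sketch using a condensed single-argument version of the dual-cache memoize above, illustrating that WeakMap keys compare by object identity (`getArea` is a hypothetical example function):

```javascript
// Condensed dual-cache memoize: objects go to WeakMap, primitives to Map.
function memoize(fn) {
  const primitiveCache = new Map();
  const objectCache = new WeakMap();
  return function (arg) {
    const isObj = arg !== null && (typeof arg === "object" || typeof arg === "function");
    const cache = isObj ? objectCache : primitiveCache;
    if (cache.has(arg)) return cache.get(arg);
    const result = fn.call(this, arg);
    cache.set(arg, result);
    return result;
  };
}

let computations = 0;
const getArea = memoize((rect) => {
  computations++;
  return rect.w * rect.h;
});

const box = { w: 3, h: 4 };
getArea(box);            // 12, computed
getArea(box);            // 12, cached: same object identity
getArea({ w: 3, h: 4 }); // 12, computed again: a new object is a new key
// computations === 2, and the WeakMap never prevents `box` from being GC'd
```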
LRU Cache Variant
Unbounded caches are dangerous in production. A function called with 100,000 unique arguments eats 100,000 cache entries. An LRU (Least Recently Used) cache caps the size and evicts the entry that has gone longest without being accessed:
function memoizeWithLRU(fn, maxSize = 100) {
const cache = new Map();
return function (...args) {
const key = JSON.stringify(args);
if (cache.has(key)) {
const value = cache.get(key);
cache.delete(key);
cache.set(key, value);
return value;
}
const result = fn.apply(this, args);
if (cache.size >= maxSize) {
const firstKey = cache.keys().next().value;
cache.delete(firstKey);
}
cache.set(key, result);
return result;
};
}
The trick here is that Map maintains insertion order. When we access an existing entry, we delete and re-insert it to move it to the end. The "oldest" entry is always cache.keys().next().value — the first one inserted that hasn't been accessed since. This gives us O(1) get, set, and eviction without a doubly-linked list.
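To see the eviction order in action, here is the same memoizeWithLRU with a capacity of 2 and a hypothetical call counter tracking real computations:

```javascript
// LRU memoize from above.
function memoizeWithLRU(fn, maxSize = 100) {
  const cache = new Map();
  return function (...args) {
    const key = JSON.stringify(args);
    if (cache.has(key)) {
      const value = cache.get(key);
      cache.delete(key);
      cache.set(key, value); // move to most-recently-used position
      return value;
    }
    const result = fn.apply(this, args);
    if (cache.size >= maxSize) {
      cache.delete(cache.keys().next().value); // evict least recently used
    }
    cache.set(key, result);
    return result;
  };
}

let calls = 0;
const double = memoizeWithLRU((n) => { calls++; return n * 2; }, 2);

double(1); // computed           cache order: [1]
double(2); // computed           cache order: [1, 2]
double(1); // cache hit          cache order: [2, 1]
double(3); // computed, cache full: evicts 2 (least recently used)
double(2); // computed again, because 2 was evicted
// calls === 4
```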
Preserving this Context
A subtle but critical detail that interviewers love to test: does your memoize preserve this?
const obj = {
multiplier: 3,
calculate: memoize(function (x) {
return x * this.multiplier;
}),
};
obj.calculate(5); // Should return 15, not NaN
This works in our implementation because we use fn.apply(this, args): the this from the memoized wrapper is forwarded to the original function. If we used fn(...args) instead, this would be undefined in strict mode or the global object otherwise.
1. Always use fn.apply(this, args) or fn.call(this, ...args) to preserve the calling context
2. Use a custom resolver parameter for flexible cache key strategies
3. Use WeakMap for object keys to prevent memory leaks from held references
4. Cap cache size with LRU eviction in production: unbounded caches are time bombs
5. JSON.stringify is a convenient but slow default; custom resolvers are faster for hot paths
Part 2: Curry
Currying transforms a function that takes multiple arguments into a chain of functions that each take a single argument. f(a, b, c) becomes f(a)(b)(c).
The Difference Between Currying and Partial Application
People confuse these constantly. Currying always produces unary (single-argument) functions in a chain. Partial application fixes some arguments and returns a function that takes the rest — but that function might still take multiple arguments.
// Currying: f(a, b, c) → f(a)(b)(c)
const curriedAdd = curry(add);
curriedAdd(1)(2)(3); // 6
// Partial application: f(a, b, c) → f'(c) where a and b are fixed
const addTo3 = add.bind(null, 1, 2);
addTo3(3); // 6
Basic Curry with Arity Detection
The key insight: a curried function collects arguments until it has enough to call the original function. "Enough" means the original function's length property — the number of declared parameters.
function curry(fn) {
return function curried(...args) {
if (args.length >= fn.length) {
return fn.apply(this, args);
}
return function (...nextArgs) {
return curried.apply(this, [...args, ...nextArgs]);
};
};
}
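Tracing curriedAdd(1)(2)(3) through the implementation above, step by step:

```javascript
// curry from above.
function curry(fn) {
  return function curried(...args) {
    if (args.length >= fn.length) {
      return fn.apply(this, args);
    }
    return function (...nextArgs) {
      return curried.apply(this, [...args, ...nextArgs]);
    };
  };
}

const add = (a, b, c) => a + b + c; // add.length === 3
const curriedAdd = curry(add);

// curriedAdd(1)  → args = [1],       1 < 3  → returns a collector closing over [1]
// ...(2)         → curried(1, 2),    2 < 3  → returns a collector closing over [1, 2]
// ...(3)         → curried(1, 2, 3), 3 >= 3 → calls add(1, 2, 3)
const result = curriedAdd(1)(2)(3); // 6
```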
This implementation also supports passing multiple arguments at once:
const curriedAdd = curry((a, b, c) => a + b + c);
curriedAdd(1)(2)(3); // 6
curriedAdd(1, 2)(3); // 6
curriedAdd(1)(2, 3); // 6
curriedAdd(1, 2, 3); // 6 — all at once also works
Placeholder Support
Advanced currying lets you skip arguments using a placeholder:
const _ = Symbol("placeholder");
function curryWithPlaceholders(fn) {
return function curried(...args) {
const hasPlaceholder = args.some((arg) => arg === _);
const realArgCount = args.filter((arg) => arg !== _).length;
if (!hasPlaceholder && realArgCount >= fn.length) {
return fn.apply(this, args);
}
return function (...nextArgs) {
let nextIndex = 0;
const merged = args.map((arg) =>
arg === _ && nextIndex < nextArgs.length
? nextArgs[nextIndex++]
: arg,
);
const remaining = nextArgs.slice(nextIndex);
return curried.apply(this, [...merged, ...remaining]);
};
};
}
Usage:
const curriedAdd = curryWithPlaceholders((a, b, c) => a + b + c);
curriedAdd(_, 2, _)(1)(3); // 6 — fills first placeholder with 1, second with 3
curriedAdd(_, _, 3)(1)(2); // 6 — fills placeholders left-to-right
Why Symbol for placeholders?
We use Symbol('placeholder') instead of undefined or null because those are legitimate argument values. If we used undefined as our placeholder, calling curriedAdd(undefined, 2, 3) would be ambiguous — does the caller mean "skip the first argument" or "pass undefined as the first argument"? A Symbol is guaranteed unique, so there is no ambiguity.
The fn.length Gotcha
fn.length counts only parameters before the first one with a default value or rest parameter:
function a(x, y, z) {} // a.length = 3
function b(x, y = 1, z) {} // b.length = 1 (stops at y)
function c(...args) {} // c.length = 0
function d(x, ...rest) {} // d.length = 1
This means currying a function with default parameters or rest parameters won't work correctly without an explicit arity override:
function curry(fn, arity = fn.length) {
return function curried(...args) {
if (args.length >= arity) {
return fn.apply(this, args);
}
return function (...nextArgs) {
return curried.apply(this, [...args, ...nextArgs]);
};
};
}
const join = (...parts) => parts.join("-");
const curriedJoin = curry(join, 3);
curriedJoin("a")("b")("c"); // "a-b-c"
Part 3: Compose and Pipe
Composition is about building pipelines. Take small, single-purpose functions and chain them together so the output of one becomes the input of the next.
Compose (Right-to-Left)
Mathematical function composition reads right-to-left: compose(f, g, h)(x) means f(g(h(x))).
function compose(...fns) {
if (fns.length === 0) return (x) => x;
if (fns.length === 1) return fns[0];
return fns.reduce(
(composed, fn) =>
(...args) =>
composed(fn(...args)),
);
}
A left-to-right reduce building a right-to-left pipeline looks wrong at first. It works because each reduce step wraps the accumulated function around the next one: with compose(f, g, h), the accumulator starts as f, becomes (...args) => f(g(...args)), and ends as (...args) => f(g(h(...args))). The last function folded in, h, sits innermost, so it runs first.
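To make the fold concrete, here is the same compose with the trace written as comments (double, inc, and square are hypothetical step functions):

```javascript
// compose from above.
function compose(...fns) {
  if (fns.length === 0) return (x) => x;
  if (fns.length === 1) return fns[0];
  return fns.reduce(
    (composed, fn) =>
      (...args) =>
        composed(fn(...args)),
  );
}

const double = (x) => x * 2;
const inc = (x) => x + 1;
const square = (x) => x * x;

// compose(double, inc, square):
//   reduce step 1: composed = double, fn = inc → (...a) => double(inc(...a))
//   reduce step 2: fn = square                 → (...a) => double(inc(square(...a)))
// Calling with 3: square(3) = 9 → inc(9) = 10 → double(10) = 20
compose(double, inc, square)(3); // 20
```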
Pipe (Left-to-Right)
Pipe is compose in the order humans naturally read — left-to-right. pipe(f, g, h)(x) means h(g(f(x))).
function pipe(...fns) {
if (fns.length === 0) return (x) => x;
if (fns.length === 1) return fns[0];
return fns.reduceRight(
(composed, fn) =>
(...args) =>
composed(fn(...args)),
);
}
Or, more commonly:
function pipe(...fns) {
return function (input) {
return fns.reduce((acc, fn) => fn(acc), input);
};
}
The second version is simpler and more readable. It's slightly less flexible (only takes a single initial argument instead of variadic), but for most real-world use cases that's fine.
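A minimal sketch of the simpler pipe in use (trim, lower, and slugify are hypothetical steps, not from the original):

```javascript
// Simple single-input pipe from above.
function pipe(...fns) {
  return function (input) {
    return fns.reduce((acc, fn) => fn(acc), input);
  };
}

const trim = (s) => s.trim();
const lower = (s) => s.toLowerCase();
const slugify = (s) => s.replace(/\s+/g, "-");

// Steps run left-to-right, in the order listed:
const toSlug = pipe(trim, lower, slugify);
toSlug("  Hello World  "); // "hello-world"
```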
Compose vs Pipe — When to Use Which
Both do the same thing in opposite directions. The choice is about readability:
// Compose: reads like math — f(g(h(x)))
// Good when you think "apply f to the result of g of h"
const processUser = compose(
formatOutput,
validateAge,
parseInput,
);
// Pipe: reads like steps — first parse, then validate, then format
// Good when you think "first do X, then Y, then Z"
const processUser = pipe(
parseInput,
validateAge,
formatOutput,
);
Most JavaScript developers prefer pipe because it reads top-to-bottom, matching how we read code. Compose is favored in communities with strong mathematical FP traditions (Haskell, Ramda users).
Error-Resilient Compose
In production, a single step throwing shouldn't take down the entire pipeline with an unhandled exception. Here's a variant that wraps each step:
function composeSafe(...fns) {
return function (input) {
return fns.reduceRight((acc, fn) => {
if (acc instanceof Error) return acc;
try {
return fn(acc);
} catch (err) {
return err instanceof Error ? err : new Error(String(err));
}
}, input);
};
}
This propagates the first error through the remaining pipeline without executing further steps. The caller gets back an Error instance they can inspect.
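A sketch of composeSafe in action, assuming a hypothetical parse step that throws on bad input:

```javascript
// composeSafe from above.
function composeSafe(...fns) {
  return function (input) {
    return fns.reduceRight((acc, fn) => {
      if (acc instanceof Error) return acc;
      try {
        return fn(acc);
      } catch (err) {
        return err instanceof Error ? err : new Error(String(err));
      }
    }, input);
  };
}

const parse = (s) => {
  const n = Number(s);
  if (Number.isNaN(n)) throw new Error(`not a number: ${s}`);
  return n;
};
const double = (n) => n * 2;

// Right-to-left like compose: parse runs first, then double.
const safeDouble = composeSafe(double, parse);
safeDouble("21");  // 42
safeDouble("abc"); // an Error instance; double is skipped entirely
```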
Real-World Usage in React and Frontend
Memoize: Expensive Computations
const expensiveFilter = memoize((items, query) => {
return items.filter((item) =>
item.name.toLowerCase().includes(query.toLowerCase()),
);
}, (items, query) => `${items.length}:${query}`); // caution: a length-based key collides if items change but length doesn't
function SearchResults({ items, query }) {
const filtered = expensiveFilter(items, query);
return <List items={filtered} />;
}
React's useMemo is conceptually similar but scoped to a component instance and cleared on unmount. A standalone memoize persists across renders and components — useful for selectors, formatters, and parsers.
Curry: Event Handlers and Config
const handleFieldChange = curry((setState, field, event) => {
setState((prev) => ({ ...prev, [field]: event.target.value }));
});
function Form() {
const [form, setForm] = useState({ name: "", email: "" });
return (
<>
<input onChange={handleFieldChange(setForm, "name")} />
<input onChange={handleFieldChange(setForm, "email")} />
</>
);
}
Instead of writing separate handler functions for each field or using inline arrow functions that create new references every render, currying gives you partial application that is clean and reusable.
Compose: Data Transformation Pipelines
const processApiResponse = pipe(
(res) => res.data,
(data) => data.filter((item) => item.active),
(items) => items.map((item) => ({ ...item, label: item.name.toUpperCase() })),
(items) => items.sort((a, b) => a.label.localeCompare(b.label)),
);
async function fetchUsers() {
const response = await fetch("/api/users");
const json = await response.json();
return processApiResponse(json);
}
Each step is independently testable. You can add, remove, or reorder steps without untangling nested function calls.
Implementation Checklist for Interviews
When an interviewer asks you to implement any of these, hit these points:
| What developers do | What they should do |
|---|---|
| Using args.toString() or args.join() as the memoize cache key. toString on arrays drops type info: [1,2,3].toString() and ['1,2','3'].toString() both produce '1,2,3', so different inputs share a key and return the wrong cached result | Use JSON.stringify for primitives or a custom resolver for complex args |
| Forgetting to forward this context in memoize and curry. Without forwarding this, memoized/curried methods on objects lose their context and this becomes undefined in strict mode | Always use fn.apply(this, args) or fn.call(this, ...args) |
| Relying on fn.length for functions with default parameters or rest params. fn.length only counts parameters before the first default value or rest parameter: (a, b = 1, c) has length 1, not 3 | Accept an explicit arity parameter as a fallback |
| Using compose when pipe would be more readable for a data transformation. Pipe reads left-to-right, matching the order of execution; compose reads right-to-left, which can be confusing for multi-step data processing | Use pipe for sequential data transformations, compose for mathematical-style composition |
| Creating unbounded memoize caches in long-running applications. A memoized function called with unique arguments in a loop accumulates cache entries indefinitely, eventually causing out-of-memory crashes | Use LRU eviction or WeakMap-based caching to bound memory usage |
1. Memoize: always forward this, support custom key resolvers, and bound the cache in production
2. Curry: check args.length >= arity (not fn.length directly), support multi-arg partial application
3. Compose: handle 0-function and 1-function edge cases, decide left-to-right (pipe) vs right-to-left (compose)
4. All three rely on closures; understand what is captured and when it is freed
5. In interviews: start basic, then layer in edge cases, then discuss production concerns