Prefix Sum Cache & Fenwick Tree for Offset Calculation
The Offset Problem
There is a data structure problem hiding inside every dynamic-height virtual list, and most developers never realize it until scrolling starts stuttering. You need two operations, and you need them fast:
- Query: "What is the scroll offset of the item at index i?" (sum of heights from 0 to i-1)
- Update: "Item i was measured. Its height changed from an estimated 50px to an actual 73px."
These operations happen on every scroll event (query) and every item measurement (update). Their performance directly determines whether scrolling feels smooth or choppy.
Imagine a stack of books of different thicknesses. "What is the height of the stack up to book 500?" requires measuring all books below it. If book 300's thickness changes, you need to recalculate every sum that includes book 300. The data structure question is: how do you organize the bookshelf so both queries and updates are fast?
Approach 1: Naive Summation — O(n) Query, O(1) Update
The obvious approach: just add up all the heights every time you need an offset.
```typescript
class NaiveOffsets {
  private heights: number[];

  constructor(count: number, estimatedHeight: number) {
    this.heights = new Array(count).fill(estimatedHeight);
  }

  update(index: number, height: number): void {
    this.heights[index] = height; // O(1)
  }

  // Sum heights from 0 to index-1
  query(index: number): number {
    let sum = 0;
    for (let i = 0; i < index; i++) {
      sum += this.heights[i]; // O(n)
    }
    return sum;
  }
}
```
For 10,000 items, query(9999) sums 9,999 values. This runs on every scroll event during binary search for the visible range. With binary search calling query up to 14 times (log2 of 10,000), that is 14 × 10,000 = 140,000 operations per scroll event. At 60fps, this is millions of operations per second — too slow.
Approach 2: Prefix Sum Array — O(1) Query, O(n) Update
The natural next idea: pre-compute a cumulative sum array so lookups are instant:
```typescript
class PrefixSumOffsets {
  private heights: number[];
  private prefix: number[];

  constructor(count: number, estimatedHeight: number) {
    this.heights = new Array(count).fill(estimatedHeight);
    this.prefix = new Array(count + 1);
    this.rebuild();
  }

  private rebuild(): void {
    this.prefix[0] = 0;
    for (let i = 0; i < this.heights.length; i++) {
      this.prefix[i + 1] = this.prefix[i] + this.heights[i];
    }
  }

  query(index: number): number {
    return this.prefix[index]; // O(1) — direct lookup
  }

  update(index: number, height: number): void {
    this.heights[index] = height;
    this.rebuild(); // O(n) — must rebuild entire prefix array
  }

  totalHeight(): number {
    return this.prefix[this.heights.length];
  }
}
```
```
heights: [50, 73, 50, 120, 50]
prefix:  [0, 50, 123, 173, 293, 343]

query(3) = prefix[3] = 173  // offset of item 3, in O(1)
```
Queries are O(1) — perfect for scroll events. But here is the catch: updating a single height requires rebuilding the entire prefix array — O(n). For 10,000 items, each ResizeObserver callback triggers a 10,000-element rebuild. If 20 items are measured in one frame, that is 20 × 10,000 = 200,000 operations. We traded one problem for another.
Approach 3: Fenwick Tree — O(log n) Query, O(log n) Update
And this is where it gets good. A Fenwick tree (also called Binary Indexed Tree) achieves the best of both worlds:
| Operation | Naive | Prefix Sum | Fenwick Tree |
|---|---|---|---|
| Query (offset at index) | O(n) | O(1) | O(log n) |
| Update (height changed) | O(1) | O(n) | O(log n) |
| Total height | O(n) | O(1) | O(log n) |
| Build | O(n) | O(n) | O(n) |
For 10,000 items: query takes ~14 operations, update takes ~14 operations. Both are consistent regardless of which index is involved.
How a Fenwick Tree Works
The trick behind Fenwick trees is beautifully clever. It stores partial sums in an array where each index is responsible for a range of elements determined by the lowest set bit of the index:
```
Index (1-based):  1    2    3    4    5    6    7    8
Lowest set bit:   1    2    1    4    1    2    1    8
Range covered:   [1] [1-2] [3] [1-4] [5] [5-6] [7] [1-8]
```
Index i stores the sum of lowbit(i) elements ending at i, where lowbit(i) = i & (-i).
```
heights: [50, 73, 50, 120, 50, 80, 50, 65]

tree[1] = 50                                 (covers index 1)
tree[2] = 50 + 73 = 123                      (covers indices 1-2)
tree[3] = 50                                 (covers index 3)
tree[4] = 50 + 73 + 50 + 120 = 293           (covers indices 1-4)
tree[5] = 50                                 (covers index 5)
tree[6] = 50 + 80 = 130                      (covers indices 5-6)
tree[7] = 50                                 (covers index 7)
tree[8] = 50+73+50+120+50+80+50+65 = 538     (covers indices 1-8)
```
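The decomposition above can be checked directly. Here is a short standalone sketch (`lowbit` and `buildTree` are illustrative helper names for this check, not part of the implementation below):

```typescript
// lowbit(i) extracts the lowest set bit of i, e.g. lowbit(6) = 2, lowbit(8) = 8.
function lowbit(i: number): number {
  return i & -i;
}

// Build the partial-sum array from scratch: tree[i] covers the
// lowbit(i) elements ending at 1-based position i.
function buildTree(values: number[]): number[] {
  const n = values.length;
  const tree = new Array(n + 1).fill(0);
  for (let i = 1; i <= n; i++) {
    let sum = 0;
    for (let j = i - lowbit(i); j < i; j++) sum += values[j];
    tree[i] = sum;
  }
  return tree;
}

const heights = [50, 73, 50, 120, 50, 80, 50, 65];
const tree = buildTree(heights);
// Matches the table above: tree[4] = 293, tree[6] = 130, tree[8] = 538.
```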
Implementation
```typescript
class FenwickTree {
  private tree: number[];
  private n: number;

  constructor(values: number[]) {
    this.n = values.length;
    this.tree = new Array(this.n + 1).fill(0);
    // Build in O(n): seed each node, then propagate its sum to its parent.
    // (Calling addAt once per element also works, but costs O(n log n).)
    for (let i = 1; i <= this.n; i++) {
      this.tree[i] += values[i - 1];
      const parent = i + (i & -i);
      if (parent <= this.n) this.tree[parent] += this.tree[i];
    }
  }

  // Add delta to position i (1-indexed)
  private addAt(i: number, delta: number): void {
    for (; i <= this.n; i += i & -i) {
      this.tree[i] += delta;
    }
  }

  // Update height at index (0-indexed)
  update(index: number, oldValue: number, newValue: number): void {
    this.addAt(index + 1, newValue - oldValue);
  }

  // Prefix sum: sum of values[0..index-1] (0-indexed, exclusive end)
  query(index: number): number {
    let sum = 0;
    for (let i = index; i > 0; i -= i & -i) {
      sum += this.tree[i];
    }
    return sum;
  }

  // Total sum of all values
  total(): number {
    return this.query(this.n);
  }

  // Binary search: find largest index where prefix sum <= target
  // O(log n) — does NOT call query() repeatedly
  findIndex(targetOffset: number): number {
    let pos = 0;
    // Highest power of 2 <= n
    let bitMask = 1;
    while (bitMask <= this.n) bitMask <<= 1;
    bitMask >>= 1;
    while (bitMask > 0) {
      const next = pos + bitMask;
      if (next <= this.n && this.tree[next] <= targetOffset) {
        pos = next;
        targetOffset -= this.tree[next];
      }
      bitMask >>= 1;
    }
    return pos; // 0-indexed result
  }
}
```
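A quick sanity check on the update/query pair, replayed against the eight example heights. This sketch uses condensed free-function versions of `addAt` and `query` over a plain array so it runs standalone:

```typescript
// Walk up the tree adding delta to every node covering 1-based position i.
function addAt(tree: number[], n: number, i: number, delta: number): void {
  for (; i <= n; i += i & -i) tree[i] += delta;
}

// Sum of the first `index` values (0-indexed, exclusive end).
function query(tree: number[], index: number): number {
  let sum = 0;
  for (let i = index; i > 0; i -= i & -i) sum += tree[i];
  return sum;
}

const heights = [50, 73, 50, 120, 50, 80, 50, 65];
const n = heights.length;
const tree = new Array(n + 1).fill(0);
heights.forEach((h, i) => addAt(tree, n, i + 1, h));

query(tree, 4);              // 293 — offset of item 4, matching the worked table
addAt(tree, n, 2, 90 - 73);  // re-measure item 1 (0-indexed): 73px → 90px
query(tree, 4);              // 310 — every later offset shifts by +17
```

Note how the update touches only log n nodes (here: tree[2], tree[4], tree[8]), yet every affected prefix sum comes out correct.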
Integrating with a Virtual List
```typescript
class VirtualListOffsets {
  private fenwick: FenwickTree;
  private heights: number[];
  private estimatedHeight: number;

  constructor(count: number, estimatedHeight: number) {
    this.estimatedHeight = estimatedHeight;
    this.heights = new Array(count).fill(estimatedHeight);
    this.fenwick = new FenwickTree(this.heights);
  }

  // Called by ResizeObserver when an item's actual height is measured
  updateHeight(index: number, measuredHeight: number): void {
    const oldHeight = this.heights[index];
    if (oldHeight === measuredHeight) return; // No change
    this.heights[index] = measuredHeight;
    this.fenwick.update(index, oldHeight, measuredHeight);
  }

  // Get scroll offset for item at index
  getOffset(index: number): number {
    return this.fenwick.query(index); // O(log n)
  }

  // Find which item is at a given scroll position
  findItemAtOffset(scrollTop: number): number {
    return this.fenwick.findIndex(scrollTop); // O(log n)
  }

  // Total scroll height (for scrollbar)
  getTotalHeight(): number {
    return this.fenwick.total();
  }
}
```
Usage in the virtual list scroll handler:
```typescript
function onScroll(scrollTop: number) {
  // O(log n) — find which item is at this scroll position
  const startIndex = offsets.findItemAtOffset(scrollTop);
  // O(log n) — get pixel offset of that item
  const startOffset = offsets.getOffset(startIndex);
  // Render visible items from startIndex
  renderVisibleItems(startIndex, startOffset);
}
```
Why not a balanced BST?
A balanced binary search tree (red-black tree, AVL tree) also gives O(log n) query and update. But it requires node allocation (garbage collection pressure), pointer traversal (cache-unfriendly), and complex rebalancing logic. A Fenwick tree is a flat array — cache-friendly, zero allocations after construction, trivial implementation (~20 lines), and constant factors 3-5x smaller than tree-based alternatives. For the virtual list use case where n is fixed and only values change, Fenwick trees are the optimal choice.
Comparison of Approaches
For a list of 10,000 items at 60fps:
| Approach | Query/frame | Update/frame | Viable at 60fps? |
|---|---|---|---|
| Naive sum | ~140K ops | 0 ops | No — query too slow |
| Prefix sum | 14 ops | ~10K ops per update | Marginal — updates during measurement bursts lag |
| Fenwick tree | ~200 ops | ~14 ops per update | Yes — both operations are fast |
At 100,000 items, the Fenwick tree advantage becomes decisive: query takes ~17 operations regardless, while naive summation takes 50,000+ operations per binary search step.
The findIndex Operation: Fenwick Binary Search
Spoiler: this is the operation that matters most. Finding which item is at a given scroll offset runs on every scroll event to determine the visible range. The Fenwick tree supports an O(log n) binary search without calling query() repeatedly:
```typescript
// Find the item at scroll position 15,000px.
// Traditional approach: binary search calling query() → O(log² n)
// Fenwick binary search: single pass → O(log n)
findIndex(targetOffset: number): number {
  let pos = 0;
  // Start from the highest power of 2 <= n
  let bitMask = 1 << Math.floor(Math.log2(this.n));
  while (bitMask > 0) {
    const next = pos + bitMask;
    if (next <= this.n && this.tree[next] <= targetOffset) {
      pos = next;
      targetOffset -= this.tree[next];
    }
    bitMask >>= 1;
  }
  return pos;
}
```
This walks down the implicit tree structure one bit at a time. At each level it tries to extend pos by the current power of two: if the candidate node's partial sum still fits under the remaining target, it steps past that node and subtracts its sum; otherwise the target lies inside that node's range, so it descends to a smaller step. It touches exactly log2(n) entries — the same cost as a single query.
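Tracing the descent on the eight-item example makes this concrete. The sketch below rewrites findIndex as a free function over a plain array so it runs standalone; the hard-coded tree matches the partial sums worked out earlier:

```typescript
// 1-indexed partial-sum array for heights [50, 73, 50, 120, 50, 80, 50, 65].
const tree = [0, 50, 123, 50, 293, 50, 130, 50, 538];

// Largest 0-indexed position whose prefix sum is <= targetOffset.
function findIndex(targetOffset: number, tree: number[], n: number): number {
  let pos = 0;
  let bitMask = 1 << Math.floor(Math.log2(n));
  while (bitMask > 0) {
    const next = pos + bitMask;
    if (next <= n && tree[next] <= targetOffset) {
      pos = next;
      targetOffset -= tree[next];
    }
    bitMask >>= 1;
  }
  return pos;
}

// Offset 200px: skip tree[8]=538 and tree[4]=293 (too big), take
// tree[2]=123 (remainder 77), take tree[3]=50 (remainder 27) → item 3.
// Checks out: prefix sums satisfy 173 <= 200 < 293.
findIndex(200, tree, 8); // → 3
```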
1. The offset calculation problem in dynamic virtual lists: O(n) naive summation is too slow, and O(1) prefix sums pay O(n) per update.
2. Fenwick trees (Binary Indexed Trees) give O(log n) for both query and update — the optimal trade-off for virtual list offsets.
3. A Fenwick tree is a flat array using bit manipulation (i & -i) to navigate partial sums. No pointers, no allocations, cache-friendly.
4. The findIndex operation (scroll position → item index) can be done in O(log n) using Fenwick binary descent, not O(log² n) with repeated queries.
5. For 10,000 items: a Fenwick query is ~14 ops; a naive sum averages ~5,000 ops. The gap widens with more items.
6. Batch height updates when possible, but Fenwick trees handle individual updates efficiently enough for per-item ResizeObserver callbacks.
7. Fenwick trees are the standard data structure for virtual list offset management in production libraries.
Q: You are building a virtual list with dynamic item heights. Scrolling is janky because offset calculation is slow. Explain how you would use a Fenwick tree to fix this.
A strong answer explains: the heights array stores each item's height, and the Fenwick tree stores partial sums enabling O(log n) prefix-sum queries (sum of heights 0 to i-1). When ResizeObserver measures a new height, update the Fenwick tree in O(log n). On scroll, use Fenwick binary descent to find the start item index in O(log n). Describe the bit manipulation: i & (-i) extracts the lowest set bit, which determines each node's range. Compare to the alternatives: naive O(n) summation and prefix sums with O(n) updates. Mention that the Fenwick tree is a flat array — no GC pressure, cache-friendly, trivial to implement.