Data Fetching Patterns

Expert · 17 min read

The Waterfall Problem Nobody Talks About

Open your browser DevTools, go to the Network tab, and watch your app load. See those requests that start one after another, each waiting for the previous to finish? That is a waterfall. And it is the single biggest performance killer in most frontend apps.

A three-request waterfall on a 200ms latency connection costs 600ms before any data reaches the screen. The same three requests in parallel? 200ms. You just tripled your loading speed by changing the fetching pattern, not the backend.

Mental Model

Think of cooking dinner. Fetch-on-render is cooking sequentially: boil water, wait until done, then chop vegetables, wait until done, then heat the pan. Fetch-then-render is a sous chef who preps everything before you enter the kitchen. Render-as-you-fetch is the most efficient: you start boiling water, start chopping, and start heating the pan simultaneously -- each dish appears on the table as soon as it is ready, not after everything finishes.

Pattern 1: Fetch-on-Render

The most common pattern. Components fetch their own data when they mount.

"use client";

import { useState, useEffect } from "react";

function CourseList() {
  const [courses, setCourses] = useState<Course[]>([]);
  const [loading, setLoading] = useState(true);

  useEffect(() => {
    fetch("/api/courses")
      .then((res) => res.json())
      .then((data) => {
        setCourses(data);
        setLoading(false);
      });
  }, []);

  if (loading) return <Skeleton />;

  return (
    <div>
      {courses.map((course) => (
        <CourseCard key={course.id} course={course} />
      ))}
    </div>
  );
}

The problem: if CourseCard also fetches data (author info, progress), that fetch only starts AFTER the course list renders. This creates a waterfall:

Request 1: GET /courses          |████████████|
Request 2: GET /courses/1/author              |████████████|
Request 3: GET /courses/1/progress                         |████████████|
                                 0ms    200ms   400ms   600ms

Each request waits for the previous component to render before it can start. The user stares at a spinner for 600ms instead of 200ms.
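The cost is easy to measure with a small simulation. The fetchers below are stand-ins (not real APIs) that simulate a 150ms round trip each; the only difference between the two functions is when the detail fetches start:

```typescript
// Stand-in fetchers that each simulate a 150ms network round trip.
const delay = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function fetchCourseIds(): Promise<number[]> {
  await delay(150);
  return [1, 2, 3];
}

async function fetchAuthor(courseId: number): Promise<string> {
  await delay(150);
  return `author-of-${courseId}`;
}

// Waterfall: each detail fetch starts only after the previous one finishes,
// mimicking children that fetch in useEffect one render at a time.
async function waterfall(): Promise<number> {
  const start = Date.now();
  const ids = await fetchCourseIds();
  for (const id of ids) {
    await fetchAuthor(id); // sequential: ~150ms per item
  }
  return Date.now() - start; // ~600ms total (150 + 3 × 150)
}

// Parallel: the list fetch is unavoidable, but all detail fetches fire at once.
async function parallel(): Promise<number> {
  const start = Date.now();
  const ids = await fetchCourseIds();
  await Promise.all(ids.map((id) => fetchAuthor(id)));
  return Date.now() - start; // ~300ms total (150 + 150)
}
```

With three items the waterfall takes roughly twice as long; with ten items the gap grows to roughly 5x, because the parallel version stays at two round trips no matter how many items there are.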

Quiz
A parent component fetches a list, then each child fetches details. On a 150ms latency connection with 10 items, what is the minimum time before all data is loaded?

Pattern 2: Fetch-then-Render

Fetch all data before rendering any of the page. Popular with loaders in React Router and Next.js.

async function CourseDashboardPage() {
  const [courses, progress, recommendations] = await Promise.all([
    getCourses(),
    getUserProgress(),
    getRecommendations(),
  ]);

  return (
    <DashboardLayout>
      <ProgressSummary progress={progress} />
      <CourseGrid courses={courses} />
      <RecommendationCarousel items={recommendations} />
    </DashboardLayout>
  );
}

All three requests fire in parallel via Promise.all. The page renders after ALL data is available.

Strengths:

  • Zero waterfalls -- all data loads in parallel
  • Simple mental model -- data is available when components render
  • Works naturally with Server Components

Weakness:

  • All-or-nothing: the page is blank until the slowest request finishes. If recommendations take 2 seconds but courses take 200ms, the user waits 2 seconds to see anything.

GET /courses           |████|
GET /progress          |██████████|
GET /recommendations   |████████████████████████████████|
Page renders:                                           |████|
                       0ms        500ms       1000ms    2000ms

The user sees nothing for 2 seconds even though courses were ready in 200ms.
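The all-or-nothing behavior falls straight out of Promise.all: it resolves only when the slowest promise does. A minimal sketch with made-up 20/50/200ms latencies:

```typescript
// Resolve with `value` after `ms` milliseconds -- a stand-in for a fetch.
function after<T>(ms: number, value: T): Promise<T> {
  return new Promise((resolve) => setTimeout(() => resolve(value), ms));
}

async function fetchThenRender(): Promise<number> {
  const start = Date.now();
  // All three start together, but rendering waits for ALL of them.
  await Promise.all([
    after(20, ["intro-to-ts"]),     // courses: ready at ~20ms
    after(50, { completed: 4 }),    // progress: ready at ~50ms
    after(200, ["advanced-react"]), // recommendations: the slowest request
  ]);
  return Date.now() - start; // ~200ms: gated entirely by the slowest request
}
```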

Pattern 3: Render-as-You-Fetch (Suspense)

The best of both worlds. Start all fetches immediately, but render each section as soon as ITS data arrives.

import { Suspense } from "react";

async function CourseDashboardPage() {
  const coursesPromise = getCourses();
  const progressPromise = getUserProgress();
  const recommendationsPromise = getRecommendations();

  return (
    <DashboardLayout>
      <Suspense fallback={<ProgressSummarySkeleton />}>
        <ProgressSummary progressPromise={progressPromise} />
      </Suspense>
      <Suspense fallback={<CourseGridSkeleton />}>
        <CourseGrid coursesPromise={coursesPromise} />
      </Suspense>
      <Suspense fallback={<RecommendationSkeleton />}>
        <RecommendationCarousel recommendationsPromise={recommendationsPromise} />
      </Suspense>
    </DashboardLayout>
  );
}

async function CourseGrid({ coursesPromise }: { coursesPromise: Promise<Course[]> }) {
  const courses = await coursesPromise;

  return (
    <div className="grid grid-cols-3 gap-6">
      {courses.map((course) => (
        <CourseCard key={course.id} course={course} />
      ))}
    </div>
  );
}

Now the timeline looks like this:

GET /courses           |████|
GET /progress          |██████████|
GET /recommendations   |████████████████████████████████|
CourseGrid renders:         |████|
ProgressSummary renders:              |████|
Recommendations renders:                                |████|
                       0ms        500ms       1000ms    2000ms

Courses appear at 200ms. Progress at 500ms. Recommendations at 2 seconds. The user sees progressive content, not a blank page.

Quiz
What is the key difference between fetch-then-render with Promise.all and render-as-you-fetch with Suspense?

Parallel Fetching: Eliminating Waterfalls

The simplest optimization: start independent fetches at the same time.

async function getCoursePage(courseId: string) {
  const [course, topics, reviews] = await Promise.all([
    getCourse(courseId),
    getTopics(courseId),
    getReviews(courseId),
  ]);

  return { course, topics, reviews };
}

Promise.all fails fast

If any promise rejects, Promise.all rejects immediately and discards the other results. Use Promise.allSettled when you want partial results even if some requests fail. For a dashboard where recommendations failing should not hide the course list, Promise.allSettled is the right choice.

async function DashboardData() {
  const results = await Promise.allSettled([
    getCourses(),
    getProgress(),
    getRecommendations(),
  ]);

  return {
    courses: results[0].status === "fulfilled" ? results[0].value : [],
    progress: results[1].status === "fulfilled" ? results[1].value : null,
    recommendations: results[2].status === "fulfilled" ? results[2].value : [],
  };
}
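To see the degradation concretely, here is a self-contained run of the same shape where the progress request fails. The fetchers are stubs standing in for the real ones:

```typescript
type Dashboard = { courses: string[]; progress: number | null; recommendations: string[] };

// Stub fetchers: progress is down, the other two succeed.
const getCourses = () => Promise.resolve(["intro-to-ts", "advanced-react"]);
const getProgress = (): Promise<number> => Promise.reject(new Error("progress service down"));
const getRecommendations = () => Promise.resolve(["testing-101"]);

async function loadDashboard(): Promise<Dashboard> {
  // Promise.all would reject here and discard the courses that DID load.
  const [courses, progress, recs] = await Promise.allSettled([
    getCourses(),
    getProgress(),
    getRecommendations(),
  ]);
  return {
    courses: courses.status === "fulfilled" ? courses.value : [],
    progress: progress.status === "fulfilled" ? progress.value : null,
    recommendations: recs.status === "fulfilled" ? recs.value : [],
  };
}
```

The course list and recommendations render normally; only the progress widget falls back to its empty state.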

Waterfall Detection

How do you find waterfalls? Two approaches:

1. Network tab analysis: Filter by XHR/Fetch. If requests start in a staircase pattern, you have a waterfall.

2. Code analysis: Look for fetches inside useEffect of child components that depend on parent data. The pattern parent fetches list → child fetches details is always a waterfall.

Execution Trace

Waterfall pattern:
Parent mounts → useEffect fires → fetch('/courses') → response → setState → children mount → child useEffect fires → fetch('/courses/1/author')
Each fetch waits for the previous component to render.

Parallel pattern:
Page starts → Promise.all([getCourses(), getAuthors()]) → both requests in flight → both resolve → render with all data
No component mounting delay between fetches.

Streaming pattern:
Page starts → start all fetches → render shell → Suspense boundary 1 resolves → stream HTML → Suspense boundary 2 resolves → stream HTML
Progressive rendering as data arrives.

Prefetching: Loading Data Before the User Asks

The fastest request is one the user never waits for.

Prefetch on Hover

"use client";

import { useRouter } from "next/navigation";
import Link from "next/link";
import type { ReactNode } from "react";

function CourseLink({ courseId, children }: { courseId: string; children: ReactNode }) {
  const router = useRouter();

  function handleMouseEnter() {
    router.prefetch(`/courses/${courseId}`);
  }

  return (
    <Link href={`/courses/${courseId}`} onMouseEnter={handleMouseEnter}>
      {children}
    </Link>
  );
}

Next.js automatically prefetches links that are visible in the viewport. But for dynamically generated links or important navigation targets, explicit prefetch on hover gives you a 100-300ms head start (average hover-to-click time).

Prefetch on Intent

Even smarter: prefetch when the user shows intent, not just hover.

"use client";

import Link from "next/link";

// prefetchCourseData is an app-specific helper that warms the cache for a
// course (e.g. a router prefetch or a query-client prefetch).
function SearchResults({ results }: { results: Course[] }) {
  return (
    <ul>
      {results.map((course) => (
        <li
          key={course.id}
          onMouseDown={() => {
            prefetchCourseData(course.id);
          }}
        >
          <Link href={`/courses/${course.id}`}>{course.title}</Link>
        </li>
      ))}
    </ul>
  );
}

onMouseDown fires ~100ms before onClick (the time between pressing and releasing the mouse button). That is 100ms of free prefetching.

Quiz
What is the advantage of prefetching on mousedown instead of click?

Stale-While-Revalidate

Show cached (potentially stale) data immediately, then revalidate in the background.

User visits page:
  1. Show cached data instantly (stale but fast)
  2. Fire background request to get fresh data
  3. When fresh data arrives, update the UI silently
  4. Cache the fresh data for next visit
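The four steps above fit in a few lines of plain TypeScript. This is a toy cache, not what TanStack Query or SWR actually ship, but it shows the serve-stale-then-refresh flow:

```typescript
type Entry<T> = { value: T; fetchedAt: number };

// Toy stale-while-revalidate cache: serves cached entries instantly and
// refreshes stale ones in the background.
class SwrCache<T> {
  private entries = new Map<string, Entry<T>>();

  constructor(
    private fetcher: (key: string) => Promise<T>,
    private staleTimeMs: number,
  ) {}

  async get(key: string, onUpdate?: (fresh: T) => void): Promise<T> {
    const entry = this.entries.get(key);
    if (entry) {
      if (Date.now() - entry.fetchedAt > this.staleTimeMs) {
        // Stale: return the old value now, revalidate in the background.
        this.fetcher(key).then((fresh) => {
          this.entries.set(key, { value: fresh, fetchedAt: Date.now() });
          onUpdate?.(fresh); // the "update the UI silently" step
        });
      }
      return entry.value; // instant, possibly stale
    }
    // First visit: nothing cached, so the caller has to wait.
    const value = await this.fetcher(key);
    this.entries.set(key, { value, fetchedAt: Date.now() });
    return value;
  }
}
```

Only the very first read pays the network cost; every later read returns immediately, and the onUpdate callback is where a real library would re-render with the fresh data.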

This pattern is the foundation of TanStack Query (React Query) and SWR:

"use client";

import { useQuery } from "@tanstack/react-query";

function CourseList() {
  const { data: courses, isLoading, isStale } = useQuery({
    queryKey: ["courses"],
    queryFn: () => fetch("/api/courses").then((r) => r.json()),
    staleTime: 5 * 60 * 1000,
  });

  if (isLoading || !courses) return <Skeleton />;

  return (
    <div className={isStale ? "opacity-75" : ""}>
      {courses.map((course: Course) => (
        <CourseCard key={course.id} course={course} />
      ))}
    </div>
  );
}

The user never sees a loading spinner on revisit. Data appears instantly. If it is stale, it updates silently in the background.

Common Trap

Stale-while-revalidate can cause subtle bugs with stale data. If a user completes a quiz and navigates back to the dashboard, the cached progress might show the old value. Either invalidate the cache on mutation (queryClient.invalidateQueries({ queryKey: ["progress"] })) or use optimistic updates that immediately reflect the expected state.

Choosing the Right Pattern

| Pattern | Initial Load | Subsequent Loads | Complexity | Best For |
| --- | --- | --- | --- | --- |
| Fetch-on-render (useEffect) | Waterfall risk | Full reload | Low | Simple pages with one data source |
| Fetch-then-render (loader) | No waterfall, all-or-nothing | Full reload | Low | Server Components, data-heavy pages |
| Render-as-you-fetch (Suspense) | No waterfall, progressive | Streaming | Medium | Complex dashboards with mixed latency |
| Stale-while-revalidate | Instant (cached) or slow (first) | Instant from cache | Medium | Frequently revisited pages |
| Prefetch on intent | Near-instant (if prefetched) | Near-instant | Low | Navigation-heavy apps with predictable paths |
Key Rules

  1. Default to Server Components with parallel fetching (Promise.all) for new pages
  2. Use Suspense boundaries to stream sections with different latencies independently
  3. Prefetch on hover/intent for navigation-heavy apps -- every 100ms saved is perceived speed
  4. Use stale-while-revalidate for data that updates frequently but does not need real-time accuracy
  5. Never let a child component fetch data that the parent could have fetched in parallel
What developers do → What they should do

  • Using useEffect for data fetching in Next.js App Router → Using Server Components with async/await or Suspense for data fetching
    useEffect fetching in App Router ignores the server rendering pipeline. Data fetches on the client create waterfalls and hurt Core Web Vitals. Server Components fetch on the server with zero client JavaScript.

  • Using Promise.all for requests where one failure should not block the others → Using Promise.allSettled for independent requests with graceful degradation
    Promise.all rejects on the first failure. A slow recommendations API should not prevent courses from rendering. Promise.allSettled returns all results regardless of individual failures.

  • Caching everything with stale-while-revalidate without an invalidation strategy → Invalidating relevant caches on mutations and using short staleTime for frequently changing data
    Without invalidation, users see stale data after performing actions. Completing a quiz should immediately update the progress display, not show old cached progress.