Next.js performance tuning: practical fixes for better Lighthouse scores


Achieving strong Lighthouse scores starts with basic optimisations, but reaching exceptional performance takes more than default settings. While Next.js offers powerful tools out of the box, real results come from deliberate decisions across rendering, asset management, and Core Web Vitals.

This blog breaks down seven high-impact areas I focused on while improving performance in a real-world project. These targeted optimisations, from smart image handling and thoughtful rendering strategies to precise bundle analysis and fine-tuned font delivery, lead to significant Lighthouse improvements.

You’ll get a practical sense of how to reduce layout shifts, cut bundle sizes, and debug Web Vitals effectively. Each technique blends Next.js capabilities with core web performance principles to help you cross the 90+ Lighthouse score threshold with confidence.

NOTE: None of the approaches highlighted here are router-specific; they work with both the App Router and the Pages Router.

1. Optimise images

Images often dominate page weight and hurt performance. By leveraging Next.js's <Image> component with modern formats, lazy loading, and responsive sizing, you can slash load times, eliminate layout shifts, and boost Lighthouse scores without sacrificing visual quality. Here’s how to implement these optimisations effectively.

  • next/image: Auto-optimise images with built-in compression, caching, and format conversion.
  • Set dimensions: Prevent layout shifts by defining width/height matching the intrinsic aspect ratio.
  • Responsive sizes: Use sizes to serve optimally scaled images for every viewport.
  • Lazy load offscreen: Skip priority for non-critical images to speed up initial load.
  • WebP/AVIF first: Let Next.js auto-serve modern formats at quality={75} for smaller files.
  • Boost LCP: Mark hero images as priority={true} to preload key content.
<Image
  src="/example.jpg"
  alt="Example"
  width={800}
  height={400}
  sizes="(max-width: 600px) 100vw, (max-width: 1200px) 50vw, 800px" // responsive sizing per viewport
  priority={true} // opt out of the default lazy loading for above-the-fold images
  quality={75} // quality to optimise the image at (75 is the default)
/>

      Pros:

  • Boosts Largest Contentful Paint by preloading key resources.
  • Reduces file sizes by 30-70%, supports modern formats, and simplifies implementation.
  • Eliminates layout shifts (CLS) by reserving space during load.
  • Serves optimally sized images for each device, reducing payload by 20-40% on mobile.
  • Improves initial load time by 20-50% via resource deferral.
  • Achieves 50-80% smaller files than JPEG/PNG with comparable visual quality.

     Cons:

  • Requires static imports or domain whitelisting for external images.
  • Overusing priority may congest initial bandwidth; limit it to 1-2 above-the-fold images.
  • Requires manual dimension tracking for dynamic content.
  • Requires viewport breakpoint planning; improper values may overserve large images.
  • Risk of delayed loading for critical content if misconfigured (use priority strategically).
  • AVIF encoding can increase build times; test browser fallbacks (see the config sketch below).
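
Both the modern-format and external-domain points above are controlled from next.config.js. A minimal sketch, assuming a placeholder image host:

// next.config.js – image format and remote-host configuration (sketch; the hostname is a placeholder)
module.exports = {
  images: {
    formats: ["image/avif", "image/webp"], // prefer modern formats where the browser supports them
    remotePatterns: [
      {
        protocol: "https",
        hostname: "images.example.com", // whitelist external hosts used with <Image>
      },
    ],
  },
};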

2. SSR, SSG & ISR

Choosing between Static Site Generation (SSG), Server-Side Rendering (SSR), and Incremental Static Regeneration (ISR) has a significant impact on performance, especially regarding Lighthouse scores, page speed, and scalability.

SSG (getStaticProps / cached fetch)

  • Pre-renders HTML at build time - Faster load speeds.
  • CDN-cached globally - Ideal for static/mostly-static content.

// Static Site Generation (SSG):
export default function Home() {
  return <main>Static Content</main>
}

// For dynamic data fetching (SSG with data)
async function getData() {
  const res = await fetch('https://api.example.com/data', {
    cache: 'force-cache' //SSG
  })
  return res.json()
}

export default async function Page() {
  const data = await getData()
  return <main>{data.content}</main>
}
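
For the Pages Router, the getStaticProps equivalent is a minimal sketch like this (same placeholder API as above):

// pages/index.js – Pages Router SSG (sketch)
export async function getStaticProps() {
  const res = await fetch("https://api.example.com/data");
  const data = await res.json();

  // Pre-rendered once at build time; adding `revalidate` here turns this into ISR
  return { props: { data } };
}

export default function Home({ data }) {
  return <main>{data.content}</main>;
}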

       Pros:

  • Near-instant page loads (improves LCP, FCP, and TTFB).
  • Can be served from a CDN for global performance.
  • No unnecessary API calls per request, reducing server load.

       Cons:

  • Not suitable when data changes frequently and must be fresh on every request.
  • Not suitable when user-specific data (authentication, dashboards) is required.

SSR (getServerSideProps / uncached fetch)

  • Generates HTML on each request - Ensures dynamic, real-time content (e.g., user-specific data).
  • No CDN caching by default - TTFB may be slower than SSG, but avoids stale content.
  • Critical for SEO-heavy dynamic pages - Product listings, authenticated dashboards, or CMS-driven content.
  • Combine with edge caching - Use Cache-Control headers to mitigate performance costs (see the sketch below).
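
For the edge-caching point above, the Pages Router exposes the response object inside getServerSideProps, so you can set Cache-Control headers yourself. A minimal sketch (the cache durations are placeholders to tune per page):

// pages/products.js – SSR with edge caching via Cache-Control (sketch)
export async function getServerSideProps({ res }) {
  // Cache at the CDN for 60s, then serve stale for up to 5 minutes while revalidating
  res.setHeader(
    "Cache-Control",
    "public, s-maxage=60, stale-while-revalidate=300"
  );

  const data = await fetch("https://api.example.com/data").then((r) => r.json());
  return { props: { data } };
}

export default function Products({ data }) {
  return <main>{data.content}</main>;
}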

       Pros:

  • Always serves fresh content, generated on every request.
  • Generates HTML dynamically on the server, keeping SEO intact for dynamic pages.
  • Best for real-time, frequently changing data (e.g., dashboards, user profiles).

       Cons:

  • Slower TTFB (Time to First Byte) due to real-time server processing.
  • Every request hits the server, increasing infrastructure costs.
  • Not cached by default, meaning slower repeat visits.
// Server-Side Rendering (SSR):
async function getData() {
  const res = await fetch('https://api.example.com/data', {
    cache: 'no-store' // SSR
  })
  return res.json()
}

export default async function Page() {
  const data = await getData()
  return <main>{data.content}</main>
}

ISR (Incremental Static Regeneration) for Dynamic Content

  • ISR allows you to update static pages without rebuilding the whole site.
  • This enables you to get SSG performance but with automatic updates.
// Incremental Static Regeneration (ISR):
async function getData() {
  const res = await fetch('https://api.example.com/data', {
    next: { revalidate: 3600 } // ISR: Revalidate every hour
  })
  return res.json()
}

export default async function Page() {
  const data = await getData()
  return <main>{data.content}</main>
}

       Pros:

  • Near-SSG performance with dynamic freshness (no server-side rendering overhead, faster than SSR).
  • Background regeneration updates stale pages automatically.
  • Reduced API calls vs. SSR – cached pages reuse data until revalidation triggers.
  • No full rebuilds needed for content updates (unlike traditional SSG).

       Cons:

  • Stale content briefly shows for new visitors during revalidation.
  • Updates aren’t instant; there is a revalidation gap (the minimum revalidate interval is 1 second).

3. Code splitting & dynamic imports

  • Code splitting allows Next.js to split JavaScript bundles into smaller chunks and load them only when needed. This improves performance by reducing the initial page load time.

       Pros:

  • Faster initial load - Smaller JavaScript bundles reduce initial page load time.
  • Improved time to interactive (TTI) - Less JavaScript to parse/execute = quicker interactivity.
  • Reduced JS execution time - Only load necessary code for the current view.
  • Better Low-Power Device Performance - Decreased memory usage on mobile/older devices.
  • Optimised resource loading - Chunks load on-demand as users navigate your app.

      Cons:

  • Initial load complexity - Multiple network requests for split chunks may cause slight overhead in high-latency environments.
  • Hydration delay - Dynamically loaded components may briefly show a loading state, affecting perceived performance.
  • Tooling dependency - Requires proper bundler (Webpack) and framework (Next.js) support for optimal splitting.
  • Debugging overhead - More chunks can make source-map debugging slightly harder in development.

  1. Dynamic imports
    • Instead of importing everything at once, Next.js supports on-demand loading of components using next/dynamic.
    • Only loads HeavyComponent when needed instead of bundling it in the initial JavaScript file.
    • Reduces the main bundle size, improving page speed.
import dynamic from "next/dynamic";

// Import component only when needed (client-side)
const HeavyComponent = dynamic(() => import("../components/HeavyComponent"), {
  ssr: false, // Prevents server-side rendering for this component
});

export default function Page() {
  return (
    <div>
      <h1>Code Splitting with Dynamic Import</h1>
      <HeavyComponent />
    </div>
  );
}

  2. Avoid bundling large libraries
    • JavaScript libraries like moment.js, lodash, and chart.js are large and should be loaded only when needed.
    • Improves page load speed, as unnecessary libraries are not included upfront (a sketch follows the table below).

Heavy library → Lighter approach
  • moment.js → date-fns, day.js
  • lodash → Import only what you need: import debounce from 'lodash/debounce'
  • chart.js → Load dynamically: dynamic(() => import('react-chartjs-2'))
  • react-select → headlessui, radix-ui
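
A sketch of what those swaps look like in practice. Component names, the file path, and the chartData shape are illustrative; chart.js/auto is imported first so the chart components are registered:

// pages/stats.js – lighter replacements for heavy libraries (sketch)
import { useMemo } from "react";
import dynamic from "next/dynamic";
import debounce from "lodash/debounce"; // per-method import instead of the whole lodash bundle
import { format } from "date-fns";      // lighter alternative to moment.js

// chart.js + react-chartjs-2 are downloaded only when the chart renders on the client
const LineChart = dynamic(
  () =>
    import("chart.js/auto")                 // registers all chart.js components
      .then(() => import("react-chartjs-2"))
      .then((mod) => mod.Line),
  { ssr: false }
);

export default function StatsPage({ chartData }) {
  // Debounce a search handler so it fires at most once every 300ms
  const onSearch = useMemo(
    () => debounce((value) => console.log("search:", value), 300),
    []
  );

  return (
    <main>
      <h1>Stats for {format(new Date(), "dd MMM yyyy")}</h1>
      <input placeholder="Filter…" onChange={(e) => onSearch(e.target.value)} />
      {/* chartData is assumed to already be in chart.js { labels, datasets } shape */}
      <LineChart data={chartData} />
    </main>
  );
}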

  3. Code splitting with import() (dynamic imports for functions)
    • Next.js automatically code-splits JavaScript files based on import() statements.
    • The function is not loaded until loadHeavyFunction is called (a usage sketch follows the code below).
    • Keeps the initial bundle lightweight.
async function loadHeavyFunction() {
  const { heavyFunction } = await import("../utils/heavyFunction");
  heavyFunction();
}
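
Wiring the helper to a user interaction means the extra chunk is only downloaded when it is actually needed; a minimal sketch (the button label and utility path are illustrative):

// pages/report.js – load the chunk only on click (sketch)
async function loadHeavyFunction() {
  // The ../utils/heavyFunction chunk is fetched only when this handler runs
  const { heavyFunction } = await import("../utils/heavyFunction");
  heavyFunction();
}

export default function ReportButton() {
  return <button onClick={loadHeavyFunction}>Generate report</button>;
}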

4. Visualising bundle heaviness

  • While code splitting and dynamic imports help reduce load times, identifying heavy components/dependencies is crucial. Tools like Next.js’s built-in analyser or FoamTree provide interactive visualisations to pinpoint bottlenecks:
    • Using @next/bundle-analyzer provides a simple, interactive treemap chart.
  • Install @next/bundle-analyzer
npm install @next/bundle-analyzer

  • Configure in next.config.js
const withBundleAnalyzer = require("@next/bundle-analyzer")({
  enabled: process.env.ANALYZE === "true",
});

module.exports = withBundleAnalyzer({});

  • Run the analyser
ANALYZE=true npm run build

  • Using Hierarchy Visualisation Tools like FoamTree:
  1. Install FoamTree (or an Alternative)
npm install @carrotsearch/foamtree
or
yarn add @carrotsearch/foamtree

  2. Generate a bundle analysis report
    • First, generate a Webpack stats file (since Next.js uses Webpack under the hood):
npx next build --profile

  • This creates .next/analyze/ with stats files.

  3. Parse & visualise with FoamTree
    • Create a utility to convert Webpack stats into a FoamTree-compatible hierarchy:
// utils/analyzeBundle.ts
import FoamTree from "@carrotsearch/foamtree";

export async function visualizeBundle() {
  // Fetch Webpack stats (generated from `next build --profile`)
  const stats = await fetch("/.next/analyze/client.json").then((res) =>
    res.json()
  );

  // Transform stats into hierarchical data for FoamTree
  const foamtreeData = {
    groups: Object.entries(stats.chunks).map(([chunkId, chunk]: [string, any]) => ({
      label: `Chunk ${chunkId} (${(chunk.size / 1024).toFixed(2)} KB)`,
      weight: chunk.size,
      groups: chunk.modules.map((module: any) => ({
        label: `${module.name} (${(module.size / 1024).toFixed(2)} KB)`,
        weight: module.size,
      })),
    })),
  };

  // Render FoamTree
  const foamtree = new FoamTree({
    id: "foamtree-visualization",
    dataObject: foamtreeData,
    rolloutDuration: 1000,
    pullbackDuration: 1000,
  });
}

  4. Create a visualisation component
// components/BundleVisualization.tsx
"use client"; // Required since FoamTree uses browser APIs

import { useEffect } from "react";
import { visualizeBundle } from "../utils/analyzeBundle";

export default function BundleVisualization() {
  useEffect(() => {
    visualizeBundle();
  }, []);

  return (
    <div>
      <h3>Bundle Heaviness Visualization</h3>
      <div
        id="foamtree-visualization"
        style={{ width: "100%", height: "600px" }}
      />
    </div>
  );
}

  5. Usage in your app
// app/analyze/page.tsx
import dynamic from "next/dynamic";

// Dynamically load FoamTree (since it's heavy and client-side only)
const BundleVisualization = dynamic(
  () => import("@/components/BundleVisualization"),
  { ssr: false }
);

export default function AnalyzePage() {
  return (
    <div>
      <h1>Performance Analysis</h1>
      <BundleVisualization />
    </div>
  );
}

       Pros:

  • Pinpoint bottlenecks - Interactive treemaps (like @next/bundle-analyzer) expose heavy dependencies at a glance.
  • Data-driven optimisation - Quantify exact bundle impact (KB/MB) of each component/library.
  • No guesswork - Replace assumptions with visual proof of costly third-party code.
  • Built into Next.js - @next/bundle-analyzer requires zero config beyond setup.

       Cons:

  • Setup overhead - FoamTree requires manual Webpack stats parsing (vs. @next/bundle-analyzer’s simplicity).
  • Dev-only utility - Visualisation tools like FoamTree are meant for development-time analysis, not for shipping on production pages.
  • Build-time snapshot - Analyses static bundles, won’t catch runtime lazy-loaded chunks.

5. Minify CSS, JS, and HTML

  1. Next.js automatically minifies CSS, JavaScript, and HTML when running next build.
  • During the build process, Next.js uses:
    • SWC’s minifier (Terser in older versions) - Minifies JavaScript.
    • CSSNano - Minifies CSS.
    • HTML Minifier - Optimises HTML output.
  2. Remove unused CSS (PurgeCSS)
  • Next.js automatically removes unused CSS when using Tailwind CSS or other frameworks with PostCSS.
  • PurgeCSS strips unused selectors from your stylesheets, reducing file sizes and improving page speed (see the config sketch after this list).
    • Reduces CSS file size - Faster load times.
    • Improves Largest Contentful Paint (LCP).
    • Eliminates render-blocking CSS.
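
A minimal sketch of the Tailwind purge setup, including a safelist for class names that are composed at runtime (the paths and patterns are placeholders):

// tailwind.config.js (sketch)
module.exports = {
  // Only class names found in these files survive the production build
  content: ["./app/**/*.{js,ts,jsx,tsx}", "./components/**/*.{js,ts,jsx,tsx}"],
  // Keep dynamically-built classes that the purge step cannot see statically
  safelist: ["text-red-500", { pattern: /^bg-(blue|green)-(100|500)$/ }],
  theme: { extend: {} },
  plugins: [],
};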

       Pros:

  • Automatic Size Reduction - Terser (JS), CSSNano (CSS), and HTML Minifier strip 20-60% of file bloat without manual effort.
  • Built-In Performance Boost - Eliminates render-blocking resources by default in production builds.
  • Dead Code Elimination - PurgeCSS + Tailwind auto-removes unused CSS (up to 90% reduction in CSS size).
  • Improved LCP & TTI - Smaller files = faster parsing/execution (direct Lighthouse impact).

       Cons:

  • Debugging Difficulty - Minified code lacks readable variable names/line breaks (use source maps for debugging).
  • Limited Customisation - Next.js’s default minifiers can’t be configured without ejecting/modifying the build chain.
  • CSS Purge False Positives - PurgeCSS may accidentally remove dynamic classes (requires safelist configuration).

6. Font optimisation

  • Optimising fonts in Next.js helps improve performance, Lighthouse scores, and First Contentful Paint (FCP). By properly preloading and optimising fonts, you ensure that text renders quickly without layout shifts or render-blocking delays.

       Pros:

  • Faster text rendering - next/font eliminates render-blocking requests, improving FCP by 200-500ms.
  • Zero layout shifts - Preloaded fonts with display: swap prevent CLS (Cumulative Layout Shift).
  • Reduced payload - Automatic subsetting removes unused glyphs (e.g., Latin-only fonts are 60% smaller).
  • Privacy-first - Self-hosted fonts avoid third-party tracking (vs. Google Fonts).
  • Built-In best practices - next/font handles preloading, compression, and cache headers automatically.

       Cons:

  • Setup complexity - Manual font hosting requires file management (vs. CDN simplicity).
  • Limited dynamic control - next/font doesn’t support runtime font switching without full reload.
  • FOIT risk - Missing display: swap may cause Flash of Invisible Text on slow networks.

  1. Preload fonts for faster rendering
  • Preloading tells the browser to fetch fonts early, preventing delays in rendering and reducing the Flash of Invisible Text (FOIT).
  • Ensures fonts load quickly and don’t delay rendering.
  • Preloading only applies to fonts used above the fold (e.g., headings).

import { Html, Head, Main, NextScript } from "next/document";

export default function Document() {
  return (
    <Html lang="en">
      <Head>
        {/* Preload Font File */}
        <link rel="preload" href="/fonts/Inter-Variable.woff2" as="font" type="font/woff2" crossOrigin="anonymous" />
      </Head>
      <body>
        <Main />
        <NextScript />
      </body>
    </Html>
  );
}

  2. Use next/font for automatic optimisation
  • next/font is a built-in Next.js feature that automatically optimises fonts, reducing unused styles and enabling subsetting.
  • No need for external requests (e.g., Google Fonts).
  • Loads only the required characters & styles, reducing font file size.
  • Enables font preloading automatically.
  • Improves Lighthouse scores (FCP & LCP).
import { Inter } from "next/font/google";

// Load Inter font with only required subsets
const inter = Inter({ subsets: ["latin"] });

export default function Home() {
  return (
    <div className={inter.className}>
      <h1>Optimized Google Font</h1>
      <p>Text rendered with optimised font loading.</p>
    </div>
  );
}

  3. Using next/font/local for custom fonts
  • If you're using self-hosted fonts, next/font/local is the best option.
  • Self-hosted fonts load faster than fetching from a CDN.
  • No external network requests - Better privacy and performance.
  • Supports font weights, styles, and display properties.
import localFont from "next/font/local";

// Load Inter font locally
const inter = localFont({
  src: "./Inter-Variable.woff2",
  weight: "400",
  style: "normal",
  display: "swap",
});

export default function Home() {
  return (
    <div className={inter.className}>
      <h1>Self-Hosted Font Optimization</h1>
      <p>Using locally hosted fonts for better performance.</p>
    </div>
  );
}

  4. Font display strategies for performance
import { Inter } from "next/font/google";

const inter = Inter({
  subsets: ["latin"],
  display: "swap", // Ensures text is visible immediately
});


Font display value → Effect
  • swap → Immediately renders text with a fallback font, then swaps in the custom font once it loads (best for performance).
  • block → Hides text until the custom font has loaded (bad for FCP).
  • fallback → Similar to swap, but with a short block period (good balance).

7. Core web vitals debugging

Targeted fixes for LCP and CLS go beyond generic optimisations: the goal is to isolate and resolve bottlenecks, layout shifts, and input delays with precision.

  1. Largest Contentful Paint (LCP) fixes
  • Common culprits: slow-loading hero images/videos or late-discovered text blocks.
<Head>
  {/* Preload LCP image */}
  <link
    rel="preload"
    href="/hero.webp"
    as="image"
    imageSrcSet=".../hero-800.webp 800w, .../hero-1200.webp 1200w"
    imageSizes="100vw"
  />

  {/* Preload LCP font */}
  <link
    rel="preload"
    href="/fonts/Inter-Bold.woff2"
    as="font"
    type="font/woff2"
    crossOrigin="anonymous"
  />
</Head>

       Pros:

  • 300-500ms LCP improvement by prioritising critical resources.
  • Prevent late-discovered images with fetchpriority="high".

      Cons:

  • Over-preloading can congest bandwidth (limit to 2-3 resources).

  2. Cumulative Layout Shift (CLS) fixes
  • Common culprits: dynamic content resizing from late-loaded images and async components (a sketch for the async case follows the image example below).
// Images with explicit dimensions
<Image
  src="/product.webp"
  width={600}
  height={400}
  alt="Product"
  style={{
    aspectRatio: '600/400', // modern fallback that matches width/height
    objectFit: 'cover'
  }}
/>
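
For the async-component case, reserve the widget’s footprint before it mounts. A minimal sketch assuming a hypothetical Comments component of roughly known height:

// pages/product.js – reserve space for a late-mounting widget (sketch)
import dynamic from "next/dynamic";

// The placeholder keeps the same height as the widget, so nothing shifts when it mounts
const Comments = dynamic(() => import("../components/Comments"), {
  ssr: false,
  loading: () => <div style={{ minHeight: 320 }} aria-hidden="true" />,
});

export default function ProductPage() {
  return (
    <main>
      {/* ...product content... */}
      <Comments />
    </main>
  );
}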

       Pros:

  • Near-zero CLS with proper aspect ratio enforcement.
  • Predictable rendering for async components.

       Cons:

  • Over-reservation wastes space on mobile (use calc() for responsiveness).

On that note…

Achieving high Lighthouse scores with Next.js is completely attainable when performance is treated as a deliberate part of the development process. The optimisations in this blog, including smart image handling, rendering strategy improvements, bundle analysis, and font delivery tuning, show what becomes possible when you apply Next.js features alongside core web performance principles. 

Each adjustment contributes to a faster, more stable, and more consistent user experience. With tools like next/image, flexible rendering modes, and bundle analysers, Next.js supports a performance-first approach throughout the stack.

Looking ahead, the ecosystem continues to offer new ways to improve. Edge Functions bring faster response times and more efficient request handling. Real user monitoring introduces a more accurate view of how your app performs in the wild. React Server Components help reduce client-side work and improve overall responsiveness. Staying aligned with Next.js release updates and evolving your implementation as best practices shift will help keep performance strong over time.

Performance is about more than technical metrics. It shapes how users interact, how confident they feel using your application, and how easily your product can grow. With consistent attention and the right techniques, it's possible to build experiences that are both fast and reliable.

Measure → Optimise → Dominate


Written by Ananya Rakhecha, Tech Advocate