Puppeteer Stealth: How to Avoid Bot Detection (Without Getting Your IP Burned)

If you’re searching for puppeteer stealth, you’re probably here for one reason:

Your script works locally… and gets blocked the moment you run it at scale.

This post is the practical, 2026 guide to stealth:

  • what “stealth” actually means (and what it doesn’t)
  • the most common fingerprint mistakes
  • how to configure Puppeteer to look less like a bot
  • how to avoid burning your IPs
  • when to stop fighting and switch to a different approach

Pair Puppeteer with a stable proxy layer

Stealth isn’t just plugins — it’s consistent network behavior, retries, and not reusing burned IPs. ProxiesAPI helps you rotate IPs and keep crawl coverage stable.


First principles: what sites detect

Modern bot defenses don’t rely on one signal. They blend:

  1. Network signals
    • IP reputation (datacenter vs residential)
    • request rate and burstiness
    • TLS fingerprint / JA3-like signals
  2. Browser fingerprint
    • headless indicators
    • WebGL, canvas, audio
    • fonts, screen size, locale
  3. Behavior
    • instant interactions
    • no scrolling
    • unrealistic navigation
  4. Consistency
    • same IP + new fingerprint every request
    • timezone mismatch with IP region

So “stealth” isn’t a single flag.

It’s a system that keeps your traffic plausible and consistent.


The stealth spectrum (don’t overpay)

Not every target needs a full stealth stack.

Here’s a good mental model:

| Target | Typical defenses | Recommended approach |
| --- | --- | --- |
| Docs/blogs | low | requests + HTML parsing |
| Small e-comm | rate limits | requests + proxies + retries |
| JS-heavy apps | dynamic rendering | Playwright/Puppeteer (headful when needed) |
| High-value marketplaces | advanced | browser + residential proxies + strict pacing |

Many people jump straight to a headless browser plus stealth plugins.

Often the cheaper fix is simply:

  • slow down
  • rotate IPs
  • keep sessions consistent
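The "slow down" part can be as simple as a per-host pacing gate. This is an illustrative sketch (the `makePacer` name and the 2-second default are mine, not from any library); it computes how long to wait before hitting a host again, and you plug that delay into your own sleep or queue:

```javascript
// Minimal per-host pacing sketch: returns the delay (ms) to wait
// before the next request to a given host is allowed.
function makePacer(minGapMs = 2000) {
  const nextAllowed = new Map(); // host -> earliest next-request timestamp

  return function delayFor(host, now = Date.now()) {
    const earliest = nextAllowed.get(host) ?? 0;
    const wait = Math.max(0, earliest - now);
    // Schedule the next slot one gap after whichever is later:
    // the previous slot or the current time.
    nextAllowed.set(host, Math.max(earliest, now) + minGapMs);
    return wait;
  };
}
```

Because the clock is injectable, the logic is easy to unit-test and easy to wire into any crawl loop.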

Core Puppeteer setup (baseline)

Use a recent Chromium, set realistic viewport + locale, and control headless mode.

// package.json deps:
//   npm i puppeteer

import puppeteer from "puppeteer";

const UA =
  "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) " +
  "AppleWebKit/537.36 (KHTML, like Gecko) " +
  "Chrome/123.0.0.0 Safari/537.36";

export async function launchBrowser({ headless = true } = {}) {
  const browser = await puppeteer.launch({
    headless,
    args: [
      "--no-sandbox",
      "--disable-setuid-sandbox",
      "--disable-dev-shm-usage",
      "--lang=en-US,en",
    ],
  });

  const page = await browser.newPage();
  // Keep UA in sync with the bundled Chromium major version;
  // a mismatched UA string is itself a fingerprint signal.
  await page.setUserAgent(UA);
  await page.setViewport({ width: 1366, height: 768 });
  await page.setExtraHTTPHeaders({ "Accept-Language": "en-US,en;q=0.9" });

  return { browser, page };
}

Why this matters

  • Small viewport sizes and weird languages are common automation giveaways.
  • Headless is fine for many sites, but some targets still treat headless differently.

Stealth plugins: useful, not magical

The popular option is puppeteer-extra + stealth plugin.

// npm i puppeteer-extra puppeteer-extra-plugin-stealth

import puppeteer from "puppeteer-extra";
import StealthPlugin from "puppeteer-extra-plugin-stealth";

puppeteer.use(StealthPlugin());

export async function launchStealth({ headless = true } = {}) {
  const browser = await puppeteer.launch({
    headless,
    args: ["--no-sandbox", "--disable-setuid-sandbox", "--lang=en-US,en"],
  });

  const page = await browser.newPage();
  await page.setViewport({ width: 1366, height: 768 });
  return { browser, page };
}

Where it helps:

  • removes a bunch of obvious navigator.webdriver signals
  • patches some headless-specific quirks

Where it doesn’t:

  • bad IP reputation
  • aggressive rate limiting
  • behavior that looks automated

The biggest stealth mistake: “new fingerprint every request”

People rotate everything:

  • user agent
  • viewport
  • timezone
  • language

…on every request.

That often looks more suspicious.

A better model:

  • Create a session profile and reuse it for a while.
  • Rotate IP when you need to, but keep the browser fingerprint stable per session.

Session profile example

export function makeSessionProfile(seed = 1) {
  // Keep it deterministic for a session.
  const viewports = [
    { width: 1366, height: 768 },
    { width: 1440, height: 900 },
    { width: 1536, height: 864 },
  ];

  const vp = viewports[seed % viewports.length];

  return {
    viewport: vp,
    locale: "en-US",
    timezone: "America/New_York",
  };
}
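A profile like this can then be applied once per session. The `applySessionProfile` helper below is a sketch of mine; `setViewport`, `emulateTimezone`, and `setExtraHTTPHeaders` are real Puppeteer `Page` methods:

```javascript
// Sketch: apply a stable session profile to a Puppeteer page.
// Call once after newPage(), then reuse the page for the whole session.
async function applySessionProfile(page, profile) {
  await page.setViewport(profile.viewport);
  await page.emulateTimezone(profile.timezone);
  await page.setExtraHTTPHeaders({
    "Accept-Language": `${profile.locale},${profile.locale.split("-")[0]};q=0.9`,
  });
}
```

Because it only touches the page object, you can swap the profile source (seeded, stored per account, etc.) without changing crawl code.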

Behavior: add pacing and real navigation

If your script:

  • loads a page
  • instantly clicks five things
  • extracts content
  • closes

…that’s bot behavior.

Do this instead:

  • add jitter
  • scroll
  • wait for network to go idle

function sleep(ms) {
  return new Promise((r) => setTimeout(r, ms));
}

function jitter(baseMs, spreadMs = 300) {
  return baseMs + Math.floor(Math.random() * spreadMs);
}

export async function humanize(page) {
  await sleep(jitter(700));
  await page.mouse.move(200, 200);
  await sleep(jitter(400));
  await page.evaluate(() => window.scrollBy(0, 400));
  await sleep(jitter(800));
}

Proxies: how to not burn your IPs

If you only take one thing from this article, make it this:

Stealth without a proxy strategy just burns IPs more slowly.

Practical proxy rules

  • don’t hammer one IP
  • don’t use a single IP across multiple domains simultaneously
  • rotate when you see block signals (403/429, captcha pages)
  • keep a cooldown list for “burned” IPs
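The cooldown list can be very small. Here's an illustrative sketch (the `IpCooldown` class and its defaults are mine, not from any proxy library) that tracks burned IPs and picks a usable one:

```javascript
// Hypothetical cooldown tracker for "burned" IPs.
// Timestamps are injectable so the logic stays deterministic in tests.
class IpCooldown {
  constructor(cooldownMs = 10 * 60 * 1000) {
    this.cooldownMs = cooldownMs;
    this.burnedAt = new Map(); // ip -> timestamp when it was burned
  }

  markBurned(ip, now = Date.now()) {
    this.burnedAt.set(ip, now);
  }

  isUsable(ip, now = Date.now()) {
    const t = this.burnedAt.get(ip);
    return t === undefined || now - t >= this.cooldownMs;
  }

  // Return the first IP not currently cooling down, or null.
  pickUsable(ips, now = Date.now()) {
    return ips.find((ip) => this.isUsable(ip, now)) ?? null;
  }
}
```

Call `markBurned` whenever you see a block signal (403/429, captcha page), and route new sessions through `pickUsable`.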

Using a proxy with Puppeteer

Puppeteer supports a proxy server via launch args:

const browser = await puppeteer.launch({
  headless: true,
  args: [
    "--no-sandbox",
    "--disable-setuid-sandbox",
    // Chrome ignores credentials embedded in --proxy-server;
    // pass only host:port here and authenticate per page below.
    "--proxy-server=http://HOST:PORT",
  ],
});

const page = await browser.newPage();
await page.authenticate({ username: "USERNAME", password: "PASSWORD" });

If you’re using ProxiesAPI as your proxy layer, the principle is the same:

  • keep requests stable
  • rotate IPs when blocked
  • avoid bursty traffic patterns
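Rotating on block signals usually pairs with backoff rather than instant retries. This is a hedged sketch (function names and defaults are illustrative) of an exponential-backoff delay plus a retry loop around any request function:

```javascript
// "Equal jitter" exponential backoff: half deterministic, half random.
// `rand` is injectable so tests can be deterministic.
function backoffDelay(attempt, baseMs = 1000, capMs = 30000, rand = Math.random) {
  const exp = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.floor(exp / 2 + rand() * (exp / 2));
}

// Retry a request-like async function on block signals (403/429) or errors.
// This is the natural place to also rotate to a fresh IP between attempts.
async function withRetries(
  doRequest,
  { maxAttempts = 4, sleep = (ms) => new Promise((r) => setTimeout(r, ms)) } = {}
) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      const res = await doRequest();
      if (res.status !== 403 && res.status !== 429) return res;
    } catch {
      // Network errors also fall through to a retry.
    }
    if (attempt < maxAttempts - 1) await sleep(backoffDelay(attempt));
  }
  throw new Error("still blocked after retries");
}
```

The injectable `sleep` keeps tests fast; in production the defaults apply.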

Detection signals you should log

To debug stealth, log these per request:

  • status code
  • final URL (redirects)
  • response size
  • presence of keywords like captcha, verify you are human
  • screenshot on failure

In Puppeteer:

page.on("response", async (res) => {
  const url = res.url();
  const status = res.status();
  if (status >= 400) {
    console.log("HTTP", status, url);
  }
});
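The keyword check from the list above can live in one small helper. The `looksBlocked` function and its keyword list are an illustrative sketch; tune the keywords per target:

```javascript
// Heuristic block-signal check: status codes plus soft-block keywords.
// The keyword list is an example, not exhaustive.
const BLOCK_KEYWORDS = ["captcha", "verify you are human", "access denied"];

function looksBlocked(status, bodyText = "") {
  if (status === 403 || status === 429) return true;
  const lower = bodyText.toLowerCase();
  return BLOCK_KEYWORDS.some((kw) => lower.includes(kw));
}
```

Run it on every response so soft blocks (HTTP 200 with a captcha page) get logged alongside hard ones.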

Comparison table: common stealth tactics

| Tactic | Helps? | When to use | Risk |
| --- | --- | --- | --- |
| Stealth plugin | sometimes | generic bot checks | can break sites |
| Headful mode | often | headless-blocked targets | slower |
| Residential proxies | big help | high-value targets | cost |
| Slow pacing | huge help | almost always | slower throughput |
| Randomize everything | usually no | rarely | looks inconsistent |

When to stop using Puppeteer (and do something else)

Use Puppeteer when you need rendering.

But if your target has usable underlying APIs or structured data:

  • scrape the JSON endpoints
  • parse JSON-LD
  • use server-rendered HTML
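For the JSON-LD case, you often don't need a browser at all. This sketch pulls JSON-LD blocks out of server-rendered HTML with a regex; regex parsing of HTML is fragile, and a real crawler would use a proper HTML parser (e.g. cheerio):

```javascript
// Sketch: extract <script type="application/ld+json"> blocks from raw HTML.
function extractJsonLd(html) {
  const re =
    /<script[^>]*type=["']application\/ld\+json["'][^>]*>([\s\S]*?)<\/script>/gi;
  const out = [];
  for (const m of html.matchAll(re)) {
    try {
      out.push(JSON.parse(m[1]));
    } catch {
      // Skip malformed blocks rather than failing the whole page.
    }
  }
  return out;
}
```

One plain HTTP fetch plus this parse is orders of magnitude cheaper than a rendered page.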

Browsers are expensive. They should be your last resort, not your default.


Where ProxiesAPI fits (honestly)

ProxiesAPI won’t make a badly-behaved bot “undetectable.”

But it helps keep your crawl stable by:

  • rotating IPs
  • reducing repeated failures
  • letting you pace requests without losing coverage

Combine it with realistic sessions, pacing, and failure logging — that’s real puppeteer stealth in 2026.


Related guides

  • Playwright vs Selenium vs Puppeteer for Web Scraping (2026): Speed, Stealth, and When to Use Each — a practical 2026 decision guide comparing the three for scraping: performance, detection risk, ecosystem, and real-world architecture patterns.
  • Data Scraping for E-Commerce: Price Monitoring + Competitive Intel (2026 Playbook) — a tactical workflow for building a price-monitoring pipeline: targets, cadence, dedupe, alerts, and how to keep the crawl stable in 2026.
  • How to Scrape Data Without Getting Blocked: A Practical Playbook — a no-fluff anti-blocking guide: rate limits, fingerprints, retries/backoff, header hygiene, caching, and when proxy rotation (ProxiesAPI) is the simplest fix.
  • Screen Scraping vs API (2026): When to Use Which (Cost, Reliability, Time-to-Data) — a practical decision framework: cost, reliability, time-to-data, maintenance burden, and common failure modes.