Google Trends Scraping: API Options and DIY Methods (2026)

Google Trends is one of the best free signals for:

  • rising topics
  • seasonal demand
  • geographic interest
  • related queries you can turn into SEO or product research

But “scraping Google Trends” is also where many scripts fall apart:

  • the UI is dynamic
  • endpoints are undocumented
  • rate limiting can be aggressive

This 2026 guide covers:

  1. API options (official-ish and third-party)
  2. popular unofficial libraries (and what breaks)
  3. a DIY method that fetches interest over time and related queries
  4. practical stability tips: throttling, retries, proxy rotation

Keep Trends data collection stable with a proxy layer

Google Trends is sensitive to bursty traffic. If you’re collecting lots of keywords/regions, ProxiesAPI can help by providing a stable proxy layer so throttling + retries work as intended.


First: what data do you actually need?

Most Trends projects need one or more of:

  • Interest over time (time series)
  • Interest by region (geo)
  • Related topics / related queries
  • Trending searches (daily or realtime)

Before you pick an approach, decide:

  • which countries/regions you need
  • how often you will refresh (hourly? daily?)
  • how many keywords you’ll track

Your requirements determine whether a lightweight library is enough or you need a more robust pipeline.


Option A: Is there an official Google Trends API?

Google does not provide a simple, public, stable “Trends REST API” like many developers expect.

There are some Google products/APIs that can complement Trends, but they are not a drop-in replacement.

So in practice, most teams choose one of:

  • third-party Trends APIs
  • an unofficial library (which replicates the web app)
  • a DIY method against web endpoints

Option B: Third-party APIs (fastest path)

If you need reliability and don’t want to maintain scraping code, third-party APIs can be worth it.

A good third-party Trends API typically offers:

  • stable endpoints and contracts
  • higher throughput
  • support and SLAs

Downsides:

  • cost
  • vendor lock-in
  • less flexibility for experimental fields

If you’re building a product on top of Trends data, paying for an API can be cheaper than owning maintenance.


Option C: Unofficial libraries (pytrends, etc.)

The most common Python approach is an unofficial wrapper that mimics the web app.

Pros:

  • very fast to get started
  • easy to integrate in notebooks

Cons:

  • breaks when Google changes internal endpoints
  • rate limiting can still stop you
  • harder to debug when responses change format

If you use these libraries, treat them as:

  • good for prototypes
  • acceptable for low-volume usage
  • risky for high-volume production pipelines

Option D: DIY method (production-minded)

A robust DIY approach is:

  1. Call an “explore” endpoint to get tokens/widgets
  2. Use those tokens to fetch JSON data for each widget
  3. Repeat with careful throttling and retries

This is how many libraries work internally. The difference is: you’ll write it in a way that’s observable and resilient.

The stability mindset

Your scraper should:

  • sleep between requests (jitter)
  • retry transient failures with backoff
  • save raw responses for debugging
  • persist results incrementally (don’t lose progress)

And if you scale (many keywords/regions), a proxy layer can help.
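The last two habits can be sketched as small helpers. The file paths and function names below are illustrative, not from any library:

```python
import json
import time
from pathlib import Path

RAW_DIR = Path("raw_responses")   # where raw payloads go (illustrative path)
OUT_PATH = Path("trends.jsonl")   # incremental results file (illustrative path)


def save_raw(keyword: str, text: str, raw_dir: Path = RAW_DIR) -> Path:
    # Keep the raw payload on disk so a parsing failure can be replayed later.
    raw_dir.mkdir(parents=True, exist_ok=True)
    path = raw_dir / f"{keyword.replace(' ', '_')}_{int(time.time())}.txt"
    path.write_text(text, encoding="utf-8")
    return path


def append_jsonl(row: dict, out_path: Path = OUT_PATH) -> None:
    # One JSON object per line: an interrupted run keeps everything written so far.
    with out_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(row, ensure_ascii=False) + "\n")
```

Appending JSONL row-by-row (rather than writing one big file at the end) is what makes progress survive crashes and rate-limit aborts.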


A practical Python skeleton (tokens + widget fetch)

Below is a minimal pattern you can adapt.

Note: Google’s internal endpoints can change. This is intentionally written as a template:

  • keep endpoints centralized
  • keep JSON parsing defensive
  • log and store raw responses when parsing fails

import json
import random
import time
from dataclasses import dataclass
from typing import Any, Optional

import requests
from tenacity import retry, stop_after_attempt, wait_exponential, retry_if_exception_type

TIMEOUT = (10, 35)  # (connect timeout, read timeout) in seconds


@dataclass
class FetchResult:
    url: str
    status_code: int
    text: str


def build_session() -> requests.Session:
    s = requests.Session()

    # If you use ProxiesAPI as an HTTP proxy, wire it here.
    # Example pattern (adjust to your ProxiesAPI docs/account):
    # import os
    # PROXY_URL = os.getenv("PROXIESAPI_PROXY_URL")
    # if PROXY_URL:
    #     s.proxies.update({"http": PROXY_URL, "https": PROXY_URL})

    s.headers.update({
        "Accept": "*/*",
        "Accept-Language": "en-US,en;q=0.9",
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36",
    })
    return s


session = build_session()


@retry(
    reraise=True,
    stop=stop_after_attempt(5),
    wait=wait_exponential(multiplier=1, min=1, max=20),
    retry=retry_if_exception_type((requests.RequestException,)),
)
def http_get(url: str, params: Optional[dict] = None) -> FetchResult:
    time.sleep(random.uniform(0.3, 0.9))
    r = session.get(url, params=params, timeout=TIMEOUT)
    r.raise_for_status()
    return FetchResult(url=r.url, status_code=r.status_code, text=r.text)


def strip_xssi_prefix(text: str) -> str:
    # Many Google endpoints prefix JSON with an anti-XSSI guard like )]}'\n
    # lstrip removes any leading run of those characters; valid JSON starts
    # with { or [, so this cannot eat real payload bytes.
    return text.lstrip(")]}'\n")


def explore(keyword: str, geo: str = "US", timeframe: str = "today 12-m") -> dict:
    url = "https://trends.google.com/trends/api/explore"

    req = {
        "comparisonItem": [{"keyword": keyword, "geo": geo, "time": timeframe}],
        "category": 0,
        "property": "",
    }

    params = {"hl": "en-US", "tz": "-480", "req": json.dumps(req)}
    res = http_get(url, params=params)

    data = json.loads(strip_xssi_prefix(res.text))
    return data


def fetch_widget(widget: dict) -> dict:
    url = "https://trends.google.com/trends/api/widgetdata/multiline"

    # Widget contains a request payload and a token
    params = {
        "hl": "en-US",
        "tz": "-480",
        "req": json.dumps(widget["request"]),
        "token": widget["token"],
    }

    res = http_get(url, params=params)
    return json.loads(strip_xssi_prefix(res.text))


def get_interest_over_time(keyword: str, geo: str = "US") -> dict:
    data = explore(keyword, geo=geo)

    widgets = data.get("widgets", [])
    multiline = None
    for w in widgets:
        if w.get("id") == "TIMESERIES" or w.get("title") == "Interest over time":
            multiline = w
            break

    if not multiline:
        raise RuntimeError("No TIMESERIES widget found (response format may have changed)")

    return fetch_widget(multiline)


if __name__ == "__main__":
    out = get_interest_over_time("credit cards", geo="US")
    print(out.keys())

What the output looks like

The response is JSON with a timeline array. You’ll typically transform it into:

  • timestamp
  • value
  • isPartial

and save as CSV or JSONL.
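Assuming the multiline response keeps the shape the web app currently uses (`default.timelineData` with `time`, `value`, and `isPartial` fields — verify against a saved raw response before relying on it), the transform can look like:

```python
from typing import Any


def timeline_to_rows(widget_json: dict) -> list[dict[str, Any]]:
    # Defensive walk over the assumed shape:
    # {"default": {"timelineData": [{"time": "...", "value": [...], "isPartial": bool}]}}
    points = (widget_json.get("default") or {}).get("timelineData") or []
    rows = []
    for p in points:
        values = p.get("value") or []
        rows.append({
            "timestamp": int(p["time"]) if "time" in p else None,
            "value": values[0] if values else None,
            "isPartial": bool(p.get("isPartial", False)),
        })
    return rows
```

Each row is then trivial to write to CSV with `csv.DictWriter` or to append as JSONL.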


Related queries: the same pattern

Google Trends often returns a “RELATED_QUERIES” widget.

The pattern is the same:

  1. explore → find widget
  2. call the widget endpoint
  3. parse defensively

Why defensively? Because the widget types can differ by:

  • geo
  • timeframe
  • keyword type

Throttling: your real “API key”

If you scrape Trends too fast, you get:

  • response format changes (challenge pages)
  • empty datasets
  • 429/503 bursts

Start conservative:

  • 1 request per second (with jitter)
  • only 1–3 retries per URL
  • a queue that can pause when error rate rises
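One way to sketch those three rules together is a small pacing class; the class name, window size, and thresholds below are all illustrative:

```python
import random
import time
from collections import deque


class AdaptiveThrottle:
    """Jittered pacing that slows down hard when recent requests start failing."""

    def __init__(self, base_delay: float = 1.0, jitter: float = 0.5, window: int = 20,
                 max_error_rate: float = 0.5, pause_seconds: float = 60.0):
        self.base_delay = base_delay
        self.jitter = jitter
        self.max_error_rate = max_error_rate
        self.pause_seconds = pause_seconds
        self.results = deque(maxlen=window)  # recent True/False outcomes

    def record(self, ok: bool) -> None:
        self.results.append(ok)

    def error_rate(self) -> float:
        if not self.results:
            return 0.0
        return 1.0 - sum(self.results) / len(self.results)

    def next_delay(self) -> float:
        delay = self.base_delay + random.uniform(0.0, self.jitter)
        if self.error_rate() > self.max_error_rate:
            delay += self.pause_seconds  # back off hard when errors cluster
        return delay

    def wait(self) -> None:
        time.sleep(self.next_delay())
```

Call `record(True/False)` after each request and `wait()` before the next one; when too many recent requests fail, the loop effectively pauses itself.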

When you might need proxies

If you’re pulling:

  • many keywords (hundreds/thousands)
  • multiple geos
  • multiple timeframes

…you’ll likely want a proxy layer so your retries don’t come from the same IP.

ProxiesAPI can provide that proxy layer while your code focuses on:

  • throttling
  • retries
  • parsing
  • exporting
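If you run your own proxy pool instead of (or alongside) a managed layer, per-request rotation can be as simple as cycling through the pool. The helper below is a generic sketch, not a ProxiesAPI API:

```python
import itertools
from typing import Callable, Optional


def make_proxy_rotator(pool: list[str]) -> Callable[[], Optional[dict]]:
    # Returns a function that yields a requests-style proxies dict,
    # rotating through the pool so consecutive retries use different IPs.
    if not pool:
        return lambda: None  # no pool configured: fall back to a direct connection
    cycle = itertools.cycle(pool)

    def next_proxies() -> dict:
        url = next(cycle)
        return {"http": url, "https": url}

    return next_proxies
```

You would then call `session.get(url, proxies=next_proxies(), ...)` inside the fetch helper, so each retry goes out through a different IP.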

Comparison table: approaches in 2026

Approach                  Best for                     Pros                        Cons
Third-party Trends API    Product teams                Stable, higher throughput   Cost, vendor lock-in
Unofficial library        Prototypes, low volume       Fast start                  Breaks, rate limits
DIY endpoints             Serious internal pipelines   Flexible, debuggable        Maintenance burden

Practical checklist

  • Decide dataset (time series vs related queries vs regions)
  • Add timeouts + retries
  • Add jittered throttling
  • Store raw responses when parsing fails
  • Persist results incrementally
  • Add a proxy layer only when scaling forces it

Next upgrades

  • A job queue (Redis/RQ, Celery) for incremental refresh
  • Keyword batching and prioritization
  • Normalization and storage in a warehouse
  • Alerting on unusual spikes (your real product value)
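The spike-alerting idea can start as a simple z-score check on the series you already collect; the threshold and minimum-history values below are illustrative:

```python
from statistics import mean, stdev


def is_spike(values: list[float], z_threshold: float = 3.0, min_history: int = 8) -> bool:
    # Flag the latest point if it sits far above the historical mean.
    if len(values) < min_history:
        return False  # not enough history to judge
    history, latest = values[:-1], values[-1]
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return latest > mu  # flat history: any increase is unusual
    return (latest - mu) / sigma > z_threshold
```

Run this over each keyword's interest-over-time series after every refresh and alert on the ones that return True.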

Related guides

How to Scrape Google Search Results with Python (Without Getting Blocked)
A practical SERP scraping workflow in Python: handle consent/interstitials, parse organic results defensively, rotate IPs, backoff on blocks, and export clean results. Includes a ProxiesAPI-backed fetch layer.
How to Scrape E-Commerce Websites: A Practical Guide
A practical playbook for ecommerce scraping: category discovery, pagination patterns, product detail extraction, variants, rate limits, retries, and proxy-backed fetching with ProxiesAPI.
Web Scraping with Python: The Complete 2026 Tutorial
A from-scratch, production-minded guide to web scraping in Python: requests + BeautifulSoup, pagination, retries, caching, proxies, and a reusable scraper template.
How to Scrape LinkedIn Job Postings (Public Jobs) with Python + ProxiesAPI
Collect role, company, location, and posted date from LinkedIn public job pages (no login) using robust HTML parsing, retries, and a clean export format. Includes a real screenshot.