eBay Price Tracker: How to Monitor Prices Automatically (Alerts, History, and Data Model)
If you’ve ever tried to “just track eBay prices”, you quickly learn it’s not a single feature.
A real eBay price tracker needs:
- a repeatable way to collect prices (search pages + item pages)
- normalization (same product sold by multiple sellers, different conditions)
- history storage (so you can chart and compute drops)
- alerting (email/Slack/Telegram)
- reliability (rate limits, blocking, timeouts)
This guide gives you a practical blueprint you can implement in a weekend.
We’ll cover:
- What data to scrape from eBay
- A sane data model for history
- A Python reference crawler (requests + BeautifulSoup)
- Alert logic (price drop thresholds)
- How ProxiesAPI fits when you scale
Price trackers fail when requests fail. ProxiesAPI helps keep your eBay monitoring stable as your watchlist grows and your crawl schedule gets tighter.
What to track (the minimum viable dataset)
There are two core surfaces:
- Search results (broad coverage): item title + price + shipping + condition + URL
- Item detail page (source of truth): exact price, available quantity, seller, condition, sometimes “sold” status
For most trackers, this dataset is enough:
- listing_id (stable identifier from the URL)
- title
- condition
- price
- shipping_price
- total_price (computed)
- currency
- seller
- item_url
- observed_at
Why total_price matters
On eBay, a $10 item with $12 shipping is not a bargain. Track total.
Normalize reality (variants and duplicates)
eBay price tracking isn’t “one product = one price”. Common problems:
- Many sellers list the same product → multiple prices
- “New” vs “Used” conditions
- Different bundle sizes
- Auctions vs Buy It Now
So the trick is to model:
- Watch: the thing the user cares about (a query or a target product)
- Listing: an individual eBay listing
- Observation: a point-in-time snapshot of a listing’s price
That gives you history and alerts without fighting duplicates.
A simple schema (SQLite)
SQLite is perfect for a solo founder price tracker.
-- watches: the user-defined thing to monitor
CREATE TABLE IF NOT EXISTS watches (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  query TEXT NOT NULL,
  min_price REAL,
  max_price REAL,
  condition TEXT,
  created_at TEXT NOT NULL
);

-- listings: discovered listings (stable identity)
CREATE TABLE IF NOT EXISTS listings (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  ebay_listing_id TEXT NOT NULL UNIQUE,
  title TEXT,
  item_url TEXT,
  condition TEXT,
  seller TEXT
);

-- observations: price history over time
CREATE TABLE IF NOT EXISTS observations (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  listing_id INTEGER NOT NULL,
  observed_at TEXT NOT NULL,
  price REAL,
  shipping REAL,
  currency TEXT,
  total REAL,
  FOREIGN KEY(listing_id) REFERENCES listings(id)
);

CREATE INDEX IF NOT EXISTS idx_obs_listing_time ON observations(listing_id, observed_at);
Reference crawler (Python)
We’ll scrape eBay search results because it’s the fastest way to cover many listings.
Setup
pip install requests beautifulsoup4 lxml
Fetch wrapper (with optional ProxiesAPI proxy)
import time
import random
from typing import Optional
import requests
TIMEOUT = (10, 30)
HEADERS = {
    "User-Agent": (
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
        "AppleWebKit/537.36 (KHTML, like Gecko) "
        "Chrome/123.0.0.0 Safari/537.36"
    ),
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.9",
}
session = requests.Session()
def fetch(url: str, *, proxy_url: Optional[str] = None, retries: int = 3) -> str:
    proxies = {"http": proxy_url, "https": proxy_url} if proxy_url else None
    for attempt in range(retries):
        r = session.get(url, headers=HEADERS, timeout=TIMEOUT, proxies=proxies)
        # eBay may rate-limit or hiccup; back off with jitter and retry
        if r.status_code in (429, 500, 502, 503, 504) and attempt < retries - 1:
            time.sleep(5 * (attempt + 1) + random.random() * 3)
            continue
        r.raise_for_status()
        return r.text
Parse search results
eBay search results are typically list items with predictable sub-elements:
- title
- price
- shipping
- link
We’ll parse with multiple fallbacks.
import re
from bs4 import BeautifulSoup

def parse_money(text: str) -> float | None:
    if not text:
        return None
    # supports "US $12.34" or "$12.34"
    m = re.search(r"(\d+[\d,]*\.?\d*)", text.replace(",", ""))
    return float(m.group(1)) if m else None

def extract_listing_id(url: str) -> str | None:
    # Many eBay item URLs contain /itm/<id>
    m = re.search(r"/itm/(?:[^/]+/)?(\d{9,})", url)
    return m.group(1) if m else None

def parse_ebay_search(html: str) -> list[dict]:
    soup = BeautifulSoup(html, "lxml")
    out = []
    for item in soup.select("li.s-item"):
        a = item.select_one("a.s-item__link")
        if not a:
            continue
        url = a.get("href")
        title = item.select_one("div.s-item__title") or item.select_one("h3.s-item__title")
        title_text = title.get_text(" ", strip=True) if title else None
        # eBay often injects a "Shop on eBay" placeholder card; skip it
        if title_text and title_text.lower().startswith("shop on ebay"):
            continue
        price_el = item.select_one("span.s-item__price")
        ship_el = item.select_one("span.s-item__shipping")
        price = parse_money(price_el.get_text(" ", strip=True) if price_el else "")
        shipping = parse_money(ship_el.get_text(" ", strip=True) if ship_el else "")
        listing_id = extract_listing_id(url or "")
        if not listing_id:
            continue
        out.append({
            "ebay_listing_id": listing_id,
            "title": title_text,
            "item_url": url,
            "price": price,
            "shipping": shipping or 0.0,  # "Free shipping" parses as None → 0.0
            "currency": "USD",  # best-effort default
        })
    return out
Crawl a query with pagination
eBay uses _pgn for page number in many cases.
from urllib.parse import urlencode

BASE = "https://www.ebay.com/sch/i.html"

def build_search_url(query: str, page: int = 1) -> str:
    qs = urlencode({"_nkw": query, "_pgn": str(page)})
    return f"{BASE}?{qs}"
def crawl_query(query: str, pages: int = 3, proxy_url: str | None = None) -> list[dict]:
    rows = []
    for p in range(1, pages + 1):
        url = build_search_url(query, page=p)
        html = fetch(url, proxy_url=proxy_url)
        batch = parse_ebay_search(html)
        print("page", p, "items", len(batch))
        if not batch:
            break  # ran out of results; no point sleeping and fetching more
        rows.extend(batch)
        time.sleep(2 + random.random())  # polite delay with jitter
    return rows
Storing observations + alerting
Now the business logic:
- insert new listings
- append price observations
- compute “price dropped” events
Pseudo-logic:
- For each listing, find the last observation
- If the new total is below the last total by at least X% or a fixed currency amount → alert
Alert example rules:
- drop ≥ 10%
- drop ≥ $20
Comparison: tracking approaches (what to use when)
- Scrape search results: broad monitoring, great for deals
- Scrape item pages: accurate, slower (one request per listing)
- Use eBay APIs (if available/allowed): often cleaner, but may be limited or require approval
Most practical trackers do both:
- discover via search
- verify details for candidates
Where ProxiesAPI helps
At small scale (a few queries, once per day), you might not need proxies.
But when you scale to:
- many queries
- frequent refreshes (every 15–60 minutes)
- many geographies
…your main enemy becomes request failure rate.
ProxiesAPI helps by giving you a more reliable network layer so:
- 429/503 failures drop
- retries succeed more often
- your alerts don’t miss a price drop
Practical checklist
- Track total price (item + shipping)
- Store observations with timestamps
- Normalize condition + listing type
- Cache pages and avoid re-fetching too often
- Add backoff + jitter + retries
- Use ProxiesAPI when your watchlist grows
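The caching item on that checklist can be as simple as a timestamped file per URL. A minimal sketch (cached_fetch and the cache directory are illustrative choices, not a library API): if a copy of the page younger than max_age exists on disk, reuse it instead of hitting eBay again.

```python
import hashlib
import time
from pathlib import Path
from typing import Callable

def cached_fetch(url: str, fetch_fn: Callable[[str], str],
                 cache_dir: Path = Path(".cache"),
                 max_age: float = 900.0) -> str:
    """Return a cached copy of the page if fresher than max_age seconds,
    otherwise call fetch_fn(url) and cache the result."""
    cache_dir.mkdir(parents=True, exist_ok=True)
    # hash the URL so query strings don't produce invalid filenames
    key = hashlib.sha256(url.encode("utf-8")).hexdigest()
    path = cache_dir / f"{key}.html"
    if path.exists() and (time.time() - path.stat().st_mtime) < max_age:
        return path.read_text(encoding="utf-8")
    html = fetch_fn(url)
    path.write_text(html, encoding="utf-8")
    return html
```

Pair max_age with your crawl schedule (e.g. 15 minutes if you refresh every 15–60 minutes) so repeated runs, retries, and debugging sessions don't multiply your request volume.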
Next upgrades
- build a small dashboard (Next.js) with charts per watch
- add “sold listings” tracking for price discovery
- dedupe similar listings using normalized titles + embeddings
- schedule crawls via cron and send alerts via Telegram