Scrapeman

Everything.
Nothing extra.

Scrapeman is purpose-built for scraping workflows. No bloat, no team features, no enterprise tiers. Just the best local HTTP client for the job.

Built on undici

Scrapeman's HTTP core uses Node.js's official undici engine — the same layer used by Node's built-in fetch. Fast, spec-compliant, and battle-tested.

All HTTP methods

GET, POST, PUT, PATCH, DELETE, HEAD, OPTIONS — full method support.

Custom headers

Set any header you need, override auto-managed ones, or disable them per request.

Request body types

JSON, form-urlencoded, multipart, raw text, binary — all supported.

Query parameters

URL query builder with automatic encoding and decoding. Paste a full URL and its params auto-parse.
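
Scrapeman's internals aren't shown here, but the auto-parse behavior can be sketched with Node's built-in URL API, which handles encoding and decoding for you:

```javascript
// Sketch of query auto-parsing with Node's built-in URL API.
// Pasting a full URL splits it into a base plus editable params.
const pasted = new URL('https://example.com/search?q=scraping&page=2');

// Each query param becomes an editable key/value pair.
const params = Object.fromEntries(pasted.searchParams);
console.log(params); // { q: 'scraping', page: '2' }

// Edit a param and serialize back; encoding is handled automatically.
pasted.searchParams.set('q', 'http client');
console.log(pasted.toString());
// https://example.com/search?q=http+client&page=2
```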

Proxy support

Route requests through HTTP/HTTPS proxies.

Nothing hidden, nothing lost

Responses are shown fully decompressed and formatted. Large responses are captured without freezing the UI.

Auto-decompression

gzip, brotli, and deflate responses are decoded automatically. No config. No encoding noise.

SSE event streaming

Server-Sent Events displayed in real time — event type, id, data, and timestamp per event.
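
The SSE wire format is simple enough to sketch. This is a minimal parser following the event-stream format (event, id, and data fields, events separated by blank lines); the timestamp is added client-side:

```javascript
// Minimal sketch of Server-Sent Events parsing.
function parseSSE(chunk) {
  const events = [];
  for (const block of chunk.split('\n\n')) {
    if (!block.trim()) continue;
    const ev = { event: 'message', id: null, data: [], ts: Date.now() };
    for (const line of block.split('\n')) {
      const i = line.indexOf(':');
      if (i <= 0) continue; // skip comments (leading ':') and malformed lines
      const field = line.slice(0, i);
      const value = line.slice(i + 1).replace(/^ /, '');
      if (field === 'event') ev.event = value;
      else if (field === 'id') ev.id = value;
      else if (field === 'data') ev.data.push(value);
    }
    events.push({ ...ev, data: ev.data.join('\n') });
  }
  return events;
}

const sample = 'event: tick\nid: 1\ndata: {"n":1}\n\ndata: hello\n\n';
console.log(parseSSE(sample));
```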

2MB+ body handling

Large responses are captured fully but displayed up to 2MB. Save any size to disk with one click.

JSON formatting

JSON responses are pretty-printed with syntax highlighting. Collapse/expand nodes.

Response headers

Full response header table with copy support.

Timing breakdown

DNS, connect, TLS, TTFB, and total — full request lifecycle timing.

Every scheme. No friction.

Auth tokens are managed automatically. Configure a scheme once and Scrapeman handles the rest.

Basic auth

Username and password encoded automatically.
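
The encoding is standard Basic auth (RFC 7617): base64 of `username:password`, prefixed with `Basic `. A one-function sketch:

```javascript
// Sketch of Basic auth header construction per RFC 7617.
function basicAuthHeader(username, password) {
  const encoded = Buffer.from(`${username}:${password}`).toString('base64');
  return `Basic ${encoded}`;
}

console.log(basicAuthHeader('user', 'pass')); // Basic dXNlcjpwYXNz
```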

Bearer token

Set a token and it is added to every request in the collection.

API Key

Send in header or query param, your choice.

OAuth2 client credentials

Token fetched automatically, cached until expiry, refreshed before it expires. Concurrent requests share one in-flight fetch.
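
The cache-and-share pattern can be sketched as follows. `makeTokenSource` and `fetchToken` are illustrative names, not Scrapeman's API; `fetchToken` stands in for the real POST to the token endpoint:

```javascript
// Sketch of client-credentials token caching: cache until expiry, refresh
// early, and share one in-flight fetch between concurrent callers.
function makeTokenSource(fetchToken, earlyRefreshMs = 30_000) {
  let cached = null;   // { accessToken, expiresAt }
  let inFlight = null; // shared Promise while a fetch is running

  return async function getToken() {
    if (cached && Date.now() < cached.expiresAt - earlyRefreshMs) {
      return cached.accessToken; // still fresh, no network call
    }
    if (!inFlight) {
      inFlight = fetchToken()
        .then((t) => {
          cached = {
            accessToken: t.access_token,
            expiresAt: Date.now() + t.expires_in * 1000,
          };
          return cached.accessToken;
        })
        .finally(() => { inFlight = null; });
    }
    return inFlight; // concurrent callers await the same fetch
  };
}

// Demo: three concurrent callers trigger exactly one token fetch.
let calls = 0;
const getToken = makeTokenSource(async () => {
  calls += 1;
  return { access_token: 'abc123', expires_in: 3600 };
});
Promise.all([getToken(), getToken(), getToken()]).then((tokens) => {
  console.log(tokens, 'fetches:', calls); // one fetch shared by all three
});
```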

AWS SigV4

Request signed with your AWS credentials. Region and service configurable per request.

Session state that survives restarts

Cookies are persisted to disk on every change. Your scraping session continues exactly where you left off.

Automatic persistence

Cookies written to disk on every response — not just flushed at quit.

Domain scoping

Cookies matched to domains following RFC 6265 rules.
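
The core of RFC 6265 §5.1.3 domain matching fits in a few lines: a host matches a cookie's Domain attribute if it equals it or is a subdomain of it. A simplified sketch (a full implementation also rejects IP-address hosts):

```javascript
// Sketch of RFC 6265 domain matching for cookie scoping.
function domainMatch(host, cookieDomain) {
  host = host.toLowerCase();
  cookieDomain = cookieDomain.toLowerCase().replace(/^\./, '');
  return host === cookieDomain || host.endsWith(`.${cookieDomain}`);
}

console.log(domainMatch('shop.example.com', 'example.com')); // true
console.log(domainMatch('example.com', 'example.com'));      // true
console.log(domainMatch('badexample.com', 'example.com'));   // false
```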

Session cookies

Both session and persistent cookies handled correctly.

Cookie inspector

View all current cookies, domain by domain, with expiry and flags.

Clear on demand

Reset the cookie jar for a fresh session when needed.

Headers that match real browser behavior

Content-Type, Accept, and User-Agent are set automatically based on request type — just like Postman.

Content-Type inference

Set from request body type automatically. JSON body → application/json.
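
The inference is a straightforward mapping from body type to media type. The table below is illustrative, not Scrapeman's exact internals:

```javascript
// Sketch of Content-Type inference from the selected body type.
const CONTENT_TYPES = {
  json: 'application/json',
  form: 'application/x-www-form-urlencoded',
  multipart: 'multipart/form-data', // real requests append a boundary
  text: 'text/plain',
  binary: 'application/octet-stream',
};

function inferContentType(bodyType) {
  return CONTENT_TYPES[bodyType] ?? null; // null → leave it to the user
}

console.log(inferContentType('json')); // application/json
```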

Accept header

Set to match expected response format.

User-Agent

Scrapeman identifies itself correctly by default. Override to any value.

Preview panel

See exactly which headers will be sent before you fire the request.

Per-header disable

Turn off individual auto-managed headers without losing the others.

Zero data ever leaves your machine

No analytics. No crash reporting. No cloud sync. No account. Your requests and responses stay on your disk.

Local SQLite storage

Request history, responses, environments, and cookies in a local database file.

No telemetry

Zero metrics, analytics, or usage data collected or transmitted.

No account required

Download and use immediately — no registration, no email, no OAuth.

No cloud sync

Nothing is backed up to a third-party server. Your data is your responsibility.

Open source

Verify the claims above by reading the source code on GitHub.

Ready to try it?

Free. Local. No account. Download and run in 30 seconds.

Download Scrapeman