Scrapeman
Built for scraping engineers

The HTTP client that doesn't slow you down.

Send requests, inspect responses, replay scraping sessions. No account. No cloud. No subscription. Local-first. Always yours.

GET https://api.scrape.do/v1/scrape
Requests
GET Scrape Target
POST Auth Token
GET Product List
GET Price History
200 OK 142ms · 8.4 KB
{
  "status": "success",
  "url": "https://shop.example.com/products",
  "content": "<!DOCTYPE html>...",
  "resolvedUrl": "https://shop.example.com/products?page=1",
  "cookies": [
    { "name": "session_id", "value": "abc123" }
  ]
}
No account required · No telemetry · Stores data locally · Free & open source
0 accounts required
100% local storage
2MB+ response bodies
5 auth schemes
0 telemetry

Everything a scraping engineer needs.
Nothing you don't.

Built from real pain points. Every feature exists because a scraping engineer hit a wall in another client.

Zero config

Gzip · Brotli · Deflate

Response bodies are automatically decompressed. No manual Accept-Encoding juggling — just readable content, always.
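Under the hood, auto-decompression just means dispatching on the `Content-Encoding` header. A minimal sketch (not Scrapeman's actual code; the `decode_body` helper is hypothetical, and `br` support assumes the third-party `brotli` package):

```python
import gzip
import zlib

def decode_body(body: bytes, encoding: str) -> bytes:
    """Decompress a response body based on its Content-Encoding header."""
    if encoding == "gzip":
        return gzip.decompress(body)
    if encoding == "deflate":
        try:
            return zlib.decompress(body)
        except zlib.error:
            # Some servers send raw deflate without the zlib wrapper.
            return zlib.decompress(body, wbits=-zlib.MAX_WBITS)
    if encoding == "br":
        import brotli  # third-party: pip install brotli
        return brotli.decompress(body)
    return body  # identity or unknown encoding: pass through unchanged

# Example: a gzip-compressed body comes back as readable content.
compressed = gzip.compress(b'{"status": "success"}')
print(decode_body(compressed, "gzip"))
```

The client advertises `Accept-Encoding: gzip, br, deflate` and runs every body through a decoder like this, so you only ever see the decoded bytes.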

Real-time

SSE Streaming

Watch Server-Sent Events arrive in real time. Full event log with data, id, and event type — perfect for debugging AI APIs.
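The SSE wire format is line-based: `data:`, `event:`, and `id:` fields accumulate until a blank line dispatches the event. A simplified parser sketch (the `parse_sse` function is illustrative, not Scrapeman's implementation, and omits `retry:` and comment lines):

```python
def parse_sse(stream: str) -> list[dict]:
    """Parse a Server-Sent Events stream into (event, id, data) records."""
    events = []
    event_type, event_id, data_lines = "message", None, []
    for line in stream.splitlines():
        if line == "":  # blank line dispatches the accumulated event
            if data_lines:
                events.append({"event": event_type, "id": event_id,
                               "data": "\n".join(data_lines)})
            event_type, data_lines = "message", []  # last id persists
        elif line.startswith("data:"):
            data_lines.append(line[5:].lstrip())
        elif line.startswith("event:"):
            event_type = line[6:].strip()
        elif line.startswith("id:"):
            event_id = line[3:].strip()
    return events

raw = "event: token\nid: 1\ndata: Hello\n\ndata: world\n\n"
print(parse_sse(raw))
```

Feeding this parser each chunk as it arrives is what makes a live event log possible: every dispatched record shows its `event` type, `id`, and joined `data` payload.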

Persists restarts

Persistent Cookie Jar

Cookies survive restarts. Your scraping session state is saved to disk automatically — no re-login on every run.
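The idea is write-through persistence: every cookie mutation is flushed to a file, so a fresh process can reload the jar. A minimal sketch under assumed names (the `CookieJar` class and file path are illustrative; a real jar would also track domain, path, and expiry):

```python
import json
import tempfile
from pathlib import Path

class CookieJar:
    """A minimal disk-backed jar: cookies are written through to disk."""
    def __init__(self, path: Path):
        self.path = path
        self.cookies = json.loads(path.read_text()) if path.exists() else {}

    def set(self, name: str, value: str) -> None:
        self.cookies[name] = value
        self.path.write_text(json.dumps(self.cookies))  # persist on every write

path = Path(tempfile.gettempdir()) / "scrapeman_cookies.json"
path.unlink(missing_ok=True)  # start clean for the demo
CookieJar(path).set("session_id", "abc123f4")
# A second "run" re-reads the same file and sees the saved session.
print(CookieJar(path).cookies)  # → {'session_id': 'abc123f4'}
```

Because the jar is rebuilt from disk on startup, session cookies like `session_id` or `cf_clearance` are already in place before your first replayed request.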

Postman parity

Auto-Headers Engine

Content-Type, Accept, and User-Agent are set automatically based on request type. Override or disable per-request.
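An auto-headers engine is a precedence rule: defaults derived from the request shape, with per-request overrides winning. A sketch under assumed names (the `auto_headers` function and the `Scrapeman/1.0` User-Agent string are hypothetical):

```python
def auto_headers(method: str, body=None, overrides=None) -> dict:
    """Derive default headers from the request shape; overrides always win."""
    headers = {
        "Accept": "*/*",
        "User-Agent": "Scrapeman/1.0",  # hypothetical default UA
    }
    if body is not None:
        # Structured bodies get a JSON content type; raw bytes stay binary.
        headers["Content-Type"] = ("application/json"
                                   if isinstance(body, (dict, list))
                                   else "application/octet-stream")
    headers.update(overrides or {})  # per-request overrides replace defaults
    return headers

print(auto_headers("POST", body={"q": "scrape"}))
print(auto_headers("GET", overrides={"User-Agent": "custom/2.0"}))
```

Disabling the engine per-request is just skipping the defaults and sending overrides alone.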

2MB+ bodies

Large Response Handling

2MB+ response bodies are captured and can be saved directly to disk. No more browser tab crashes on big payloads.
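Handling big payloads comes down to streaming the body to disk in fixed-size chunks instead of buffering it whole. A sketch with an assumed `save_body` helper (here exercised with an in-memory stream standing in for a network response):

```python
import io
import tempfile
from pathlib import Path

def save_body(stream, path: Path, chunk_size: int = 64 * 1024) -> int:
    """Stream a response body to disk in chunks; returns bytes written."""
    written = 0
    with open(path, "wb") as out:
        while chunk := stream.read(chunk_size):
            out.write(chunk)
            written += len(chunk)
    return written

# Simulate a 2 MB payload with an in-memory stream.
payload = io.BytesIO(b"x" * (2 * 1024 * 1024))
target = Path(tempfile.gettempdir()) / "body.bin"
print(save_body(payload, target))  # → 2097152
```

Peak memory stays at one chunk (64 KB here) regardless of body size, which is why a multi-MB response never locks up the UI.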

5 schemes

Every Auth Scheme

Basic, Bearer, API Key, OAuth2 client credentials, and AWS SigV4. Tokens are cached and auto-refreshed before expiry.
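Token caching with auto-refresh means re-fetching shortly *before* expiry rather than after a 401. A sketch of the pattern (the `TokenCache` class is illustrative; `fetch` stands in for a real OAuth2 client-credentials request):

```python
import time

class TokenCache:
    """Cache an access token and refresh it before it expires."""
    def __init__(self, fetch, leeway: float = 30.0):
        self.fetch = fetch        # callable returning (token, expires_in_seconds)
        self.leeway = leeway      # refresh this many seconds before expiry
        self.token, self.expires_at = None, 0.0

    def get(self) -> str:
        if time.monotonic() >= self.expires_at - self.leeway:
            self.token, expires_in = self.fetch()
            self.expires_at = time.monotonic() + expires_in
        return self.token

calls = []
cache = TokenCache(lambda: (calls.append(1) or f"tok{len(calls)}", 3600))
print(cache.get(), cache.get())  # second call is served from the cache
```

The leeway window is what prevents a request from going out with a token that expires mid-flight.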

Designed around the
scraping workflow.

1

Fire your first request

Paste a URL, pick a method, hit send. Scrapeman sets Content-Type, Accept, and User-Agent automatically — exactly like a real browser would.

2

Inspect the response

Gzip, Brotli, Deflate — all decoded automatically. Large responses don't choke the UI. Save multi-MB payloads straight to disk.

3

Replay with state

Cookies persist between restarts. OAuth2 tokens are cached. Your scraping session survives a reboot — pick up exactly where you left off.

response · 200 OK · 142ms
# Request
GET https://api.scrape.do/v1/scrape?url=...
# Auto-headers (Scrapeman managed)
Accept-Encoding: gzip, br, deflate
Content-Type: application/json
Authorization: Bearer ey••••••••
# Response (auto-decompressed)
✓ 200 OK · gzip → decoded · 8.4 KB
# Cookies saved
session_id = abc123f4 (persisted)
cf_clearance = x9k2m••• (persisted)
Ready

How Scrapeman stacks up

We picked the features that actually matter for scraping workflows.

| Feature | Scrapeman | Postman | Bruno | Insomnia |
| --- | --- | --- | --- | --- |
| Local storage (no cloud) | Yes | No | Yes | No |
| No account required | Yes | No | Yes | No |
| Auto-decompress gzip/brotli | Yes | No | No | No |
| Persistent cookie jar | Yes | Yes | No | Yes |
| SSE event streaming | Yes | No | No | No |
| 2MB+ body + save to disk | Yes | No | No | No |
| AWS SigV4 auth | Yes | Yes | Yes | Yes |
| OAuth2 client credentials | Yes | Yes | Yes | Yes |
| Auto-set Content-Type | Yes | Yes | No | Yes |
| Free forever | Yes | No | Yes | No |
| Open source | Yes | No | Yes | No |

Start scraping
without compromise.

Free. Local-first. No account. No subscription. No data ever leaves your machine.

Download for macOS
Windows & Linux: coming soon

macOS 13.0+ · Apple Silicon & Intel · Free forever

Your data, your machine

All requests, responses, and cookies live on your disk. Nothing is uploaded anywhere.

No subscription cliff

Free forever. No trial period. No premium tier that locks your history behind a paywall.

Open source

Audit the code, contribute a feature, or fork it. Built in the open on GitHub.