Crawl and archive websites using spider.cloud. Browse crawled pages in a sidebar, inspect source HTML in a Monaco editor, and download individual pages or full archives.
Live: archiver.spider.cloud
## Features

- Stream-crawl websites via the Spider API and view results in real time
- Page list sidebar with title, URL, size, and status badges
- Read-only Monaco editor with HTML syntax highlighting
- Copy or download individual page source
- Export All — download every crawled page as a single combined HTML file
- Cross-app switcher to jump between all Spider Cloud tools with the current URL pre-filled
- Configurable crawl limit, return format (raw / markdown / text), and request mode (HTTP / Chrome / Smart)
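Each crawl boils down to a single Spider API request whose body carries these settings. Below is a minimal sketch of how such a payload might be assembled; the field names `limit`, `return_format`, and `request` follow the public Spider API crawl parameters, but the helper function, its defaults, and the types are illustrative assumptions, not this app's actual code.

```typescript
// Hypothetical helper: assembles the JSON body for a Spider crawl request.
// Field names mirror the Spider API's crawl parameters; the defaults here
// are assumptions for illustration only.
type ReturnFormat = "raw" | "markdown" | "text";
type RequestMode = "http" | "chrome" | "smart";

interface CrawlSettings {
  url: string;
  limit?: number;              // max pages to crawl
  returnFormat?: ReturnFormat; // source format returned per page
  requestMode?: RequestMode;   // how pages are fetched
}

function buildCrawlBody(settings: CrawlSettings) {
  return {
    url: settings.url,
    limit: settings.limit ?? 25,
    return_format: settings.returnFormat ?? "raw",
    request: settings.requestMode ?? "smart",
  };
}

// Example: this body would be POSTed to the Spider API with your API key.
const body = buildCrawlBody({ url: "https://example.com", limit: 10 });
console.log(JSON.stringify(body));
```

Keeping the payload construction in one place like this makes it easy to wire the sidebar's settings controls to a single request builder.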
## Requirements

- Node.js 18+
- A spider.cloud account (free tier available)
## Getting Started

Create a `.env.local` file:

```bash
NEXT_PUBLIC_SUPABASE_URL=<your-supabase-url>
NEXT_PUBLIC_SUPABASE_ANON_KEY=<your-supabase-anon-key>
```

Then start the development server:

```bash
npm run dev
# or
yarn dev
# or
pnpm dev
# or
bun dev
```

Open http://localhost:3001 to see the result.
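If either Supabase variable is missing, auth will fail at runtime, so it can be worth failing fast at startup. The `requireEnv` helper below is purely illustrative (it is not part of this repo):

```typescript
// Illustrative helper (not part of this project): throw at startup if a
// required environment variable from .env.local is missing.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage (at module load):
//   const supabaseUrl = requireEnv("NEXT_PUBLIC_SUPABASE_URL");
//   const supabaseAnonKey = requireEnv("NEXT_PUBLIC_SUPABASE_ANON_KEY");
```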
## Tech Stack

- Next.js 14 (App Router)
- React 18
- Monaco Editor via `@monaco-editor/react`
- Tailwind CSS
- Radix UI primitives (Dialog, Select, Toast)
- Supabase auth
- Vercel Analytics
## Other Spider Cloud Tools

| Tool | URL |
|---|---|
| Archiver | archiver.spider.cloud |
| Dead Link Checker | dead-link-checker.spider.cloud |
| A11y Checker | a11y-checker.spider.cloud |
| Knowledge Base | knowledge-base.spider.cloud |
| Perf Runner | perf-runner.spider.cloud |
| Content Translator | content-translator.spider.cloud |
| Diff Monitor | diff-monitor.spider.cloud |
| Sitemap Generator | sitemap-generator.spider.cloud |
| Link Graph | link-graph.spider.cloud |
## License

This project is licensed under the MIT License.
