
# SergeBot
SergeBot is the crawler that powers Serge's Agent Readiness Scanner. It checks for the existence of agent-facing files and machine-readable metadata on your domain.
## User-Agent string
Every request from SergeBot identifies itself with this exact User-Agent:
```
SergeBot/1.0 (+https://serge.ai/bot; agent-readiness-scanner)
```

## What SergeBot does
SergeBot only runs when a user initiates a scan. It is not an autonomous crawler and does not spider your site. Each scan makes a small, fixed set of requests to well-known paths:
| Resource | Purpose |
|---|---|
| `/llms.txt` | LLM product description |
| `/llms-full.txt` | Extended LLM documentation |
| `/openapi.json` | OpenAPI specification |
| `/.well-known/agent.json` | A2A agent card |
| `/.well-known/agents.json` | Agent directory card |
| `/robots.txt` | Crawler permissions |
| `/sitemap.xml` | Site structure |
| `/` | Homepage (structured data, JSON-LD) |
| `/docs`, `/api`, `/developers` | Developer hub detection |
| `/pricing` | Pricing page detection |
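The fixed request set above is small enough to reproduce for a self-check before running a scan. A minimal sketch in Python; the path list mirrors the table, and `scan_urls` is an illustrative helper, not part of SergeBot itself:

```python
from urllib.parse import urljoin

# Well-known paths SergeBot requests during a scan (from the table above).
SCAN_PATHS = [
    "/llms.txt", "/llms-full.txt", "/openapi.json",
    "/.well-known/agent.json", "/.well-known/agents.json",
    "/robots.txt", "/sitemap.xml", "/",
    "/docs", "/api", "/developers", "/pricing",
]

def scan_urls(origin: str) -> list[str]:
    """Build the absolute URLs a scan would request for one origin."""
    return [urljoin(origin, path) for path in SCAN_PATHS]

urls = scan_urls("https://example.com")
```

Fetching each URL yourself (for example with `curl`) shows roughly what the scanner will see, though SergeBot's own parsing of each response is not covered here.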
SergeBot also queries external registries (MCP Registry, PulseMCP, npm) for SDK and MCP server presence. These requests do not touch your infrastructure.
## What SergeBot does not do

- It does not crawl autonomously or on a schedule; every scan is user-initiated.
- It does not spider your site or follow links beyond the fixed set of paths listed above.
- Its registry lookups (MCP Registry, PulseMCP, npm) go directly to those services and never touch your infrastructure.
## Rate limits
| Limit | Value |
|---|---|
| Max requests per domain per scan | ~20 |
| Max concurrent requests per domain | 6 |
| Per-request timeout | 8 seconds |
| Scan completes in | < 30 seconds |
SergeBot respects `robots.txt` directives and `Crawl-delay` values.
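How a compliant crawler reads those directives can be sketched with Python's standard `urllib.robotparser`; the `robots.txt` content here is an example, not a recommendation:

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
# Parse an example robots.txt in place of fetching one over HTTP.
rp.parse("""
User-agent: SergeBot
Crawl-delay: 2
Disallow: /private/
""".splitlines())

allowed = rp.can_fetch("SergeBot", "https://example.com/llms.txt")   # True
blocked = rp.can_fetch("SergeBot", "https://example.com/private/x")  # False
delay = rp.crawl_delay("SergeBot")                                   # 2
```

A crawler honoring `Crawl-delay` would sleep `delay` seconds between requests to the same host and skip any path where `can_fetch` returns `False`.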
## Controlling access

SergeBot obeys standard `robots.txt` rules. To explicitly allow it:

```
User-agent: SergeBot
Allow: /
```

To block it:

```
User-agent: SergeBot
Disallow: /
```

If your site blocks SergeBot, scan results will show checks as inconclusive rather than failed. Blocking the scanner also means AI agents using the same paths will likely face the same restrictions.
## Data handling
## Contact
Questions about SergeBot, false positives, or access issues: