Check whether your website is blocking important AI crawlers like GPTBot, Claude, and Google Gemini.
Trusted by 1,000+ marketing teams
Your robots.txt is the gatekeeper between your content and AI crawlers. Get it right to maximize visibility and protect what matters.
Decide exactly which AI crawlers—GPTBot, Claude, Gemini—can index your content and which ones to block.
Prevent proprietary pages, staging environments, and internal docs from being scraped by AI training bots.
Allowing the right crawlers helps your brand appear in AI-powered answers on ChatGPT, Perplexity, and more.
As AI search grows, a well-configured robots.txt ensures you're optimized for the next generation of discovery.
Input your domain name. We'll automatically find and fetch your robots.txt file.
Our agent checks your rules against the most popular AI crawlers.
See exactly which AI models can access your content and which are blocked.
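The steps above can be sketched with Python's standard-library `urllib.robotparser` (an illustrative sketch, not the tool's actual implementation; the `check_ai_access` helper and the list of agents are assumptions for the example):

```python
from urllib.robotparser import RobotFileParser

# Common AI crawler user agents (illustrative list, not exhaustive).
AI_AGENTS = ["GPTBot", "ChatGPT-User", "ClaudeBot", "Google-Extended", "PerplexityBot"]

def check_ai_access(robots_txt: str, agents=AI_AGENTS) -> dict:
    """Parse a robots.txt body and report which agents may fetch the site root."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {agent: parser.can_fetch(agent, "/") for agent in agents}

# In practice you would first fetch https://yourdomain.com/robots.txt;
# here we parse an inline example file instead.
sample = """
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""
print(check_ai_access(sample))
```

With this sample file, GPTBot is reported as blocked while the other agents fall through to the `User-agent: *` rule and are allowed.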
Ecosystem
Create a standardized file that helps AI models cite and understand your documentation.
Check your brand's ranking across ChatGPT, Claude, and Perplexity with a free report.
We're building more tools to help you win in the age of AI search.
Allowing AI crawlers can help your content appear in AI search answers (like Perplexity or ChatGPT Search). Blocking them prevents your content from being used to train models, but might reduce your visibility in their real-time results.
Yes. 'Disallow: /' tells a specific bot (or every bot, when placed under 'User-agent: *') not to crawl any page on your site.
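You can verify this behavior with a quick sketch using Python's standard-library `urllib.robotparser`:

```python
from urllib.robotparser import RobotFileParser

# A blanket "Disallow: /" under "User-agent: *" blocks every path for every bot.
rp = RobotFileParser()
rp.parse(["User-agent: *", "Disallow: /"])
print(rp.can_fetch("GPTBot", "/any/page"))  # False: no bot may fetch anything
```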
GPTBot is the crawler OpenAI uses to collect data for training its models. ChatGPT-User is the agent used when a user explicitly asks ChatGPT to browse a specific webpage.
You can add a specific rule for that bot with 'Allow: /'. For example:

User-agent: GPTBot
Allow: /
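Putting these rules together, a robots.txt that blocks training crawlers while staying visible to on-demand browsing might look like this (an illustrative sketch; the staging path and the choice of agents are assumptions to adapt to your own policy):

```
# Block OpenAI's training crawler
User-agent: GPTBot
Disallow: /

# Allow on-demand browsing when a ChatGPT user requests a page
User-agent: ChatGPT-User
Allow: /

# All other bots: allow the site, but keep staging private
User-agent: *
Disallow: /staging/
```

More specific 'User-agent' groups take precedence over the 'User-agent: *' group, so GPTBot and ChatGPT-User follow their own rules here rather than the catch-all.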