Robots.txt Generator

Create SEO-friendly robots.txt files to control crawler access

  • User-Agent - specify which crawler to target (* for all, Googlebot, Bingbot, etc.)
  • Disallowed Paths - enter paths to block (one per line); leave empty to allow all
  • Allowed Paths - enter paths to explicitly allow (one per line)
  • Sitemap URL (optional) - add your sitemap URL for better indexing
  • Crawl Delay (optional) - set the delay between requests (0 = no delay)

Bulk Robots.txt Validator

Validate multiple robots.txt URLs at once

Enter one website URL per line (max 10)


Robots.txt Generator Help

Create SEO-friendly robots.txt files to control crawler access for Google, Bing, and other search engine bots. A properly configured robots.txt file helps search engines understand which pages to crawl and index.

What is robots.txt?

The robots.txt file is a text file placed in your website's root directory that tells search engine crawlers which pages or sections of your site they should or shouldn't access. It's part of the Robots Exclusion Protocol (REP).
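
At its simplest, the file is one or more User-agent groups, each followed by Disallow (and optionally Allow) rules. For example, this minimal file blocks every crawler from one directory:

User-agent: *
Disallow: /private/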

How to Use This Tool

  1. Choose a User-Agent (* for all crawlers, or a specific one such as Googlebot)
  2. Enter paths to disallow (e.g., /admin/, /private/)
  3. Optionally add paths to explicitly allow
  4. Add your sitemap URL for better indexing
  5. Set a crawl delay if needed (optional)
  6. Click "Generate Robots.txt" (a sketch of the generation logic appears after these steps)
  7. Copy or download the generated file
  8. Upload it to your website's root directory
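
Under the hood, generation is plain text assembly. Here is a minimal Python sketch of what a tool like this produces; the function and parameter names are illustrative, not this tool's actual code:

def generate_robots_txt(user_agent="*", disallow=(), allow=(),
                        sitemap=None, crawl_delay=0):
    # Build the directive group for the chosen crawler.
    lines = [f"User-agent: {user_agent}"]
    lines += [f"Disallow: {p}" for p in disallow]
    lines += [f"Allow: {p}" for p in allow]
    if crawl_delay:
        lines.append(f"Crawl-delay: {crawl_delay}")
    if sitemap:
        # Sitemap is a standalone directive, so separate it with a blank line.
        lines += ["", f"Sitemap: {sitemap}"]
    return "\n".join(lines) + "\n"

print(generate_robots_txt(disallow=["/admin/", "/private/"],
                          sitemap="https://example.com/sitemap.xml"))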

Common User-Agents

  • * - all crawlers
  • Googlebot - Google's web crawler
  • Googlebot-Image - Google's image crawler
  • Bingbot - Bing's web crawler
  • Slurp - Yahoo's crawler
  • DuckDuckBot - DuckDuckGo's crawler
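
A crawler obeys the group with the most specific matching user-agent and ignores the rest, so one file can give different bots different rules. For example (paths are illustrative):

User-agent: Googlebot
Disallow: /drafts/

User-agent: *
Disallow: /admin/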

Common Patterns

Block Admin Area

User-agent: *
Disallow: /admin/
Disallow: /wp-admin/

Allow All with Sitemap

User-agent: *
Disallow:

Sitemap: https://example.com/sitemap.xml

Block Everything

User-agent: *
Disallow: /
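
Limit Crawl Rate

Crawl-delay asks a bot to wait the given number of seconds between requests. It is a non-standard directive: Bingbot honors it, but Googlebot ignores it.

User-agent: *
Crawl-delay: 10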

Best Practices

  • Always include a sitemap reference
  • Place robots.txt in your root directory (example.com/robots.txt)
  • Test your robots.txt using Google Search Console, or locally with the parser sketch after this list
  • Use specific paths rather than blocking entire sections when possible
  • Remember: robots.txt doesn't guarantee privacy (use proper authentication)
  • Keep it simple - over-complicated rules can confuse crawlers
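
Beyond Search Console, you can sanity-check your rules locally with Python's standard-library robots.txt parser. A quick sketch (the domain is a placeholder):

from urllib.robotparser import RobotFileParser

# Point the parser at the live file (placeholder domain).
rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse

# Ask whether a given crawler may fetch a given URL.
print(rp.can_fetch("Googlebot", "https://example.com/admin/page"))
print(rp.can_fetch("*", "https://example.com/"))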

Common Paths to Block

  • /admin/ - Administrative areas
  • /wp-admin/ - WordPress admin (except admin-ajax.php; see the pattern after this list)
  • /cgi-bin/ - CGI scripts
  • /tmp/ - Temporary files
  • /private/ - Private directories
  • /*.pdf$ - PDF files (if you don't want them indexed)
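
The /wp-admin/ entry shows why Allow exists: you can block a directory while re-opening one file inside it, as in this common WordPress pattern:

User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

The /*.pdf$ entry relies on wildcard matching: * matches any sequence of characters and $ anchors the end of the URL. Googlebot and Bingbot both support these patterns.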

⚠️ Important Note

Blocking pages in robots.txt doesn't prevent them from appearing in search results if they're linked from other sites. For true privacy, use password protection or noindex meta tags.
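
For example, a page you want crawlable but kept out of results can carry this tag in its <head>:

<meta name="robots" content="noindex">

Note that a crawler must be able to fetch the page to see the tag, so don't also block that page in robots.txt.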

💡 Pro Tip

After uploading your robots.txt, verify it's accessible at yourdomain.com/robots.txt and test it using Google Search Console's robots.txt Tester tool.
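
A quick way to confirm the file is being served, sketched with Python's standard library (placeholder domain):

import urllib.request

# Fetch the live file; urlopen raises an error on 404 or other failures.
with urllib.request.urlopen("https://example.com/robots.txt") as resp:
    print(resp.status)                 # expect 200
    print(resp.read().decode()[:200])  # first characters of the served file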

Usage Limits

Plan | Daily Limit | Best For