What is Robots.txt?
Robots.txt is a plain-text file that tells search engine crawlers which pages they may access and which to avoid. It lives at yoursite.com/robots.txt.
Default Configuration
Lindo.ai generates a default robots.txt for every site; it typically matches the allow-all configuration shown below.
Common Configurations
Allow All (Default)
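Assuming standard robots.txt syntax, an allow-all file with a sitemap reference looks like this (the sitemap URL is a placeholder for your own):

```
User-agent: *
Allow: /

Sitemap: https://yoursite.com/sitemap.xml
```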
Block Specific Pages
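To keep crawlers out of specific pages or folders, add one Disallow rule per path. The paths below are placeholders:

```
User-agent: *
Disallow: /admin/
Disallow: /thank-you
```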
Block All Crawlers
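Disallowing the root path blocks the entire site for every compliant crawler:

```
User-agent: *
Disallow: /
```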
Block Specific Bots
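Rules under a named User-agent apply only to that bot. AhrefsBot and SemrushBot below are illustrative; substitute whichever bots you want to exclude:

```
User-agent: AhrefsBot
Disallow: /

User-agent: SemrushBot
Disallow: /
```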
Allow Only Google
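Pair an allow rule for Googlebot with a catch-all disallow for everyone else:

```
User-agent: Googlebot
Allow: /

User-agent: *
Disallow: /
```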
Configuring in Lindo.ai
Enable Manual Control
1. Open the website editor
2. Go to Settings → SEO
3. Find the Robots.txt section
4. Toggle “Manual Robots.txt” on
5. Edit the content
6. Save your changes
Best Practices
- Always include a sitemap reference
- Test changes before publishing
- Don’t block CSS/JS files; search engines need them to render your pages
- Use specific paths rather than broad wildcard patterns (see the example below)
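A minimal file that follows all four practices, with an illustrative path and sitemap URL:

```
User-agent: *
Disallow: /checkout/

Sitemap: https://yoursite.com/sitemap.xml
```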
Common Directives
| Directive | Purpose |
|---|---|
| User-agent | Names the crawler the following rules apply to |
| Allow | Permits access to a path |
| Disallow | Blocks access to a path |
| Sitemap | Points to the sitemap location |
| Crawl-delay | Sets a pause between requests (not all crawlers support it; Google ignores it) |
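Crawl-delay takes a number of seconds to wait between requests, for crawlers that honor it (Bing does; Google does not):

```
User-agent: Bingbot
Crawl-delay: 10
```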
Testing Robots.txt
Google Search Console
1. Go to Search Console
2. Select your property
3. Open the robots.txt report (under Settings), which replaced the older robots.txt Tester
4. Check for fetch errors and warnings
Manual Testing
Visit yoursite.com/robots.txt in a browser to verify the live content.
Important Notes
- Robots.txt is advisory: well-behaved crawlers follow it, but nothing enforces it
- Sensitive content needs real authentication, not just a Disallow rule
- Changes may take time to reflect, since crawlers re-fetch the file periodically
- Don’t use it for security; the file is public, so Disallow rules advertise the very paths you want hidden