Search Bot (Crawler)
A search bot (also called a crawler or spider) is an automated program of a search engine that traverses web pages, downloads their content, and follows links to discover new documents. Crawling and indexing are controlled via robots.txt, the meta robots tag, and the X-Robots-Tag header. Examples: Googlebot, YandexBot, Bingbot.
What is a crawler
A crawler (spider) is a bot that visits pages and passes their content to the search engine's index. Googlebot, YandexBot, and Bingbot are the best known. Bots run continuously, revisiting sites at different frequencies depending on how often their content changes.
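A polite crawler checks a site's robots.txt rules before fetching a URL. A minimal sketch of that check, using Python's standard `urllib.robotparser` (the rules and URLs below are illustrative, not from the original text):

```python
from urllib import robotparser

# Hypothetical robots.txt content for an example site.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
# parse() takes the file's lines, so no network request is needed here;
# a real bot would fetch https://example.com/robots.txt first.
rp.parse(ROBOTS_TXT.splitlines())

print(rp.can_fetch("Googlebot", "https://example.com/blog/post"))  # → True
print(rp.can_fetch("Googlebot", "https://example.com/private/x"))  # → False
```

Real crawlers add much more on top (politeness delays, queueing, rendering), but the allow/disallow check works exactly like this.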
Managing bots
- robots.txt — asks bots not to crawl specified sections of the site (a directive, which major search bots honor).
- meta robots (noindex, nofollow) — controls indexing and link following at the level of an individual page.
- X-Robots-Tag — an HTTP header that applies the same directives to non-HTML files (PDFs, images).
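The three mechanisms above might look like this in practice (the paths and file names are illustrative):

```
# robots.txt — keep all bots out of a section
User-agent: *
Disallow: /admin/

<!-- meta robots, placed in the page's <head> -->
<meta name="robots" content="noindex, nofollow">

# X-Robots-Tag — HTTP response header, e.g. for a PDF
X-Robots-Tag: noindex
```

Note the division of labor: robots.txt restricts crawling, while noindex (in meta robots or X-Robots-Tag) restricts indexing; a page blocked in robots.txt can still appear in results if other sites link to it.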
Crawl budget
Search bots allocate a limited crawl budget to each site, which depends on the site's popularity, size, and server speed. Prioritising pages helps the bot spend that budget on the most important content.
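The idea of spending a fixed budget on the highest-priority pages first can be sketched with a priority queue (a simplified illustration with hypothetical priorities, not an actual search-engine algorithm):

```python
import heapq

def crawl_order(pages, budget):
    """pages: list of (priority, url) tuples, lower number = more important.
    Returns the URLs a bot with the given budget would crawl, best first."""
    heap = list(pages)
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(min(budget, len(heap)))]

pages = [(2, "/blog/old-post"), (0, "/"), (1, "/products")]
print(crawl_order(pages, 2))  # → ['/', '/products']
```

In practice the "priority" signal comes from factors such as internal linking, sitemap hints, and how often a page changes.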