Bystrobot: Yandex's high‑speed crawler
What Yandex's Bystrobot is, how it works, how to speed up indexing via Yandex.Webmaster, and how Bystrobot differs from the main crawler.
Bystrobot (literally 'fast robot') is Yandex's specialized high‑speed crawler designed for fast indexing of frequently updated pages (news, blogs, forums). It ignores most technical restrictions and checks pages within minutes or hours of publication.
What is Bystrobot
Bystrobot is a specialized Yandex crawler built for fast indexing of frequently updated pages such as news, blogs, and forums. Unlike the main crawler, which visits sites at intervals ranging from several hours to several weeks, Bystrobot may return to a page every few minutes. It is intended for sites where content freshness is critical.
How Bystrobot works
How Bystrobot works in practice:
- Bystrobot ignores directives in robots.txt for the sake of speed, though it still respects noindex meta tags.
- It checks pages using HTTP headers, especially Last-Modified and Cache-Control, to determine if content has changed.
- If the page hasn't changed, Bystrobot leaves and returns less often. If it changed, the page may be sent to the index immediately.
- Bystrobot does not crawl the entire site — it focuses on individual URLs considered important for fast indexing.
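The header-based change check and adaptive revisit schedule described above can be sketched as follows. The function names, the 5-minute revisit interval, and the 240-minute back-off cap are illustrative assumptions, not details of Yandex's actual implementation:

```python
from email.utils import parsedate_to_datetime

def has_changed(last_modified: str, last_seen: str) -> bool:
    """Compare the page's Last-Modified header against the timestamp
    recorded on the crawler's previous visit (both in HTTP-date format)."""
    return parsedate_to_datetime(last_modified) > parsedate_to_datetime(last_seen)

def next_visit_minutes(changed: bool, current_interval: int) -> int:
    """Illustrative adaptive scheduling: revisit soon after a change,
    back off (doubling up to a cap) when the page is unchanged."""
    return 5 if changed else min(current_interval * 2, 240)
```

For example, `has_changed("Wed, 01 May 2024 12:00:00 GMT", "Wed, 01 May 2024 09:00:00 GMT")` returns `True`, and an unchanged page already on a 200-minute interval stays capped at 240 minutes.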
How to speed up indexing via Bystrobot
You can explicitly request Yandex to accelerate crawling using the 'Recrawl pages' tool in Yandex.Webmaster.
- Go to Yandex.Webmaster → 'Indexing' → 'Recrawl pages'.
- Enter up to 10 URLs for accelerated recrawling.
- Select 'Urgent recrawl'; Bystrobot will then visit the pages within a few hours (usually 2–4).
- Confirm the request.
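If you prefer automation over the web interface, Yandex also exposes a recrawl queue through the Yandex.Webmaster API. The sketch below only builds the request without sending it; the v4 endpoint path and payload shape are my assumptions about the API, so verify them against the current API documentation before relying on them:

```python
import json

# Assumed v4 API root; confirm against current Yandex.Webmaster API docs.
API_ROOT = "https://api.webmaster.yandex.net/v4"

def build_recrawl_request(user_id: str, host_id: str, url: str, token: str) -> dict:
    """Build (but do not send) a POST request asking Yandex to recrawl
    one URL. host_id is the Webmaster host identifier, e.g. 'https:example.com:443'."""
    return {
        "method": "POST",
        "url": f"{API_ROOT}/user/{user_id}/hosts/{host_id}/recrawl/queue",
        "headers": {
            "Authorization": f"OAuth {token}",  # OAuth token from Yandex
            "Content-Type": "application/json",
        },
        "body": json.dumps({"url": url}),
    }
```

The returned dict can be passed to any HTTP client; keeping request construction separate from sending also makes the quota-limited call easy to test.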
Additional ways to attract Bystrobot:
- Use an RSS feed or XML Sitemap with the <lastmod> attribute and frequent updates.
- Add links to new pages on the homepage or sections that the main crawler often visits.
- Ensure the server responds quickly (under 0.2 seconds for news pages).
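A minimal sitemap with the <lastmod> element mentioned above can be generated like this (the URLs and dates in the example are placeholders):

```python
from xml.sax.saxutils import escape

def build_sitemap(entries) -> str:
    """entries: iterable of (url, lastmod) pairs, lastmod in YYYY-MM-DD.
    Returns a minimal XML sitemap with <lastmod> on every URL."""
    urls = "\n".join(
        f"  <url>\n    <loc>{escape(url)}</loc>\n"
        f"    <lastmod>{lastmod}</lastmod>\n  </url>"
        for url, lastmod in entries
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{urls}\n</urlset>"
    )
```

Regenerate the sitemap whenever content changes, so that the <lastmod> values actually reflect fresh pages; a sitemap whose dates never move gives the crawler no signal.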
Bystrobot vs main crawler
Yandex's main crawler (identified as 'YandexBot' in logs) crawls sites on a planned schedule, respects robots.txt, and observes crawl delays. It can scan thousands of pages, but slowly. Bystrobot, by contrast:
- Is more aggressive in request frequency (may make several requests per minute).
- Typically only visits dynamic URLs (with parameters, frequently updated).
- Ignores robots.txt (but does not ignore meta robots if present).
- Is not used for large‑scale crawling — only for checking changes on individual pages.
In server logs, Bystrobot presents the same 'YandexBot' User-Agent as the main crawler (e.g. 'YandexBot/3.0'), so there is no exact separation by User-Agent; what distinguishes it is behavior, such as request frequency and the URLs it revisits.
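Since the User-Agent alone does not separate Bystrobot from the main crawler, one practical heuristic is to count YandexBot requests per URL in the access log: paths hit many times in a short window suggest fast-robot behavior. The regular expression below assumes a common combined-log format and should be adjusted to your server's actual log layout:

```python
import re
from collections import Counter

# Matches the request path in a combined-format access log line;
# adjust the pattern to your server's actual log format.
LINE_RE = re.compile(r'"GET (?P<path>\S+)')

def yandex_hits_per_path(lines) -> Counter:
    """Count YandexBot requests per URL path across log lines."""
    hits = Counter()
    for line in lines:
        if "YandexBot" not in line:
            continue  # skip other clients entirely
        m = LINE_RE.search(line)
        if m:
            hits[m.group("path")] += 1
    return hits
```

Running this over a short time slice of the log (say, one hour) and sorting the counter makes repeatedly revisited URLs stand out immediately.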