In the ever-evolving world of search engine optimization (SEO), staying up to date with the latest guidance from Google is crucial. Recently, Google made significant updates to its official Googlebot documentation, specifically regarding the verification process and visits from IP addresses associated with googleusercontent.com. These updates caused some confusion among publishers, leading to misconceptions about bot activity and legitimate visits from Google. In this comprehensive guide, we will delve into the details of these updates and provide the essential information you need to understand and navigate Googlebot verification.

Understanding Googlebot and User-Triggered Fetchers

Googlebot is Google’s search crawler, responsible for indexing web pages and gathering information to display in search results. However, there are different types of Google bots, including special-case crawlers and user-triggered fetchers. The recent updates to Google’s documentation aim to clarify the distinction between these bots and shed light on their behavior.


Googlebot

Googlebot, as the primary search crawler, is responsible for discovering and indexing web pages. It continuously crawls the web, following links from one page to another, and collects information about the content and structure of each page. Publishers typically expect visits from Googlebot as part of the regular crawling process.

Special-Case Crawlers

Special-case crawlers refer to bots with specific functions, such as fetching and processing feeds or verifying websites for Google Search Console. These crawlers operate differently from Googlebot and are triggered by specific user actions or requests. It’s important to note that these special-case crawlers may ignore the rules specified in the robots.txt file.

The following are examples of special-case crawlers:

  1. Feedfetcher: This crawler is responsible for crawling RSS or Atom feeds for Google Podcasts, Google News, and PubSubHubbub.
  2. Google Publisher Center: Fetches and processes feeds that publishers explicitly supplied through the Google Publisher Center for use in Google News landing pages.
  3. Google Read Aloud: Fetches and reads out web pages using text-to-speech (TTS) upon user request.
  4. Google Site Verifier: Fetches Search Console verification tokens upon user request for website verification purposes.

These special-case crawlers serve specific functions based on user-triggered requests, and their behavior may deviate from the standard rules defined for Googlebot.
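To make the distinction concrete, here is a minimal robots.txt sketch (the paths are hypothetical, used only for illustration):

```
# Googlebot honors these directives during regular crawling.
User-agent: Googlebot
Disallow: /private/

# User-triggered fetchers such as Google Site Verifier or Google
# Read Aloud may fetch pages regardless of these directives,
# because a user explicitly requested the fetch.
User-agent: *
Disallow: /tmp/
```

In other words, robots.txt governs scheduled crawling, not fetches that a person explicitly initiates through a Google tool.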

Verifying Googlebot and IPs

One area that has caused confusion among publishers is the verification of Googlebot and visits from IPs associated with googleusercontent.com. Google's recent updates provide much-needed clarity on this matter.

Verification Process

Google recommends verifying Googlebot and its associated IP addresses using a reverse DNS lookup. The domain in the hostname returned for the IP address should be googlebot.com, google.com, or googleusercontent.com. Verifying the domain ensures that the bot activity is indeed from Google and not from another source impersonating it.
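A minimal sketch of this check in Python, using only the standard library (the function name is illustrative; the domain suffixes are the ones Google documents):

```python
import socket

def verify_google_ip(ip: str) -> bool:
    """Verify a claimed Googlebot visit via the reverse/forward DNS check."""
    try:
        # Step 1: reverse DNS lookup on the visiting IP.
        hostname, _, _ = socket.gethostbyaddr(ip)
    except OSError:
        return False  # no PTR record: cannot be verified as Google
    # Step 2: the hostname must belong to one of Google's crawler domains.
    if not hostname.endswith((".googlebot.com", ".google.com",
                              ".googleusercontent.com")):
        return False
    # Step 3: a forward lookup on that hostname must resolve back
    # to the original IP, so a forged PTR record is not enough.
    try:
        _, _, addresses = socket.gethostbyname_ex(hostname)
    except OSError:
        return False
    return ip in addresses
```

The forward-lookup step matters: anyone can set a PTR record claiming to be googlebot.com, but only Google can make that hostname resolve back to the visiting IP.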

User-Triggered Fetchers

User-triggered fetchers, such as the Google Site Verifier tool, can generate bot activity from IPs associated with googleusercontent.com. In the past, publishers mistakenly assumed that this activity was spam or malicious bots pretending to be Google. However, the updated documentation confirms that these fetchers are legitimate and initiated by user requests.

When a user triggers a fetch using a tool like Google Site Verifier, the resulting bot activity ignores the rules specified in the robots.txt file. Publishers should understand that blocking the IP ranges associated with googleusercontent.com may break these user-triggered fetchers and any related tools or products.

To ensure a smooth experience and avoid unnecessary blocking, publishers should familiarize themselves with the IP ranges used by user-triggered fetchers. Google publishes a user-triggered-fetchers.json file that lists the IP ranges for these fetchers.

Retiring of Mobile Apps Android Crawler

In addition to the updates regarding Googlebot and user-triggered fetchers, Google has officially retired its Mobile Apps Android crawler. This crawler, identified by the user agent token AdsBot-Google-Mobile-Apps, was responsible for checking ad quality on Android app pages. It no longer serves this purpose and is now obsolete.

Publishers should be aware of this retirement and update their understanding of Google’s current crawler landscape accordingly. By staying informed about changes like this, publishers can ensure they are optimizing their websites for the most relevant and active Google crawlers.


Conclusion

Staying on top of Google's updates and guidelines is crucial for maintaining a strong SEO strategy. The recent updates to Googlebot verification documentation provide valuable insights into the different types of bots and their behaviors. By understanding the distinction between Googlebot and user-triggered fetchers, publishers can make informed decisions about their website's accessibility and ensure they are not inadvertently blocking legitimate bot activity.

Remember to regularly review Google’s documentation and stay informed about any changes or updates. By doing so, you can adapt your SEO strategy and ensure your website is well-prepared for Google’s crawling and indexing processes.