Special Google IP Address List

Official IP ranges for Google's special-case crawlers and user-triggered fetchers, and how to distinguish the index-crawling Googlebot from these utility fetchers.


Why Special Google IPs Matter for SEO

Understanding the difference between standard Googlebot and special fetchers is vital for diagnosing site access issues.

Robots.txt vs. User Intent

Standard Googlebot always respects robots.txt. User-triggered fetchers (such as the site verifier or feed fetchers), however, ignore those rules because a human user specifically requested the fetch. Blocking their IPs won't help your crawl budget, but it will break product functionality.
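To see which category a given request belongs to, you can check its source IP against the range lists Google publishes. Here is a minimal sketch using Python's ipaddress module; the prefixes below are illustrative placeholders, not the live lists, which should be refreshed from Google's published JSON files rather than hardcoded:

```python
import ipaddress

# Illustrative prefixes only -- the authoritative lists are JSON files
# published by Google and should be fetched fresh, not hardcoded.
GOOGLEBOT_PREFIXES = ["66.249.64.0/20"]        # example: index crawler
USER_TRIGGERED_PREFIXES = ["66.249.88.0/21"]   # example: user-triggered fetchers

def classify(ip: str) -> str:
    """Return which illustrative list an IP falls into, if any."""
    addr = ipaddress.ip_address(ip)
    if any(addr in ipaddress.ip_network(p) for p in GOOGLEBOT_PREFIXES):
        return "googlebot"
    if any(addr in ipaddress.ip_network(p) for p in USER_TRIGGERED_PREFIXES):
        return "user-triggered fetcher"
    return "unknown"

print(classify("66.249.66.1"))   # falls inside the example Googlebot prefix
print(classify("66.249.89.10"))  # falls inside the example fetcher prefix
```

This lets you log or rate-limit the two categories differently instead of blocking both with one firewall rule.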

AdsBot Performance

If you block the special-case crawler ranges, you may unintentionally block AdsBot, the crawler that checks landing page quality for Google Ads. Blocking it can lower your Quality Score or even lead to ad disapproval.
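AdsBot is also special at the robots.txt level: per Google's documentation, it ignores the global wildcard, so a blanket "User-agent: *" Disallow does not stop it. To restrict it, you must name it explicitly (the path below is just an example):

```
# AdsBot ignores "User-agent: *", so it keeps crawling unless named directly:
User-agent: AdsBot-Google
Disallow: /example-private-path/
```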

Site Verification

When you add a new site to Google Search Console and click "Verify", Google uses a User-triggered fetcher to check for your HTML tag or file. If your firewall blocks these IP ranges, verification will fail.
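If your firewall works from an allowlist, you can build one from Google's published range files, which list prefixes as ipv4Prefix/ipv6Prefix entries. A sketch of parsing that shape, using an inline snapshot with illustrative prefixes (in practice, download the live file instead):

```python
import ipaddress
import json

# Illustrative stand-in for the shape of Google's published range files
# (e.g. user-triggered-fetchers.json); fetch the live file in practice.
SNAPSHOT = json.loads("""
{
  "prefixes": [
    {"ipv4Prefix": "66.249.88.0/21"},
    {"ipv6Prefix": "2001:4860:4801::/48"}
  ]
}
""")

def load_networks(doc):
    """Flatten ipv4Prefix/ipv6Prefix entries into network objects."""
    nets = []
    for entry in doc["prefixes"]:
        prefix = entry.get("ipv4Prefix") or entry.get("ipv6Prefix")
        nets.append(ipaddress.ip_network(prefix))
    return nets

def is_allowed(ip: str, nets) -> bool:
    """True if the IP falls inside any allowlisted network."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in nets)

nets = load_networks(SNAPSHOT)
print(is_allowed("66.249.89.10", nets))
print(is_allowed("192.0.2.1", nets))
```

Feeding such an allowlist into your firewall before clicking "Verify" avoids the failure mode described above.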

Is Gemini a User-Triggered Fetcher?

Gemini uses the Google-Extended user agent and does not have dedicated IP ranges. Google-Extended crawls originate from Google's common crawler infrastructure.

This means you can allow or block Gemini crawlers using our Googlebot IP range recommendations.

If you want content from your website to remain crawlable, indexable, and eligible for Google Search results, but you don't want Gemini crawling or using your data, block Google-Extended in your robots.txt file:

User-agent: Google-Extended
Disallow: / # Blocks Google-Extended

To allow Google-Extended instead (the default behavior), omit the group entirely or replace the Disallow line with "Allow: /".

Blocking Google-Extended from crawling isn't a ranking factor and won't affect your search visibility.