Google recently unleashed a shiny new web crawler, “GoogleOther,” designed to give its primary search index crawler, Googlebot, a much-needed break.
According to Google Analyst Gary Illyes, the new crawler will handle non-essential tasks like research and development (R&D) crawls, allowing Googlebot to focus on its primary job of indexing the web.
The update is anticipated to help the tech giant streamline and optimize its web crawling operations.
While Illyes reassured webmasters that the new crawler won’t significantly impact websites, it raises the question: What does this mean for search engine optimization (SEO) efforts?
In this blog, we’ll explore GoogleOther’s limitations, its possible effect on search engine rankings and whether or not businesses should be concerned.
A Quick Background on Web Crawlers, User Agents and Googlebot
To truly understand how the GoogleOther update affects web crawling, it’s important to first review the basics of web crawlers, user agents and Googlebot’s role in the crawling process.
Google Web Crawlers and User Agents
Web crawlers, also known as robots or search engine spiders, systematically discover and scan websites by following links from one page to another. Search engines use these spiders to gather information about web pages and surface relevant results for search queries.
To identify themselves, Google’s web crawlers use a user agent, a string of text included in the request headers sent to the server.
The user agent tells the server which bot is requesting the page. This enables website owners to monitor bot activity and limit crawl access if necessary.
The server responds with a status code indicating whether the crawler is permitted to access the page.
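For illustration, a GoogleOther request might look like this in a standard access log. The IP address, timestamp and URL path below are hypothetical; the user agent string matches the “GoogleOther” token Google lists in its crawler documentation:

```
66.249.66.1 - - [25/Apr/2023:10:15:32 +0000] "GET /blog/sample-post HTTP/1.1" 200 14872 "-" "GoogleOther"
```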
Googlebot and Rankings
If crawling is allowed, Googlebot analyzes the web page, including its text, images and links. Indexed pages are then ranked, with the highest-ranked pages considered the most relevant to a given query.
This search results ranking is based on an algorithm that weighs various factors, such as keywords, content quality and backlinks from reputable sites.
To improve a website’s ranking, many businesses hire a technical SEO consultant for their website optimization services or on page optimization services.
Where GoogleOther Comes Into Play
The web crawling process is continuous, with Googlebot visiting and re-visiting websites to ensure the Google search index is up-to-date with the latest information.
However, with billions of pages to be indexed, you can imagine how resource-intensive this task can be. Google web crawlers like Googlebot must adapt to handle the increasing amount of data efficiently.
With GoogleOther on board, Google can alleviate some of the strain on Googlebot search engine spiders by assigning non-essential tasks to the new crawler.
Splitting Responsibilities Between Googlebot & GoogleOther
GoogleOther will primarily be used by Google’s product teams for non-essential crawls, so that Googlebot’s crawl jobs can be reserved for internally building the Google search index. As Illyes stated on LinkedIn:
“We added a new crawler, GoogleOther to our list of crawlers that ultimately will take some strain off of Googlebot. This is a no-op change for you, but it’s interesting nonetheless I reckon.
“As we optimize how and what Googlebot crawls, one thing we wanted to ensure is that Googlebot’s crawl jobs are only used internally for building the index that’s used by Search. For this we added a new crawler, GoogleOther, that will replace some of Googlebot’s other jobs like R&D crawls to free up some crawl capacity for Googlebot.”
Essentially, GoogleOther will take over various tasks historically handled by Googlebot, including research and development (R&D) crawls.
GoogleOther’s Limitations and Features
GoogleOther inherited Googlebot’s infrastructure, which means it possesses the same limitations and features when crawling web pages. This includes:
• Host Load Limitations: Subject to the same limits on how much load it can generate on a server, preventing it from overwhelming a site’s resources or causing downtime.
• Robots.txt Restrictions: Obeys the same robots.txt rules as Googlebot, but under its own user agent token, GoogleOther. This lets site owners control which parts of their site each crawler can access (see the sample robots.txt after this list).
• HTTP Protocol Version: Uses the same HTTP version as Googlebot, currently HTTP/1.1 and HTTP/2 (if supported by the site).
• Fetch Size Limit: Subject to the same fetch size limit as Googlebot, which Google documents as the first 15MB of a file. This prevents large pages from consuming excessive resources, which could slow down the Google crawling process.
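Because GoogleOther responds to its own token in robots.txt, site owners can write rules for it separately from Googlebot. Here’s a minimal sketch; the /staging/ path is a hypothetical example of a section you might want to keep out of non-essential crawls:

```
# Allow Googlebot to crawl the whole site
User-agent: Googlebot
Disallow:

# Keep GoogleOther out of a hypothetical staging area
User-agent: GoogleOther
Disallow: /staging/
```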
As Illyes pointed out, GoogleOther is basically Googlebot under a different name.
What GoogleOther Means for Your SEO Strategy: Experts Weigh In
While Google has assured webmasters that the new crawler won’t have a significant impact on websites, many SEO experts are still wondering about its potential effects on site rankings.
Ronnel Viloria, Lead Technical SEO Consultant at Thrive, said it is too early to determine how GoogleOther will impact SEO efforts. Given that GoogleOther is a recent addition, no case studies are available to indicate how it may influence rankings and traffic.
“Google regularly updates its algorithms and crawlers, and these changes can impact search results and website rankings. However, until more information is available about GoogleOther, it’s impossible to predict how it might impact SEO,” Viloria said.
When it comes to optimizing your SEO campaign strategy, Viloria advised sticking with your current systems while keeping an eye on the new crawler. However, if your progress has been stagnant or slow, he recommended exploring new opportunities to rank well and attract more visitors to your site.
“One thing I’m sure of is that you should continue to focus on creating high-quality content relevant to your target users. This is your best bet to rank higher in Google SERPs and attract more audiences,” he added.
In addition to high-quality content, on page optimization services, such as keyword research, title and meta tag optimization, image optimization and internal linking, can also significantly improve your website’s SEO performance.
These services help search engines better understand your website’s content and context, making indexing and ranking your pages for relevant queries easier.
How To Monitor GoogleOther
If you’re still feeling wary about GoogleOther, here are some steps to incorporate into your SEO campaign strategy to monitor its crawling activities:
• Review Server Logs: Regularly monitor server logs to identify GoogleOther requests; this helps you understand its crawling behavior and the pages it visits (a log-parsing sketch follows this list).
• Keep robots.txt File Updated: Make sure your robots.txt file includes explicit rules for GoogleOther so you can control how it crawls your site.
• Monitor Google Search Console (GSC) Crawl Stats: Use GSC to track changes in crawl frequency, quantity, budget or number of indexed pages since GoogleOther was implemented.
• Track Website Performance: Monitor web performance indicators, such as bounce rates, load times and user engagement, to spot issues that arise after GoogleOther begins crawling your pages.
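To get started with the server-log review above, here’s a minimal Python sketch that tallies the pages GoogleOther requests most often. It assumes a combined-format access log at a hypothetical path; adjust both for your server setup:

```python
# Count GoogleOther requests per URL path in a combined-format access log.
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # hypothetical path; change for your server

hits = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        # "GoogleOther" is the user agent token from Google's crawler list.
        if "GoogleOther" not in line:
            continue
        try:
            # In combined log format, the request line ("GET /path HTTP/1.1")
            # is the first quoted field; the path is its second token.
            path = line.split('"')[1].split()[1]
        except IndexError:
            continue
        hits[path] += 1

for path, count in hits.most_common(10):
    print(f"{count:>6}  {path}")
```

Comparing these counts against your GSC crawl stats over time is a simple way to confirm whether GoogleOther’s visits line up with any changes you notice there.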
Adapt to GoogleOther Changes With Ease
As GoogleOther continues to raise questions about its potential impact on SEO, website owners and businesses can take proactive steps to stay ahead. One such step is to partner with top SEO experts like Thrive.
By partnering with Thrive Internet Marketing Agency, businesses can gain a competitive advantage and adapt quickly to any changes in the SEO landscape.
Our technical SEO consultant will help optimize your website content, build high-quality backlinks, conduct comprehensive keyword research and much more to improve your rankings.
We offer a range of website optimization services, including on page optimization services, off-page optimization services, technical SEO, local SEO and franchise SEO.
Contact Thrive today to see how our website optimization services can help you stay prepared for any potential impact from GoogleOther.
Frequently Asked Questions
HOW WILL GOOGLEOTHER DIFFERENTIATE BETWEEN ESSENTIAL AND NON-ESSENTIAL TASKS?
GoogleOther is specifically designed to take over non-essential tasks from Googlebot, such as research and development (R&D) crawls. This allows Googlebot to focus on its primary function of indexing the web for search. The differentiation is based on the internal needs of Google’s product teams and the goal to optimize web crawling operations without affecting the search index quality.
WILL GOOGLEOTHER AFFECT THE FREQUENCY AT WHICH WEBSITES ARE CRAWLED?
The introduction of GoogleOther is not expected to significantly impact how frequently websites are crawled. GoogleOther operates under the same infrastructure and limitations as Googlebot, ensuring that the load on websites remains manageable. The primary aim is to free up resources for Googlebot to improve its indexing efficiency.
HOW DOES GOOGLEOTHER’S INTRODUCTION REFLECT GOOGLE’S OVERALL STRATEGY FOR SEARCH AND INDEXING IN THE FUTURE?
GoogleOther represents Google’s ongoing commitment to optimizing its web crawling and indexing processes. By allocating non-essential tasks to GoogleOther, Googlebot can concentrate on building a more efficient and up-to-date search index. This strategy indicates a focus on improving the quality of search results and the efficiency of web crawling operations.
WILL GOOGLEOTHER TARGET SPECIFIC TYPES OF CONTENT OR WEBSITES?
The crawler is designed to handle tasks that are not directly related to the primary search index, such as R&D crawls. The impact on websites is expected to be minimal, as it adheres to the same crawling policies and restrictions as Googlebot.
WHAT MEASURES SHOULD WEBMASTERS TAKE TO ENSURE THEIR SITES ARE OPTIMALLY CRAWLED BY BOTH GOOGLEBOT AND GOOGLEOTHER?
Webmasters should continue to focus on creating high-quality, engaging content and ensure their sites are mobile-optimized and user-friendly. Technical SEO elements like site speed, security and structured data also play a crucial role. Monitoring server logs, keeping the robots.txt file updated and tracking website performance can help webmasters understand how GoogleOther interacts with their site and ensure optimal crawling by both Googlebot and GoogleOther.