What Is The Purpose Of Robots.Txt In Technical SEO?

During an SEO campaign, you may find that your website has many pages and is difficult to crawl. Fortunately, there are a few simple things you can do to make it easier for search engines to crawl your site and index your pages, and the robots.txt file is one of the most important. 


User-agent:

There are hundreds of different crawlers and spiders that can crawl your website. The User-agent directive specifies which crawler the rules that follow apply to; combined with Disallow rules, it lets you instruct particular bots not to crawl certain areas of your website. 
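As a rough sketch, a robots.txt file (placed at the root of your site) that restricts one specific bot while leaving everything else open might look like this. The bot name "ExampleBot" is purely illustrative:

```
# Block only this (hypothetical) crawler from the /private/ directory
User-agent: ExampleBot
Disallow: /private/

# All other crawlers: no restrictions
User-agent: *
Disallow:
```

Each User-agent line starts a group of rules, and a crawler follows the most specific group that matches its name.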

Crawl budget:

A crawl budget is the number of pages a search engine will crawl on your website in a given period of time. You cannot set it directly in robots.txt, but you can preserve it by disallowing low-value URLs so crawlers spend their visits on pages that matter. This is especially worthwhile when your site generates a lot of query string parameters that filter and sort data (like t-shirts in multiple colors and sizes). 
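To preserve crawl budget on a site with faceted navigation, you could disallow the parameterized variants of category pages. The parameter names below are illustrative, not a standard:

```
User-agent: *
# Keep crawlers away from filtered/sorted duplicates of category pages
Disallow: /*?color=
Disallow: /*?size=
Disallow: /*?sort=
```

The `*` in the path matches any characters, so these rules catch the parameters wherever they appear in a URL.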


A Crawl-delay directive instructs crawlers to wait a certain number of seconds before fetching the next page on your website. This can be a useful option for large sites, or for sites experiencing a high volume of crawling requests, as it limits how aggressively supporting search engines can crawl. Note that Google ignores Crawl-delay, though other engines such as Bing honor it. 
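A minimal example, asking crawlers that honor the directive to pause ten seconds between requests:

```
User-agent: *
# Ask supporting crawlers to wait 10 seconds between requests
Crawl-delay: 10
```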


A Disallow directive instructs a bot not to crawl a specific file, URL, or directory. This can be a good option when you are trying to prevent image theft or bandwidth abuse. 
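Disallow takes a path relative to the site root; a directory path blocks everything beneath it, while a full path blocks a single file. The paths here are hypothetical:

```
User-agent: *
# Block an entire directory and one specific file
Disallow: /images/
Disallow: /drafts/press-release.pdf
```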


A wildcard (*) can be used to apply a rule to all User-agents or to match URL patterns. This is particularly helpful when you want to block access to a specific subfolder within your site, or to every URL matching a pattern. 

Using a wildcard can also be helpful when you want to allow a crawler to access a specific subfolder, even if it is located under a disallowed directory. 
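Putting both ideas together, here is a sketch using the wildcard and the Allow directive. Allow and the `$` end-of-URL anchor are extensions honored by major engines such as Google and Bing, not part of the original robots.txt standard, and the paths are illustrative:

```
User-agent: *
# Block everything under /shop/, except the /shop/sale/ subfolder
Disallow: /shop/
Allow: /shop/sale/

# Block all URLs ending in .pdf (for crawlers that support the $ anchor)
Disallow: /*.pdf$
```

When Allow and Disallow rules conflict, Google applies the more specific (longer) matching rule, which is why the Allow line wins for URLs inside /shop/sale/.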

Meta robots tags:

A meta robots tag is a special HTML tag that you can add to the `<head>` section of your web pages. These tags are common in SEO marketing, and they tell search engine bots what to do when they visit your site. 

A meta robots tag has two attributes: name (which identifies the crawler the rule applies to, usually "robots" for all of them) and content (which holds directives such as index, noindex, follow, and nofollow). A noindex directive removes the page from search engine results, and adding nofollow also prevents crawlers from following the links on the page. This is a common practice for thank-you pages, for example. 
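For a thank-you page like the one described above, the tag would look like this inside the page's `<head>`:

```html
<!-- Keep this page out of search results and don't follow its links -->
<meta name="robots" content="noindex, nofollow">
```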

You can add the directive to the `<head>` section of your HTML, or use an X-Robots-Tag header to do the same thing within the headers of an HTTP response. The X-Robots-Tag can be especially useful if you have images or other non-HTML files on your site that you want to keep out of search results, since those files have no `<head>` to put a meta tag in.
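As a sketch, an HTTP response for an image that should stay out of search results would carry the directive as a header, like this:

```
HTTP/1.1 200 OK
Content-Type: image/png
X-Robots-Tag: noindex
```

How you add the header depends on your server; for example, web servers such as Apache and Nginx can attach it to responses for matching file types via their configuration.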