To give search bots individual crawling guidelines, a plain text file named ‘robots.txt’ has to be stored in the domain’s root directory. If, for example, crawling guidelines are to be defined for the domain example.com, then the robots.txt needs to be placed in the root directory of www.example.com so that it can be retrieved at www.example.com/robots.txt. If the hosting model for the website doesn’t offer access to the server’s root directory, and instead only to a subfolder (e.g. www.example.com/user/), then implementing indexing management with a robots.txt file isn’t possible. Website operators setting up a robots.txt should use a plain text editor, like vi (Linux) or notepad.exe (Windows); when transferring the file via FTP, it’s also important to make sure that it’s transferred in ASCII mode. Online, the file can be created with a robots.txt generator. Given that syntax errors can have devastating effects on a web project’s indexing, it’s recommended to test the text file prior to uploading it. Google Search Console offers a tool for this.
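
As a minimal sketch of such a pre-upload check (not the Google Search Console tool itself), the following Python snippet uses the standard urllib.robotparser module to parse a local robots.txt and report whether a given crawler may fetch given URLs; the file name, user agent, and test URLs are placeholder assumptions to be replaced with your own project’s values.

    # Sanity-check a local robots.txt before uploading it to the root directory.
    from urllib.robotparser import RobotFileParser

    # Hypothetical local copy of the file that will be uploaded.
    with open("robots.txt", encoding="utf-8") as f:
        lines = f.read().splitlines()

    parser = RobotFileParser()
    parser.parse(lines)

    # Check whether a given crawler may fetch the listed URLs under these rules.
    for url in ("https://www.example.com/", "https://www.example.com/user/"):
        allowed = parser.can_fetch("Googlebot", url)
        print(f"{url}: {'allowed' if allowed else 'blocked'} for Googlebot")

A check like this only confirms how the rules are interpreted; it doesn’t replace testing the live file once it’s available at www.example.com/robots.txt.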