
Robots.txt Allow Disallow


Ben224


Hello,

I am running version 5.1.1, and I am currently working on my site's overall search engine discoverability.

Looking at the robots.txt analysis in Google Webmaster Tools, I can see some crawl errors and I don't understand why. I am using the robots.txt file that CubeCart generates automatically, as I assumed this was the most appropriate configuration for the site.

Analysis results:

Line 1: Sitemap: sitemap.xml.gz

Invalid sitemap URL detected; syntax not understood

Line 2: Sitemap: sitemap.xml

Invalid sitemap URL detected; syntax not understood

Line 4: Disallow: cache/

No user-agent specified

Line 5: Disallow: images/

No user-agent specified
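
Putting the reported lines together, I believe the generated file reads roughly like this (line 3 isn't shown in the report, so I'm assuming it is blank):

Sitemap: sitemap.xml.gz
Sitemap: sitemap.xml

Disallow: cache/
Disallow: images/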

Disallowing the cache makes sense, as those pages change frequently, but I am wondering what the reasoning is behind blocking images. Also, I can't figure out why the sitemap syntax is not understood. The path to the file is good, as I can access the sitemap via my browser.

Can anyone throw some light on this?

Thanks in advance.
