SEO Glossary

X-Robots-Tag

X-Robots-Tag is the HTTP-header equivalent of the robots meta tag. It works for any file type Google crawls — PDFs, images, videos, JSON APIs — not just HTML, making it the only way to deindex non-HTML resources.

What is X-Robots-Tag?

X-Robots-Tag is an HTTP response header that delivers the same indexing directives as the <meta name="robots"> tag, but at the response level rather than in HTML. Because it travels in headers, it can be applied to any file type: PDFs, images, videos, JSON APIs, plain text — not just HTML.

It was introduced by Google in 2007 to solve a specific problem: how do you tell search engines not to index a PDF or image, when those files have no place to embed a meta tag? The answer is to add the directive to the HTTP response itself.

Syntax

X-Robots-Tag: noindex, nofollow

You can also target specific user-agents:

X-Robots-Tag: googlebot: noindex
X-Robots-Tag: bingbot: noindex, nofollow
X-Robots-Tag: noindex, noarchive

When multiple X-Robots-Tag headers are sent, all directives combine. The most restrictive interpretation wins.
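The combining rule can be sketched in Python. This is an illustrative helper (the function name is ours, not part of any spec), and it assumes plain, un-scoped header values — user-agent prefixes like "googlebot:" would need extra parsing:

```python
def combine_directives(header_values):
    """Merge directives from multiple X-Robots-Tag header values.

    Search engines apply the union of all directives, which amounts to
    the most restrictive interpretation: if any header says noindex,
    the page is not indexed.
    """
    directives = set()
    for value in header_values:
        for part in value.split(","):
            part = part.strip().lower()
            if part:
                directives.add(part)
    return directives

# Two X-Robots-Tag headers sent on the same response:
combine_directives(["noindex", "nofollow, noarchive"])
# → {"noindex", "nofollow", "noarchive"}
```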

Supported Directives

| Directive | Effect | Use case |
|---|---|---|
| noindex | Page will not appear in search results | Thin tag archives, internal search results |
| nofollow | Links on the page do not pass equity | User-generated content pages |
| none | Shorthand for noindex, nofollow | Pages to fully exclude from search |
| noarchive | No cached copy shown in SERPs | Time-sensitive pricing pages |
| nosnippet | No text snippet shown in SERPs | Premium content previews |
| max-snippet:N | Caps snippet length at N characters | Brand-controlled previews |
| noimageindex | Excludes the page's images from Image Search | Stock photo-heavy pages |
| unavailable_after: [date] | Drops the page from the index after the given date | Event landing pages, expiring promos |
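
The date in unavailable_after must be in a widely adopted format; Google's documentation lists RFC 822 among the accepted formats. A small sketch of building such a header value with the Python standard library (the expiry date is illustrative):

```python
from datetime import datetime, timezone
from email.utils import format_datetime

# Illustrative expiry date. format_datetime emits an RFC 822-style
# date string, one of the formats documented as accepted.
expiry = datetime(2026, 6, 1, 12, 0, tzinfo=timezone.utc)
header_value = f"unavailable_after: {format_datetime(expiry)}"
print(header_value)  # unavailable_after: Mon, 01 Jun 2026 12:00:00 +0000
```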

How to Implement

Nginx

location ~* \.(pdf|docx|xlsx)$ {
    add_header X-Robots-Tag "noindex, nofollow";
}

Apache (.htaccess)

<FilesMatch "\.(pdf|docx|xlsx)$">
  Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>

Node.js / Express

app.get('/private-pdf', (req, res) => {
  res.set('X-Robots-Tag', 'noindex, nofollow');
  // res.sendFile requires an absolute path, or a relative path plus a root option
  res.sendFile('file.pdf', { root: __dirname });
});
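
For Python stacks, the same idea can be expressed as WSGI middleware. This is a minimal stdlib-only sketch (the class name and defaults are ours); it mirrors the Nginx and Apache rules above by matching on file extension:

```python
import re

class XRobotsTagMiddleware:
    """WSGI middleware that appends an X-Robots-Tag header to responses
    whose request path matches a file-extension pattern."""

    def __init__(self, app, pattern=r"\.(pdf|docx|xlsx)$", value="noindex, nofollow"):
        self.app = app
        self.pattern = re.compile(pattern, re.IGNORECASE)
        self.value = value

    def __call__(self, environ, start_response):
        path = environ.get("PATH_INFO", "")

        def patched_start_response(status, headers, exc_info=None):
            # Append the header only for matching paths; pass everything
            # else through unchanged.
            if self.pattern.search(path):
                headers = list(headers) + [("X-Robots-Tag", self.value)]
            return start_response(status, headers, exc_info)

        return self.app(environ, patched_start_response)
```

Because it wraps start_response, this works in front of any WSGI application (Flask, Django, etc.) without touching templates.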

X-Robots-Tag vs Meta Robots vs Robots.txt

| Method | Works for | Prevents crawl? | Prevents index? |
|---|---|---|---|
| Robots.txt Disallow | Any URL | Yes | No (URL can still appear) |
| <meta robots> | HTML only | No | Yes |
| X-Robots-Tag | Any file type | No | Yes |

Critical gotcha: if a URL is blocked by robots.txt, Googlebot never fetches it, which means it never sees the X-Robots-Tag header. Blocking + X-Robots-Tag noindex on the same URL is contradictory — the noindex is unreachable. Use one or the other, not both.
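
You can check for this conflict with Python's built-in robots.txt parser: if a path is disallowed, a compliant crawler never fetches the response that carries the noindex header. The rules and URL below are illustrative:

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt rules, parsed from a string instead of fetched.
rules = """
User-agent: *
Disallow: /private/
""".strip().splitlines()

rp = RobotFileParser()
rp.parse(rules)

url = "https://example.com/private/report.pdf"
if not rp.can_fetch("*", url):
    # A crawler obeying robots.txt never requests this URL, so an
    # X-Robots-Tag: noindex header on it can never be seen. Remove the
    # Disallow rule if you want the noindex to take effect.
    print("blocked by robots.txt; any noindex header is unreachable")
```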

Frequently Asked Questions

Can I use X-Robots-Tag for HTML pages?

Yes. For HTML, X-Robots-Tag and meta robots are functionally identical. Most teams pick one convention per project. X-Robots-Tag is preferred when you want to control directives at the CDN or reverse proxy layer without touching templates.

Does Google honor X-Robots-Tag from a CDN?

Yes. As long as the directive is in the response Google receives, it does not matter whether the header originated at the origin server, CDN edge, or middleware layer.

What happens if X-Robots-Tag and meta robots conflict?

Google combines all directives from both sources and applies the most restrictive interpretation. So if meta says 'index' and X-Robots-Tag says 'noindex', the page will not be indexed.

Is X-Robots-Tag case-sensitive?

The header name is case-insensitive per HTTP spec. The directive values should be lowercase as a convention, but Google parses them case-insensitively as well.

Can I use X-Robots-Tag to remove a URL from Google's index?

Yes, but only after Google recrawls the URL and sees the header. For faster removal, also use the Removals tool in Search Console. X-Robots-Tag is the permanent fix; the Removals tool is a 6-month emergency override.

Related Terms & Resources

Part of the PositiveBacklink SEO Glossary. Updated May 2026.