Is blocking all HTTP HEAD requests a bad idea?
It is a bad idea.
In most cases you should not block HEAD requests. HEAD is crucial for retrieving a resource's metadata, such as its status and 'freshness', without the cost of a GET request that actually transfers the content.
Imagine your website serves a URL that is a PDF of your company's annual report to shareholders and investors. Such a file can be quite large (10 MB is not unusual). Now imagine a client has already downloaded that file and holds it in its cache. When the same client accesses the URL again, a HEAD request returns only the metadata (e.g. file size and last-modified date). Since the file has not been modified, the client never needs to issue a GET to pull the massive 10 MB file down again.
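As a minimal sketch of that freshness check from the client side (the URL and the cached Last-Modified value below are hypothetical, and the `requests` library stands in for whatever HTTP client the cache actually uses):

```python
import requests

# Value remembered from the earlier download -- illustrative only.
CACHED_LAST_MODIFIED = "Tue, 07 May 2024 10:00:00 GMT"
URL = "https://example.com/reports/annual-report.pdf"  # hypothetical URL

# HEAD transfers only the headers: a few hundred bytes instead of ~10 MB.
resp = requests.head(URL, allow_redirects=True, timeout=10)
resp.raise_for_status()

if resp.headers.get("Last-Modified") == CACHED_LAST_MODIFIED:
    print("Cached copy is still fresh; no GET needed.")
else:
    # Only now pay for the full transfer.
    report = requests.get(URL, timeout=30).content
    print(f"Downloaded {len(report)} bytes of updated report.")
```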
If you block HEAD requests, clients are forced to issue full GET requests just to check a resource's size and freshness, costing your website bandwidth and slowing down access for everyone else.
There is a reason crawlers (e.g. Google's search engine) issue HEAD requests instead of GET: they are determining the status of the content without downloading it. If you interfere with this by returning the same metadata for all URLs (which is my understanding of what you are proposing), you will corrupt your results in Google and other search engines, as well as the caches of any corporate proxy servers that access your website... The correct behavior, per the HTTP spec, is to answer HEAD with the same status line and headers as GET, just without the body; a sketch of that follows below.
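Here is that sketch using Python's standard-library HTTP server; the document root, port, and handler name are all illustrative assumptions, not a recommendation for production use:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import os

DOC_ROOT = "/var/www"  # hypothetical document root

class HeadFriendlyHandler(BaseHTTPRequestHandler):
    def _send_metadata(self, path):
        # Send the status and the headers that describe the resource.
        full = os.path.join(DOC_ROOT, path.lstrip("/"))
        if not os.path.isfile(full):
            self.send_error(404)
            return None
        self.send_response(200)
        self.send_header("Content-Length", str(os.path.getsize(full)))
        self.send_header("Last-Modified",
                         self.date_time_string(os.path.getmtime(full)))
        self.end_headers()
        return full

    def do_HEAD(self):
        # Same status and headers as GET, but no body.
        self._send_metadata(self.path)

    def do_GET(self):
        full = self._send_metadata(self.path)
        if full:
            with open(full, "rb") as f:
                self.wfile.write(f.read())

if __name__ == "__main__":
    HTTPServer(("", 8080), HeadFriendlyHandler).serve_forever()
```

The point of the sketch is that HEAD and GET share the exact same metadata path, so caches and crawlers can trust what HEAD tells them.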