Technical Forum
Ask questions. Discover Answers.

All attack signatures vs server/application specific ASM attack signatures



In everyone's experience, what's the best approach to selecting attack signatures for a WAF policy? I'm aware that enabling all attack signatures has some downsides (latency, resource usage), but isn't it better to apply all signatures and block every kind of bad traffic that might generate noise on the server?

E.g., for Windows-based applications, if we don't add the Linux signatures, bad traffic generated with Linux-based attack vectors would pass through to the application. Even though the attack would be unsuccessful, it can still generate a lot of error codes and may negatively impact certain applications in terms of resources.

Isn't it the WAF's job to block all kinds of bad traffic, irrespective of whether the attack vector matches the backend technology? What's everyone's view here?
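For context, the usual alternative to "enable everything" in ASM is to scope signature sets to the technologies the backend actually runs. A minimal sketch of a declarative WAF policy doing this (the policy name and technology values are illustrative; this assumes the F5 JSON policy format, where declared server technologies drive which signature sets apply):

```json
{
  "policy": {
    "name": "windows_app_policy",
    "template": { "name": "POLICY_TEMPLATE_FUNDAMENTAL" },
    "server-technologies": [
      { "serverTechnologyName": "Microsoft Windows" },
      { "serverTechnologyName": "IIS" },
      { "serverTechnologyName": "ASP.NET" }
    ]
  }
}
```

With a scoped policy like this, Linux-only attack vectors are not matched by Windows/IIS signature sets, which is exactly the trade-off being debated here: less noise and fewer false positives, at the cost of letting irrelevant-but-malformed traffic reach the application.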


Community Manager

Hi @SanjayP  - I'm surprised nobody else has replied to this yet, but I've asked a few colleagues to check out this thread and weigh in. 

Community Manager

I can see your point, but one thing I'd consider is that you can potentially (actually, probably) open yourself up to a lot of false positives. To the point where, if an attacker knows this is the case, they could flood your logs with false positives while burying actual attack attempts. Sure, you could filter that out, but then why have it in the first place? Those are my views at least. Good topic!
