Ok.
Here's another way to look at that traffic:
With the busiest traffic being 52 million page clicks (we'll assume this actually refers to unique HTTP requests), that works out to an average of ~602 req/sec if the clicking is spread around the clock, ~1806 req/sec if concentrated into an 8 hour busy period, or ~14444 req/sec if all 52 million clicks were concentrated into a single hour. All of these levels of traffic are well below the upper limits of v9 software on most of our platforms.
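If you want to check the arithmetic yourself, here it is as back-of-the-envelope Tcl (iRules are Tcl-based, so the same `expr` syntax applies):

```
% expr { 52000000 / 86400.0 }   ;# spread over 24 hours -> ~602 req/sec
% expr { 52000000 / 28800.0 }   ;# packed into 8 hours  -> ~1806 req/sec
% expr { 52000000 / 3600.0 }    ;# packed into 1 hour   -> ~14444 req/sec
```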
I'm not sure where we were going with this, but it doesn't sound like you've got a whole lot to be concerned about other than writing a really, really crappy rule (which you by no means have done). The key to making an efficient rule is not just how much it costs the current connection, but how much it costs the entire system, which might be processing lots of different sites. With that in mind, you obviously want to make it as efficient as possible.
It sounds like you are the right person to make the decision, as you know the traffic patterns best and also know the pros and cons of doing it either way. I was just trying to point out the subtle differences.
To finish, your understanding of HTTP_REQUEST is correct. It is evaluated on every client request whether it's the first request on a new connection or a subsequent request on an existing connection.
On an off-topic side note: you could use "event HTTP_REQUEST disable" if you actually wanted to turn off this rule's event for subsequent requests on the same connection.
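A minimal sketch of what that might look like (the inspection logic here is hypothetical; the point is that after the first request, this rule's HTTP_REQUEST event stops firing for the rest of the connection):

```
when HTTP_REQUEST {
    # ... inspect the first request on this connection ...

    # Stop evaluating this event in this rule for any
    # subsequent requests on the same connection.
    event HTTP_REQUEST disable
}
```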
Good luck and let us know how things work out.