Forum Discussion
Chris_Phillips
Nimbostratus
Jan 25, 2007
vip for multiple squid caches
Hi,
I'm looking at implementing a number of Squid caches and using a VIP to present a single IP. I'm keen to enable some intelligent proxy choosing so that the same URL always goes via the same box, a la the "super proxy" scripts... http://naragw.sharp.co.jp/sps/ . Now, at the same time I'm aware there's the hash persistence profile, but I'm wondering if that's actually going to help me here. These proxies are for outbound internet access, so the number of different GETs is huge, and whilst it sounds like the way to head, is a persistence profile right for the job? Would it not be a better use of resources to not store persistence data and instead generate a simple hash of the requested URI, the way the above example does? Or am I missing something and it actually IS the same thing?
Thanks
Chris
4 Replies
- Chris_Phillips
Nimbostratus
any clues guys?
- Al_Faller_1969
Nimbostratus
Hi Chris -
I used to use the BIG-IP to front-end proxy servers used for outbound internet access. This idea crossed my mind a few times, but I never got to implement it before I left. I was thinking of using a rule, not persistence - because, as you said, the number of GETs is extremely huge.
I was thinking of using the first character of the requested domain as the deciding factor in my rule. So, for example, I would send all requests for domains starting with a-m to pool1, which has cache server1 as a higher priority device, while all requests to n-z would go to pool2, which has cache server2 as a higher priority. Obviously the other box would be in the pool as well, but at a lower priority in case of failure.
Does that make sense? Would that work for you?
Al
- Al_Faller_1969
Nimbostratus
Oh, and I forgot to include other characters, like digits. Maybe the rule should look more like:
domain starts with a-m --> pool1
all else --> pool2
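Al's bucketing idea could be expressed as an iRule along these lines (a sketch only; `pool1` and `pool2` are the hypothetical pool names from the reply):

```tcl
when HTTP_REQUEST {
    # Take the first character of the requested host, lowercased.
    set c [string tolower [string index [HTTP::host] 0]]

    # Hosts starting a-m go to pool1; everything else (n-z, digits,
    # anything odd) falls through to pool2. Tcl's expr compares
    # non-numeric operands lexicographically.
    if { ($c >= "a") && ($c <= "m") } {
        pool pool1
    } else {
        pool pool2
    }
}
```

Each pool would carry both cache boxes, with priority groups making a different box the preferred member in each pool, as described above.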
Al
- Chris_Phillips
Nimbostratus
It's a simple approach, I'm sure, but I'm not too confident how even it'd be unless you start looking to remove common parts of FQDNs... i.e. "w" would cover maybe 75% of all domains requested, as they'd start with www. It would be a lot less CPU overhead than generating MD5 sums of each [HTTP::host], though, and I'm not after any real load balancing really, just some notional active/active setup that isn't wasteful.
You felt the persistence hashes also came up short? I guess the logic is that once a domain gets sent to a box based on current generic observed LB rules, for example, it'll always go that way, and if load increases on that particular proxy, subsequent domains would be permanently pushed to the other box... food for thought, certainly. Thanks.
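For comparison, the hash-based selection discussed here could be sketched as an iRule like the following (pool names are hypothetical; `crc32` is used as a cheaper alternative to the MD5-per-request overhead mentioned above):

```tcl
when HTTP_REQUEST {
    # Hash the requested host so the same site always maps to the
    # same cache. crc32 returns an integer directly, avoiding an
    # MD5 digest on every request.
    set bucket [expr {[crc32 [HTTP::host]] % 2}]

    if { $bucket == 0 } {
        pool cache_pool_a
    } else {
        pool cache_pool_b
    }
}
```

Unlike a persistence profile, nothing is stored per session: the mapping is recomputed on every request, so memory use stays flat no matter how many distinct URLs pass through.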