Scaling up has traditionally been straightforward: you upgraded the hardware, but that usually meant a disruptive forklift upgrade. F5 made scaling up on the fly remarkably easy with the VIPRION. If you needed more capacity, you simply added a blade. Everything was taken care of: the cluster installed all the necessary software and configuration and, voila, you had just increased your capacity. When vCMP was introduced (allowing virtual instances of BIG-IP on the chassis), that same concept was extended. No automation required. It is the ultimate ADC easy button.
If you provisioned your guest instances to span blades and then added a blade, voila: you could magically double your capacity without any additional configuration.
With flexible provisioning (introduced in v11.4.0), you could carve up resources even further and scale an individual guest's compute and memory horizontally as well.
Since vCMP guests are full-fledged BIG-IP instances, you can also scale out the Device Service Cluster with additional guests.
Between hardware and software options, ScaleN provides a dizzying array of power and flexibility.
As discussed earlier, scaling out can involve a capacity planning strategy similar to the old Active/Active approach: reviewing which applications make sense to migrate. The only difference is that you migrate them to a Traffic Group instead of to another pair. If a single application is experiencing heavy load, it may be worth dedicating it to its own Traffic Group.
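As a rough sketch of that planning exercise (the application names, load units, and threshold here are entirely hypothetical, and this is not an F5 tool), you might pick the heaviest applications to migrate to a new Traffic Group until the existing one drops below a target load:

```python
# Hypothetical capacity-planning sketch: choose which applications to migrate
# to a new traffic group until the current group falls below a target load.

def pick_migrations(app_loads, target_load):
    """app_loads: dict mapping app name -> load metric (e.g., % of CPU
    attributable to that app's virtual servers -- units are illustrative).
    Returns (apps to move, heaviest first; remaining load on the old group)."""
    remaining = sum(app_loads.values())
    to_move = []
    # Move the heaviest apps first, stopping once the target is met.
    for app, load in sorted(app_loads.items(), key=lambda kv: -kv[1]):
        if remaining <= target_load:
            break
        to_move.append(app)
        remaining -= load
    return to_move, remaining

apps = {"shop": 35, "blog": 10, "api": 25, "intranet": 5}
moved, left = pick_migrations(apps, target_load=40)
print(moved, left)  # ['shop'] 40
```

In practice you would weigh more than raw load (session affinity, maintenance windows, customer boundaries), but the aggregate-then-select idea is the same.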
If DNS load balancing is an option, it might make sense to simply duplicate the same configuration on the second Traffic Group.
Traffic Groups can even serve as a unit of multi-tenancy, with each customer placed in its own traffic group.
Traffic Groups can also be stacked, which, incidentally, is the use case the "Load Aware" failover method was designed for. When stacking, capacity planning needs to account for the individual load of each traffic group. BIG-IP reports the throughput and CPU load of each VIP, so you can add up the metrics for each virtual server in a traffic group to help determine its resource allocation (or load factor). Upon failover, traffic groups are balanced across the available cluster nodes based on their total loads to maintain optimal performance.
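To make the arithmetic concrete, here is a minimal sketch of that idea: sum the per-VIP metrics into a load factor per traffic group, then place groups greedily on the least-loaded node. This is purely illustrative (the group names, load units, and greedy placement are assumptions), not BIG-IP's actual Load Aware algorithm:

```python
# Illustrative sketch only -- not BIG-IP's internal Load Aware implementation.
# A traffic group's load factor is the sum of its virtual servers' load metrics;
# groups are then assigned, heaviest first, to whichever node is least loaded.

def load_factor(vip_loads):
    """Sum the per-VIP metrics (e.g., normalized throughput/CPU) for one group."""
    return sum(vip_loads)

def balance(traffic_groups, nodes):
    """traffic_groups: dict of group name -> list of per-VIP load metrics.
    nodes: list of cluster node names.
    Returns dict of node -> list of traffic groups placed on it."""
    placement = {n: [] for n in nodes}
    node_load = {n: 0.0 for n in nodes}
    # Heaviest groups first, so large groups don't end up stacked together.
    for name, vips in sorted(traffic_groups.items(),
                             key=lambda kv: -load_factor(kv[1])):
        target = min(nodes, key=lambda n: node_load[n])
        placement[target].append(name)
        node_load[target] += load_factor(vips)
    return placement

groups = {
    "tg-1": [30, 20],  # two VIPs, load factor 50
    "tg-2": [40],      # load factor 40
    "tg-3": [10, 5],   # load factor 15
}
print(balance(groups, ["bigip-a", "bigip-b"]))
# {'bigip-a': ['tg-1'], 'bigip-b': ['tg-2', 'tg-3']}
```

Note how the two lighter groups stack on one node (40 + 15) to roughly match the single heavy group (50) on the other, which is the behavior stacking with Load Aware is meant to achieve.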
As mentioned above, you could also scale out using a "Spanned VIP" with ECMP. This type of deployment is best suited to situations where state isn't important or is handled by the application itself (e.g., not an online banking or gambling app, where an ADC failover could disrupt a critical transaction). Some newer applications are getting better at preserving state on their own, so this may become a more feasible option going forward.