Forum Discussion
BIG-IQ DCD
It is quite common to have a DCD in each datacentre and use zoning, so that each BIG-IP sends its logs to the closest DCD.
However, log destination failover is handled by the BIG-IP itself, i.e. use a pool to select the destination DCD.
Statistics failover is also handled by the BIG-IP: it will send statistics to a backup DCD in the same zone, as configured in the AVR settings.
So the DCDs act as backups in terms of failover, but it is not the case that all logs are replicated to all DCDs. If you send logs to one DCD and it fails, those logs are lost unless you send them to two different DCDs via the BIG-IP publisher. Obviously that doubles the disk usage, network utilisation, etc.
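As a rough sketch, the pool-based destination and dual-DCD publisher described above might look like this in tmsh (all names, addresses, and the listener port 8514 are assumptions; check your own DCD listener configuration):

```
# Pool of DCD log listeners in the local zone (addresses/port are placeholders)
create ltm pool pool_dcd_dc1 members add { 10.1.1.10:8514 10.1.1.11:8514 }

# High-speed-log destination backed by that pool
create sys log-config destination remote-high-speed-log hsl_dcd_dc1 { pool-name pool_dcd_dc1 protocol tcp }

# Publisher; adding a second destination (e.g. a DCD pool in the other
# datacentre) is what doubles the disk and network usage mentioned above
create sys log-config publisher pub_dcd { destinations add { hsl_dcd_dc1 } }
```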
I was under the impression that the DCDs were an Elasticsearch cluster under the hood, in a 2+1 setup.
So you could send the data to one DCD and the cluster would spread the data across two nodes, keeping one replica copy.
Which would mean that if you lost a DCD the data would be preserved; you'd just need to get the failed unit back online to rebuild the shards and restore your resilience.
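The replica reasoning above can be sketched as a toy model (illustrative only, not BIG-IQ code): Elasticsearch allocates each copy of a shard (primary or replica) on a distinct node, so a shard's data survives as long as the number of copies exceeds the number of failed nodes.

```python
# Toy model of shard survivability in an Elasticsearch-backed DCD cluster.
# Assumes each copy of a shard lives on a distinct node, which is how
# Elasticsearch allocates primaries and replicas.

def shard_survives(copies_per_shard: int, failed_nodes: int) -> bool:
    """True if at least one copy of the shard remains on a live node."""
    return copies_per_shard > failed_nodes

# With one replica (primary + 1 replica = 2 copies per shard):
print(shard_survives(copies_per_shard=2, failed_nodes=1))  # True: one DCD lost, data preserved
print(shard_survives(copies_per_shard=2, failed_nodes=2))  # False: two DCDs lost, data gone
```

So with a single replica the cluster tolerates exactly one failed DCD, which matches the "get the failed unit back online to rebuild the shards" recovery path.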