Hey guys, I ran into this very problem a couple of years back. Persistence worked just great at the WideIP level for a while, then it seemed we were getting more and more complaints about users bouncing from one DC to the next. After some investigation, yep, you guessed it: load balancing of LDNSs. The problem with persistence at the WIP level is that it's based on the full 32-bit address of the LDNS.
I actually put in a feature request a while back so you could specify the CIDR block, i.e. persist on the first 8 bits, 16 bits, etc. This was back in v9.3.x, I believe, and I don't think it has been resolved. Has it? If you see this as useful, could you please +1 the feature request? I'll try to dig up the ID, or maybe someone internal will see this post and dig it up for us ;)
So your solution is to use Static Persist at the pool level, where you CAN specify the CIDR. You set it under System > Configuration > Global Traffic > General > Static Persist CIDR (IPv4) & (IPv6). I do wish this weren't a global setting and could instead be specified at the pool level.
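To see why the CIDR matters, here is a minimal sketch (not F5's code, just an illustration) of how masking an LDNS address to a configured prefix collapses a whole block of resolvers onto one persistence key:

```python
import ipaddress

def persist_key(ldns_ip: str, cidr: int = 24) -> str:
    """Mask an LDNS address to the configured CIDR so all resolvers
    in that block share one persistence key (illustration only)."""
    net = ipaddress.ip_network(f"{ldns_ip}/{cidr}", strict=False)
    return str(net.network_address)

# Two different resolvers in the same /24 map to the same key,
# so they persist to the same data center.
print(persist_key("10.1.2.3"))    # -> 10.1.2.0
print(persist_key("10.1.2.200"))  # -> 10.1.2.0
```

With the WIP-level behavior (effectively a /32), those two resolvers would be treated as different clients, which is exactly the bouncing problem above.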
Now, that will absolutely work for you, but you need to understand how it works and the trade-offs.
Static Persist takes an MD5 hash of the pool-member makeup and the LDNS address, then returns the SAME answer to that user every time, unless the pool makeup changes or that member becomes unavailable. There is no persistence table with this logic.
With all of that in mind, you may not see the most even split of traffic. In my experience it was never too bad, 60/40-ish, and the business was very happy to have a fix, but it has the potential to be worse. So take that into account and share the info with your stakeholders. If persistence is what you need, though, this seems to be the best option short of using topology record logic.