Your best bet for getting VE to work on hypervisors that F5 does not explicitly support is always to use that platform's version of ovftool to convert the OVA into something the target hypervisor can consume. This works nicely for Fusion.
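For example, a minimal conversion might look like the following. This is a sketch: the ovftool path assumes the copy bundled with Fusion on macOS, and the OVA and output names are placeholders you'd adjust for your own deployment.

```shell
# Path to the ovftool bundled with VMware Fusion (location can vary by Fusion version).
OVFTOOL="/Applications/VMware Fusion.app/Contents/Library/VMware OVF Tool/ovftool"

# Convert the BIG-IP VE OVA into a native Fusion VM (.vmx plus .vmdk disks).
# --lax relaxes strict OVF spec checks that often trip up cross-platform OVAs.
"$OVFTOOL" --lax --acceptAllEulas \
    BIGIP-VE.ova \
    ~/Documents/Virtual\ Machines/BIGIP-VE.vmx
```

Once converted, you open the resulting .vmx in Fusion like any other VM.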
I did that to start; the machine I'm having problems with was built on Fusion 3 via ovftool. Once I upgraded to Fusion 4, it killed all networking. There are a bunch of errors on boot in /var/log/ltm relating to smbios and vmxnet3 failures. :(
It's likely that Fusion 4 doesn't work with versions of BIG-IP VE earlier than 10.2.3 for new deployments.
The earliest version that will work is likely to be 10.2.2-hf2, if you deployed your 10.2.x on Fusion 3 or on very late revisions of Fusion 2.
Note: the BIG-IP VE manuals include a 10.2.x -> 11.x upgrade guide that details the supported steps for upgrading on vSphere. It sounds like the upgrade guide's steps could be applied to Fusion as well.
There are three basic indicators I use to tell me that a machine has a good chance of functioning after upgrade:
1) chmand and tmm don't keep restarting. Restart loops in these daemons are the big indicator that something is amiss with the hypervisor's virtual hardware presentation to BIG-IP VE at boot.
2) It goes active after UCS restore (and some time for everything to firm up)
- a UCS restore will attempt to reinstall the license if the target system's hostname matches the hostname archived in the UCS. This is quite a time saver.
3) I can see my pool members go green (or I can ping something on a VLAN where I've attached a BIG-IP interface and self-IP). Basically: are the NICs functioning? I also verify that my BIG-IP interface to hypervisor VLAN mapping is correct. Things sometimes get jumbled because the hypervisor can present the NICs in a different PCI order between versions, or after adding/removing NICs. tcpdump is great for figuring out when this happens.
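The checks above can be sketched as a few commands run from the BIG-IP shell. These are standard BIG-IP tools, but the UCS archive name, interface number, and ping target are placeholders for your own environment, and exact output varies by version.

```shell
# 1) Are chmand and tmm up and stable (not restart-looping)?
bigstart status chmand tmm
grep -E 'chmand|tmm' /var/log/ltm | tail -20

# 2) The hostname must match the one archived in the UCS for the
#    license to be restored automatically.
tmsh list sys global-settings hostname
tmsh load sys ucs /var/local/ucs/my-backup.ucs   # hypothetical archive name

# 3) Is traffic actually arriving on the interface you expect?
#    Watch each BIG-IP interface and compare what you see against the
#    hypervisor's NIC/VLAN assignments.
tcpdump -ni 1.1 -c 10
ping -c 3 192.0.2.1   # hypothetical host on the attached VLAN
```

If tcpdump shows traffic for the wrong VLAN on an interface, that's usually the PCI-reordering problem described above, and remapping the NICs in the hypervisor settings fixes it.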