F5 BIG-IP Unicast VXLAN-GPE Tunnel Sample Config
Hello everyone, I'm looking for a unicast VXLAN-GPE tunnel sample config on BIG-IP. It would be a great help if anyone can share one or point me to documentation. I have already checked the official documentation, but it only covers the VXLAN-GPE multicast scenario, whereas in my case it's a unicast tunnel. I also want to use IPv4 as the Next Protocol; Ethernet seems to be the default, and I don't see anything in the documentation about how to change it to IPv4. You can configure a VXLAN Generic Protocol Extension (GPE) tunnel when you want to add fields to the VXLAN header. One of these fields is Next Protocol, with values for Ethernet, IPv4, IPv6, and Network Service Header (NSH). Thanks!
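In case it helps anyone landing here with the same question, below is a minimal sketch of what a point-to-point (unicast) setup might look like in tmsh. Treat everything in it as an assumption to verify on your own version: the built-in vxlan-gpe parent profile, the flooding-type attribute (documented for standard VXLAN tunnel profiles), and the object names and addresses, which are all hypothetical. Notably, I could not find a tmsh attribute for the Next Protocol field, so switching it from Ethernet to IPv4 may not be exposed in tmsh at all and is worth confirming with F5 support.

# A sketch only; first confirm a vxlan-gpe parent profile exists:
#   tmsh list net tunnels vxlan
create net tunnels vxlan vxlan-gpe-unicast {
    defaults-from vxlan-gpe
    flooding-type none
}
# With flooding-type none and an explicit remote-address, the tunnel
# is strictly point-to-point, i.e. unicast.
create net tunnels tunnel gpe-tunnel-1 {
    profile vxlan-gpe-unicast
    key 5000
    local-address 10.1.1.1
    remote-address 10.1.2.1
}

Here key is the VNI carried in the VXLAN header, and the local and remote addresses are the two tunnel endpoints (self IPs on each device).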
Multi L3DSR traffic handling

Hi guys. I have a question about multi-hop L3DSR using the SDN license. The traffic path is:

client -> L4-1's VIP -> L4-2's VIP -> server

The whole topology is L3DSR using IPIP encapsulation. This is L4-1's configuration:

ltm virtual /Common/VS_10.10.10.10-80-L3DSR {
    destination /Common/10.10.10.10:80
    ip-protocol tcp
    mask 255.255.255.255
    pool /Common/P-10.10.10.10-80-L3DSR_check_10.10.10.10
    profiles {
        /Common/L3DSR-TCP-Profile { }
    }
    source 0.0.0.0/0
    translate-address disabled
    translate-port disabled
}
ltm profile fastl4 /Common/L3DSR-TCP-Profile {
    app-service none
    defaults-from /Common/fastL4
    hardware-syn-cookie disabled
    idle-timeout 300
    loose-close enabled
    pva-offload-dynamic disabled
    tcp-handshake-timeout 10
}
ltm pool /Common/P-10.10.10.10-80-L3DSR_check_10.10.10.10 {
    members {
        /Common/20.20.20.4:80 {
            address 20.20.20.4   ---> this is L4-2's self IP
        }
    }
    monitor /Common/M-10.10.10.10-HTTP-80-L3DSR
    profiles {
        /Common/ipip
    }
}
ltm monitor tcp /Common/M-10.10.10.10-HTTP-80-L3DSR {
    adaptive disabled
    defaults-from /Common/tcp
    destination 10.10.10.10:80
    interval 5
    ip-dscp 0
    recv none
    recv-disable none
    send none
    time-until-up 0
    timeout 11
    transparent enabled
}
net tunnels tunnel /Common/TEST_tunnel-1 {
    local-address 10.10.10.4
    mode outbound
    profile /Common/ipip
    remote-address 20.20.20.4
}

And this is L4-2's configuration:

ltm virtual /Common/VS_10.10.10.10-80-L3DSR {
    destination /Common/10.10.10.10:80
    ip-protocol tcp
    mask 255.255.255.255
    pool /Common/P-10.10.10.10-80-L3DSR
    profiles {
        /Common/L3DSR-TCP-Profile { }
    }
    source 0.0.0.0/0
    translate-address disabled
    translate-port disabled
    vlans {
        /Common/TEST_tunnel-2
    }
    vlans-enabled
}
ltm pool /Common/P-10.10.10.10-80-L3DSR {
    members {
        /Common/50.50.50.100:80 {
            address 50.50.50.100   ---> this is the real server
        }
    }
    monitor /Common/M-10.10.10.10-HTTP-80-L3DSR
    profiles {
        /Common/ipip
    }
}
ltm monitor tcp /Common/M-10.10.10.10-HTTP-80-L3DSR {
    adaptive disabled
    defaults-from /Common/tcp
    destination 10.10.10.10:80
    interval 5
    ip-dscp 0
    recv none
    recv-disable none
    send none
    time-until-up 0
    timeout 11
    transparent enabled
}
net tunnels tunnel /Common/TEST_tunnel-2 {
    local-address 20.20.20.4
    mode outbound
    profile /Common/ipip
    remote-address 10.10.10.4
}

In this setup the health check is up, but L4-2 does not handle the client traffic and I see destination-unreachable messages. Both L4s use L3 devices as their gateways, and this test network is private and isolated from the public network. Has anyone resolved an issue like this? Thank you.
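One way to narrow this down is to check on L4-2 whether the IPIP-encapsulated client traffic actually arrives and gets decapsulated into the tunnel, since the monitor traffic and the client traffic can take different paths. A rough capture sketch follows; the -ni 0.0 "all interfaces" form is standard BIG-IP tcpdump usage, but capturing directly on a tunnel name is an assumption to verify on your version:

# On L4-2: outer IPIP packets (IP protocol 4) arriving at the self IP
tcpdump -ni 0.0 'ip proto 4 and host 20.20.20.4'

# On L4-2: inner packets after decapsulation, seen on the tunnel itself
tcpdump -ni TEST_tunnel-2 'host 10.10.10.10 and tcp port 80'

If the outer packets arrive but nothing shows up on the tunnel, the decapsulation path (the tunnel definition or the vlans restriction on L4-2's virtual server) is the suspect. If the inner packets do show up, check the usual L3DSR requirement that the real server has the VIP 10.10.10.10 configured on a loopback; with address translation disabled end to end, the server will otherwise reject the untranslated destination.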
How to start with F5 SDN?

Hi guys, I have a request from a customer who wants to use F5 SDN with Cisco ACI, but I can't find a guide on this integration, or even a manual explaining what SDN is. Is it something new and impractical, or complex to implement? Where can I find documentation about F5 SDN (or just the SDN technology concept), and how do I get started with F5 SDN? Thank you very much.
F5 LBaaSv1 integration with OpenStack Kilo and Cisco ACI

Hello, has anyone tried to run F5 LBaaSv1 for OpenStack Kilo while using Cisco ACI as the SDN solution with the ML2 plugin integration? We are setting up that scenario with a physical F5 appliance shared between all the OpenStack tenants (the "under the cloud" deployment described in the F5 docs), but when trying to create a pool via the Horizon GUI we get the following error:

ERROR f5.oslbaasv1agent.drivers.bigip.agent_manager [-] Exception: Unsupported network type opflex. Cannot setup network.

OpFlex is the standard network type for OpenStack with ACI, and I am not sure if it's possible to use another network type. We are asking Cisco support, but meanwhile any information about a similar setup would be appreciated. Thanks.
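For what it's worth, the error text itself says the agent fails while trying to set up a network of a type it does not recognize, and opflex is evidently not in its supported list. One workaround sometimes used in shared "under the cloud" deployments like this one, assuming the BIG-IP already has plain L3 reachability to the VIP and member subnets (which the ACI fabric can provide), is to run the agent in global routed mode so it skips L2 network provisioning entirely. A sketch of the relevant setting, assuming the stock agent ini layout (the file path may vary by packaging):

# /etc/neutron/f5-oslbaasv1-agent.ini
[DEFAULT]
# Assume global L3 reachability and skip per-tenant L2 network setup;
# this sidesteps the network-type check that fails on opflex.
f5_global_routed_mode = True

Note that global routed mode gives up per-tenant L2 isolation on the BIG-IP, so it only fits the shared-appliance model described above.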