administrative partitions
7 Topics

confine iRule within a partition
I have an iRule that looks like this:

```tcl
when HTTP_REQUEST {
    if { [string tolower [HTTP::uri]] starts_with "/profile" } {
        pool profile_3111
    } else {
        pool ecentral_4111
    }
}
```

The default pool for the VIP is ecentral_4111. That pool is in offline status, but the VIP is online because pool profile_3111 is online. That much is understandable. What I don't understand is why I get a reply back when the request doesn't contain '/profile'. I opened a support ticket, and the engineer replied that this is because another pool with the same name, but in a different partition, is online. That can't be proper behavior, can it? And if it does work as expected, how do I confine traffic within a partition? Thanks, Vadym

Change default Route Domain for a Partition - Python F5-SDK
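For the partition-confinement question above: one way to remove any ambiguity about which partition's pool is selected is to reference pools by their full path. A minimal sketch, assuming the pools live in a hypothetical partition named PartitionA (adjust to your setup):

```tcl
# Sketch: fully qualified pool paths make the partition explicit,
# so a same-named pool in another partition cannot be picked up.
# "/PartitionA" is a placeholder partition name.
when HTTP_REQUEST {
    if { [string tolower [HTTP::uri]] starts_with "/profile" } {
        pool /PartitionA/profile_3111
    } else {
        pool /PartitionA/ecentral_4111
    }
}
```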
I am trying to change the default route domain for a partition using the F5-SDK. Here's my code to create the partition and the route domain:

```python
from f5.bigip import ManagementRoot

bigip = ManagementRoot('ipaddress', 'user', 'password')
newpart = bigip.tm.sys.folders.folder.create(name='mypartition', subPath='/')
newrd = bigip.tm.net.route_domains.route_domain.create(name='myrd', id='1', partition='mypartition')
```

After running the script, the partition and the route domain are created. Now I need to change the default route domain for my new partition; by default the partition is assigned the default route domain 0. Here's what I've tried:

```python
newpart.update(default_rd_id='1')
```

If I browse the API at https://localhost/mgmt/tm/sys/folder, I can't find the value to modify:

```json
{
    "kind": "tm:sys:folder:folderstate",
    "name": "mypartition",
    "subPath": "/",
    "fullPath": "/mypartition",
    "generation": 38,
    "selfLink": "https://localhost/mgmt/tm/sys/folder/~mypartition?ver=12.1.2",
    "deviceGroup": "none",
    "hidden": "false",
    "inheritedDevicegroup": "true",
    "inheritedTrafficGroup": "true",
    "noRefCheck": "false",
    "trafficGroup": "/Common/traffic-group-1",
    "trafficGroupReference": {
        "link": "https://localhost/mgmt/tm/cm/traffic-group/~Common~traffic-group-1?ver=12.1.2"
    }
}
```

How do I reference an administrative partition from bash?
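For the default-route-domain question above: the sys folder endpoint does not appear to expose this setting; the default route domain is held by the partition object under /mgmt/tm/auth/partition. A minimal sketch, assuming that endpoint, the SDK's tm.auth.partitions collection, and the attribute name defaultRouteDomain, untested against a live device, so verify each of those:

```python
# Sketch: set a partition's default route domain via /mgmt/tm/auth/partition.
# The 'defaultRouteDomain' attribute name and SDK path are assumptions.

def default_rd_payload(rd_id):
    """Build the update body that sets a partition's default route domain."""
    return {'defaultRouteDomain': int(rd_id)}

# Against a live BIG-IP this would look roughly like:
#
#   from f5.bigip import ManagementRoot
#   bigip = ManagementRoot('ipaddress', 'user', 'password')
#   part = bigip.tm.auth.partitions.partition.load(name='mypartition')
#   part.update(**default_rd_payload(1))

print(default_rd_payload(1))  # {'defaultRouteDomain': 1}
```

The tmsh equivalent, if you just need it done once, should be along the lines of `modify auth partition mypartition default-route-domain 1`.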
We recently deployed administrative partitions in several guests for the first time. I know, I know. It wasn't my decision! I used to be able to collect information like this:

```shell
tmsh -q show ltm virtual > list.txt
```

I want to be able to do the same thing from bash, per administrative partition.

vCMP route-domain issue
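For the per-partition bash question above, one approach is to loop over the partition names and run the tmsh command inside each one with `cd`. A sketch with the tmsh call stubbed out so the loop and parsing can be shown; the output format of `list auth partition one-line` is an assumption to verify on your box:

```shell
#!/bin/sh
# Sketch: run a tmsh command once per administrative partition.

list_partitions() {
    # On a real BIG-IP this would be:
    #   tmsh -q list auth partition one-line | awk '{print $3}'
    # Stubbed here with sample output so the parsing logic is visible:
    printf 'auth partition Common { }\nauth partition PartitionA { }\n' |
        awk '{print $3}'
}

for p in $(list_partitions); do
    echo "== $p =="
    # On a real BIG-IP, uncomment:
    # tmsh -q -c "cd /$p; show ltm virtual" > "list_$p.txt"
done
```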
Having a strange issue. The F5 is logically inline between a firewall and the servers. I attempted to migrate from a Virtual Edition to a vCMP guest and ran into a few issues. The main issue I am struggling with is that the vCMP guest, configured with partitions and route domains, is not reachable on the server-facing self IP from the client side. Code version is 12.1.2.

Let's say we have two VLANs in one partition/route domain:

- VLAN 10, 192.168.10.0/24, client facing
- VLAN 20, 192.168.20.0/24, server facing

The route domain in question has a default route whose gateway is a layer 3 VLAN interface on the firewall. The servers have a default gateway of the floating self IP on the F5.

Virtual Edition:
- VLAN 10 and VLAN 20 self IP addresses are pingable from user networks through the firewall
- F5 can ping servers in VLAN 10 from the VLAN 10 self IP
- Users can ping servers in VLAN 10 through the firewall

vCMP guest:
- VLAN 10 self IP addresses are pingable from the user networks through the firewall
- VLAN 20 self IP addresses are unresponsive
- F5 can ping servers in VLAN 10 from the VLAN 10 self IP
- Users CANNOT ping servers in VLAN 10 through the firewall

The bigip.conf objects were copied from the Virtual Edition partition to the vCMP guest partition. All bigip_base.conf objects were created manually. There are 4 partitions/route domains in total, each set up similarly, and all have the same issue.

Per F5 instructions:
- inherited VLANs from the host
- deleted VLANs in the guest
- created route domains
- created partitions with the appropriate route domain set as the default for the partition
- re-created VLANs inside the appropriate partitions

Not really sure where to begin. I probably should have restarted MCPD, but didn't get a chance before the rollback. Am I missing something, or could it have just been an MCPD issue?

BIG-IP is failing to reach the default gateway.
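For the vCMP migration above, a hedged checklist (not a fix) of tmsh commands to confirm that the guest's VLANs actually landed in the intended route domains. The addresses and route-domain ID are placeholders; the `%ID` suffix on the ping target selects the route domain:

```shell
# Verify VLAN-to-route-domain membership on the guest:
tmsh list net route-domain one-line   # which VLANs belong to which RD
tmsh list net vlan                    # tag and interface assignment per VLAN
tmsh show net arp                     # is the gateway MAC being learned?

# Ping the firewall gateway from inside route domain 1 (placeholder values):
ping -c 3 192.168.20.1%1
```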
We are facing a problem reaching the default gateway from one of the partitions we created on a BIG-IP system (vCMP guest on a vCMP host, BIG-IP 5200v chassis). We created a VLAN with a /26 subnet; it has two local IP addresses (one on each unit) and a floating address. The default gateway is an SVI on a Juniper firewall, and the transit is built on a Nexus 5K switch. The setup looks like:

BIG-IP (vCMP guest) >> LACP trunk (multiple VLANs) >> Nexus layer 2 >> LACP trunk (multiple VLANs) >> firewall

Notes:
- We have more than 20 VLANs on the LACP trunk, and all work fine except the newly created VLAN.
- I can see the firewall interface's MAC address on the layer 2 switch, but I cannot find the BIG-IP interface's MAC for the new VLAN.

Please advise, thank you.

Security Chart Scheduler per Partition
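For the unreachable-gateway question above, a hedged sketch of checks on the guest side; if the new VLAN tag is missing from the trunk on either end, the guest's MAC will never be learned upstream. The object names below are placeholders:

```shell
# On the vCMP guest: confirm the new VLAN is tagged on the trunk and
# that a self IP is bound to it ("new_vlan" / "trunk1" are placeholders).
tmsh list net vlan new_vlan             # expect the trunk listed as tagged, with the right tag ID
tmsh list net self                      # self IPs and their VLAN bindings
tmsh show net interface                 # trunk member status

# On the switch side, the port-channel's allowed-VLAN list must also
# include the new tag; a one-sided tag is consistent with seeing the
# firewall MAC but never the BIG-IP MAC.
```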
Hello there,

I'm having an issue trying to create a Security Chart Scheduler per partition. When I go to Security > Charts > Chart Scheduler, I can't change the partition on that tab. If I create the schedule there without changing the partition, it is created in the default one (Common). If I go to another tab, such as LTM > Virtual Server, change the partition, and then go back to the Chart Scheduler tab, the schedule is created in the partition I chose on the Virtual Server screen.

From the CLI I can verify it, and it shows the right information as I move between partitions and list the charts. But the GUI Chart Scheduler screen shows all of them with no partition, so I can't check whether a schedule was created correctly without going to the CLI, changing from partition to partition, and listing the charts one by one. And every chart sends the information of the entire box (all partitions).

Is this right? Is it supposed to work like this, or is it not supposed to work at all? Could it be a GUI bug? Has anyone seen this behavior before?

```shell
root@xxx(Active)(/Common)(tmos) cd /Common/
root@xxx(Active)(/Common)(tmos) list analytics application-security scheduled-report
analytics application-security scheduled-report TESTCHART1 {
    email-addresses { xxxx }
    first-time 2016-04-25:21:00:00
    frequency every-24-hours
    include-total enabled
    next-time 2016-04-25:21:00:00
    predefined-report-name "/Common/Top alarmed URLs"
}
root@xxx(Active)(/Common)(tmos) cd /Cliente1/
root@xxx(Active)(/Cliente1)(tmos) list analytics application-security scheduled-report
analytics application-security scheduled-report "Principais URLs" {
    email-addresses { henrykrauss@gmail.com }
    first-time 2016-03-16:00:00:00
    frequency every-24-hours
    include-total enabled
    last-sent-time 2016-04-22:00:01:07
    multi-leveled-report {
        time-diff last-year
        view-by url
    }
    next-time 2016-04-23:00:00:00
    partition Cliente1
    smtp-config /Common/BR-CIS-REPO-01
}
```

Platform Name: BIG-IP 4200
Version: 11.4.1

Thanks for the help.

Resource management based on partitions
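For the chart-scheduler visibility issue above, a possible shortcut to avoid cd'ing into each partition by hand. This assumes the `recursive` keyword is accepted for this component, as it is for most tmsh list commands, which is an assumption worth verifying on 11.4.1:

```shell
# Hedged one-liner: list scheduled reports from all partitions at once.
tmsh -q -c 'cd /; list analytics application-security scheduled-report recursive'
```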
Hi guys,

I have a question regarding resource management in a BIG-IP LTM environment. With the Cisco ACE I had the ability to allocate different resources (SSL bandwidth, throughput, etc.) to the individual contexts in which different customers reside. Now I'm looking for something similar for the partitions on the F5 LTM, mainly for throughput rates, though controlling SSL TPS would be nice as well. Is there any chance to achieve this?

Regards,
Otto
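On the question above: BIG-IP partitions are administrative containers rather than resource containers, so there is no direct per-partition throughput cap in LTM. One hedged approximation is to bound each customer at the virtual-server level with rate shaping and connection limits. The object names and values below are placeholders, and the exact syntax should be checked against your version:

```shell
# Sketch: cap a customer's virtual server at ~10 Mbps via a rate-shaping
# class, and bound concurrent load with a connection limit.
tmsh create net rate-shaping class customer_a_10mbps { rate 10mbps }
tmsh modify ltm virtual /CustomerA/vs_web { rate-class customer_a_10mbps }
tmsh modify ltm virtual /CustomerA/vs_web { connection-limit 5000 }
```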