Forum Discussion
Ansible Error: An exception occurred during task execution
- Feb 24, 2022
Problem solved! At least for us. In short, the module in our playbook needs to be executed on the BIG-IP itself, which uses Python 2.7 and causes the "split()" error. In most cases this can be resolved with "connection: local" or "delegate_to: localhost", as it is with all the F5 Ansible modules (see the sketch below). In our case the solution was a bit different; you can check the GitHub issue for the explanation.
It is running Python 3.8 and BIG-IP version 14.1.2.6.
Sanitized Playbook here
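For reference, a minimal sketch of the usual "connection: local" approach. The module, subset, host group, and variable names here are illustrative and are not taken from the sanitized playbook above:

---
# Generic illustration only: module choice, hosts, and variables are assumptions,
# not the sanitized playbook referenced above.
- name: Run F5 modules from the Ansible control node
  hosts: bigip
  connection: local        # execute modules locally instead of on the BIG-IP
  gather_facts: false

  tasks:
    - name: Gather basic device facts over the REST API
      f5networks.f5_modules.bigip_device_info:
        gather_subset:
          - system-info
        provider:
          server: "{{ inventory_hostname }}"
          user: "{{ bigip_user }}"
          password: "{{ bigip_password }}"
          validate_certs: false
      # per-task alternative to 'connection: local':
      # delegate_to: localhost

With connection: local (or delegate_to: localhost) the module talks to the BIG-IP over its REST API, so the Python interpreter on the device itself never comes into play.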
Anything in the GitHub issue mkratochvil mentioned helpful to you?
- nitass_89166 (Noctilucent), Jan 04, 2018
The problem is that our business requires traffic to be encrypted end to end, so I am using a Performance (Layer 4) virtual server. Because of that I am not able to host a maintenance page on the F5 using iFiles: if I add an HTTP profile alongside the Performance L4 profile, it breaks the connection. Is there a way to host a maintenance page on the F5 using an iFile?
Might this be usable?
// config
[root@ve13a:Active:In Sync] config # tmsh list ltm virtual bar
ltm virtual bar {
    destination 172.28.24.10:443
    mask 255.255.255.255
    pool foo
    profiles {
        fastL4 { }
    }
    rules {
        qux
    }
    source 0.0.0.0/0
    source-address-translation {
        type automap
    }
    translate-address enabled
    translate-port enabled
    vs-index 13
}
[root@ve13a:Active:In Sync] config # tmsh list ltm pool foo
ltm pool foo {
    members {
        200.200.200.101:443 {
            address 200.200.200.101
            session monitor-enabled
            state down
        }
    }
    monitor fake
}
[root@ve13a:Active:In Sync] config # tmsh list ltm rule qux
ltm rule qux {
    when CLIENT_ACCEPTED {
        if { [active_members [LB::server pool]] < 1 } {
            virtual sorrypage
        }
    }
}
[root@ve13a:Active:In Sync] config # tmsh list ltm virtual sorrypage
ltm virtual sorrypage {
    destination 0.0.0.0:443
    ip-protocol tcp
    mask any
    profiles {
        clientssl {
            context clientside
        }
        http { }
        tcp { }
    }
    rules {
        sorrypage_rule
    }
    source 0.0.0.0/0
    translate-address disabled
    translate-port enabled
    vlans-enabled
    vs-index 14
}
[root@ve13a:Active:In Sync] config # tmsh list ltm rule sorrypage_rule
ltm rule sorrypage_rule {
    when HTTP_REQUEST {
        HTTP::respond 200 content "this is sorry page\n" noserver
    }
}

// test
[root@centos1 ~]# curl -ik https://172.28.24.10
HTTP/1.0 200 OK
Connection: Keep-Alive
Content-Length: 19

this is sorry page
Thanks. I see you used the HTTP_REQUEST event in your iRule. The problem is that if I use this event, I have to attach an HTTP profile to the VS, which in turn will not trigger the maintenance page without doing SSL bridging.
The second option I could use is hosting the maintenance page on an external server and routing traffic to that server in case the VS goes down. Not sure if that is possible without SSL bridging?
- nitass_89166 (Noctilucent), Jan 04, 2018
I see you used the HTTP_REQUEST event in your iRule. The problem is that if I use this event, I have to attach an HTTP profile to the VS, which in turn will not trigger the maintenance page without doing SSL bridging.
There are two virtual servers, aren't there? The HTTP profile is on the internal virtual server (i.e. not the external one that faces users).
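In other words, the pattern in the config above keeps the external fastL4 virtual server as a pure TCP/TLS pass-through and only hands connections to the internal, HTTP-aware virtual server when the pool is empty, so the HTTP profile (and HTTP_REQUEST) never touches normal end-to-end-encrypted traffic. A condensed sketch of the two iRules, with the literal string swapped for an iFile; the iFile name "maintenance_page" is just an example and assumes the file has been imported as an LTM iFile:

# iRule on the external fastL4 virtual server (no HTTP profile needed here):
# divert to the internal virtual server only when no pool member is available.
when CLIENT_ACCEPTED {
    if { [active_members [LB::server pool]] < 1 } {
        virtual sorrypage
    }
}

# iRule on the internal "sorrypage" virtual server (clientssl + http + tcp profiles):
# TLS is terminated here, so HTTP_REQUEST fires and the iFile can be served.
when HTTP_REQUEST {
    # "maintenance_page" is an illustrative iFile name
    HTTP::respond 200 content [ifile get maintenance_page] noserver
}

Clients still get a certificate from the clientssl profile on the sorrypage virtual server, so the maintenance page itself is served over HTTPS while normal traffic stays encrypted end to end to the pool member.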
- JoshBarrow (Cirrus), Feb 25, 2022
Going to be trying this out today! I'll keep this updated!