Protect multi-cloud and Edge Generative AI applications with F5 Distributed Cloud
F5 Distributed Cloud capabilities allow customers to use a single platform for connectivity, application delivery, and security of GenAI applications in any cloud location and at the Edge, with a consistent and simplified operational model: a game changer for a streamlined operational experience for DevOps, NetOps, and SecOps.

Journey to the Multi-Cloud Challenges
Introduction

The proliferation of internet-based applications, digital transformations accelerated by the pandemic, an increase in multi-cloud adoption, and the rise of the distributed cloud paradigm all bring new business opportunities as well as new operational challenges. According to a Propeller Insights survey, 75% of all organizations are deploying apps in multiple clouds. 63% of those organizations are using three or more clouds, and 56% are finding it difficult to manage workloads across different cloud providers, citing challenges with security, reliability, and connectivity. Below I outline some of the common challenges F5 has seen and illustrate how F5 Distributed Cloud is able to address those challenges. For the purpose of the following examples I am using this demo architecture.

Challenge #1: IP Conflict and IP Exhaustion

As organizations accelerate their digital transformation, they begin to experience significant network growth and changes. As their adoption of multiple public clouds and edge providers expands, they begin to encounter challenges with IP overlap and IP exhaustion. These challenges seldom happen on the Internet, where IP addresses are centrally managed. They are common for non-Internet traffic, however, because organizations use private/reserved IP ranges (RFC 1918) within their networks, and any organization is free to use any private ranges it wants. This presents an increasingly common problem as networks expand into public clouds, given the ease of infrastructure bootstrapping using automation, the needs of multi-cloud networking, and mergers and acquisitions. F5 Distributed Cloud can help organizations overcome IP conflict and IP exhaustion challenges by provisioning multiple apps with a single IP address.

How to Provision Multiple Apps with a Single IP Address (~8min)

Challenge #2: Easily consumable application services via a service catalogue

A multi-cloud paradigm causes applications to be very distributed.
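Returning briefly to Challenge #1: the IP-overlap problem described above can be sanity-checked programmatically before networks are stitched together. Here is a minimal sketch using Python's standard ipaddress module; the site names and CIDR ranges are invented for illustration, and in practice they would come from your IPAM or cloud provider APIs.

```python
import ipaddress

# Hypothetical per-site allocations for illustration only.
sites = {
    "aws-vpc-prod": ipaddress.ip_network("10.0.0.0/16"),
    "azure-vnet-prod": ipaddress.ip_network("10.0.128.0/17"),
    "on-prem-dc1": ipaddress.ip_network("172.16.0.0/12"),
}

def find_overlaps(allocations):
    """Return every pair of sites whose RFC 1918 ranges collide."""
    names = sorted(allocations)
    return [
        (a, b)
        for i, a in enumerate(names)
        for b in names[i + 1:]
        if allocations[a].overlaps(allocations[b])
    ]

print(find_overlaps(sites))
# Here 10.0.128.0/17 sits inside 10.0.0.0/16, so the AWS and Azure
# sites collide and would need re-addressing or a translation layer.
```

A check like this is cheap to run whenever a new VPC, VNet, or site is bootstrapped, which is exactly when overlaps tend to creep in.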
We often see applications running on multiple on-prem data centers, at the edge, and in public cloud infrastructure. Making those applications easily available often involves many infrastructure and security control changes - not an easy task. This includes common tasks such as service advertisement, updates to network routing and switching, changing firewall rules, and provisioning DNS. In this demo, we demonstrate how to seamlessly provision and advertise services, to and from public cloud providers and data centers. This capability enables an organization to seamlessly provision services and create consumable service catalogues.

How to Seamlessly Provision Services to/from the Cloud Edge (~4min)

Challenge #3: Operational (Day-2) Complexities

Often users have multiple discrete tools managing their infrastructure, and each tool provides its own dashboard for telemetry, visibility, and observability. Users need these tools consolidated into a single consistent view so they can tell exactly what is happening in their environments. F5 Distributed Cloud Console provides a 'single pane of glass' for telemetry, visibility, and observability, delivering operational efficiency for Day-2 operations and designed to reduce total cost of ownership.

Get a Single Pane of Glass on Telemetry, Visibility and Observability (~7min)

Challenge #4: Cloud vendor lock-in impedes business agility

Most organizations do not want their cloud workloads locked into a particular cloud provider. Cloud vendor lock-in can be a major barrier to the adoption of cloud computing, and CIOs show some concern with vendor lock-in per Flexera's 2020 CIO Priorities Report. To avoid cloud lock-in, create application resiliency, and get back some of the freedoms of cloud consumption - moving workloads from cloud to cloud - organizations need to be able to move between cloud providers quickly and easily in the unlikely event that one cloud provider becomes unavailable.
Workload Portability - How to Seamlessly Move Workloads from Cloud to Cloud (~4min)

Challenge #5: Consistent Security Policies Across Clouds

How do you ensure that every security policy you require is applied and enforced consistently across the entire fleet of endpoints? According to the F5 2020 State of Application Services Report, 59% of respondents said that applying consistent security policies across all company applications was one of their biggest challenges in multi-cloud security. This demo shows how to apply consistent security policies (WAF) across a fleet of cloud workloads deployed at the edge. This helps reduce risk, increase compliance, and maintain effective governance.

How to Apply Consistent Security Policies Across Clouds (~5min)

Challenge #6: Complexities of multi-cloud networking and integration with AWS Transit Gateway - management of security controls

A multi-cloud strategy introduces complexities around networking and security control between clouds and within clouds. Within one cloud (e.g., AWS), an organization may use the AWS Transit Gateway (TGW) to stitch together inter-VPC communication. Managing multiple VPCs attached to a TGW is, by itself, a challenge in managing security control between VPCs. In this demo, we show a simple way to leverage the F5 Distributed Cloud integration with AWS TGW to manage security policy across VPCs (also known as East-West traffic). This demo also demonstrates connecting an AWS VPC with other cloud providers such as Azure, GCP, or an on-prem cloud solution in order to unify the connectivity and reachability of your workloads.

Multi-Cloud Integration with AWS Transit Gateway (~19min)

What is the Edge?
Where oh where to begin? "The Edge" excitement today is reminiscent of "The Cloud" of many moons ago. Everyone, I mean EVERYONE, had a "to the cloud" product to advertise. C.S. Lewis (The Chronicles of Narnia) wrote an essay titled "The Death of Words" where he bemoaned the decay of words that transitioned from precise meanings to something far more vague. One example he used was gentleman, which had a clear objective meaning (a male above the station of yeoman whose family possessed a coat of arms) but had decayed (and is to this day) to a subjective state of referring to someone well-mannered. This is the case with industry shifts like cloud and edge, and it totally works to the advantage of marketing/advertising. The result, however, is usually confusion. In this article, I'll briefly break down the edge in layman's terms, then link out to the additional reading you should do to familiarize yourself with the edge, why it's hot, and how F5 can help with your plans.

What is edge computing?

The edge, plainly, is all about distribution: taking services once available only in private datacenters and public clouds and shifting them out closer to where the requests are, whether those requests are coming from humans or machines. This shift of services is comprehensive, so while technologies from the infancy of the edge like CDNs are still in play, the new frontier of compute, security, apps, storage, etc., enhances the user experience and broadens the scope of real-time possibilities. CDNs were all about distributing content. The modern edge is all about application and data distribution.

Where is the edge, though?

But, you say, how is that not the cloud? Good question. Edge computing builds on the technology developed in the cloud era, where de-centralized compute and storage architectures were honed. But the clouds are still regional datacenters. A good example to bring clarity might be an industrial farm.
Historically, data from these locations would be sent to a centralized datacenter or cloud for processing, and depending on the workloads, tractors or combines might be idle (or worse: errant) while waiting for feedback. With edge computing, a local node (consider this an enterprise edge) would be gathering all that data, processing, analyzing, and responding in real-time to the equipment, and then sending up to the datacenter/cloud anything relevant for further processing or reporting. Another example would be self-driving car or gaming technology, where perhaps the heavy compute for these is at the telco edge instead of having to backhaul all of it to a centralized processing hub.

Where is the edge? Here, there, and everywhere. The edge, conceptually, can be at any point in between the user (be it human, animal, or machine) and the datacenter/cloud. Physically, though, understand that just like "serverless" applications still have to run on an actual server somewhere, edge technology isn't magic; it has to be hosted somewhere as well. The point is that the host knows no borders; it can be in a provider, a telco, an enterprise, or even in your own home (see Lori's "Find My Cat" use case).

The edge is coming for you

The stats I've seen from Gartner and others are pretty shocking. 76% already have plans to deploy at the edge, and 75% of data will be processed at the edge by 2025? I'm no math major, but that sounds like one plus two, carry the three, uh, tomorrow! Are you ready for this? The good news is we are here to help. The best leaps forward in anything in our industry have always come from efforts bringing simplicity to the complexities. Abstraction is the key. Think of the progression of computer languages and how languages like C abstract the needs in Assembler, or how dynamically typed languages like Python even abstract away the need for types. Or how hypervisors abstract lower level resources and allow you to carve out compute.
Whether you're a netops persona thankful for tools that abstract BGP configurations from the differing syntax of various routers, or a developer thankful for libraries that abstract the nuances of different DNS providers so you can generate your SSL certificates with Let's Encrypt, all of that is abstraction. I like to know what's been abstracted. That's practical at times, but not often. Maybe in academia. Frankly, the cost associated with knowing "all the things" isn't one for which most orgs will pay. Volterra delivers that abstraction, to the compute stack and the infrastructure connective tissue, in spades, thus removing the tenuous manual stitching required to connect and secure your edge services.

General Edge Resources
Extending Adaptive Applications to the Edge
Edge 2.0 Manifesto: Redefining Edge Computing
Living on the Edge: How we got here
Increasing Diversity of Location and Users is Driving Business to the Edge
Application Edge Integration: A Study in Evolution
The role of cloud in edge-native applications
Edge Design & Data | The Edgevana Podcast (Youtube)

Volterra Specific Resources
Volterra and Power of the Distributed Cloud (Youtube)
Multi-Cloud Networking with Volterra (Youtube)
Network Edge App: Self-Service Demo (Youtube)
Volterra.io Videos

The (hopefully) definitive guide to load balancing Lync Edge Servers with a Hardware Load Balancer
Having worked on a few large Lync deployments recently, I have realized that there is still a lot of confusion around properly architecting the network for load balancing Lync Edge Servers. Guidance on this subject has changed from OCS 2007 to OCS 2007 R2 and now to Lync Server 2010, and it's important that care is taken while planning the design. It's also important to know that although a certain architecture may seem to work, it could be very far from best practice. I'll explain what I mean by that below.

The main purpose of Edge Services is to allow remote users (whether they are corporate, anonymous, federated, etc.) to communicate with other external/internal users and vice versa. If you're looking to extend your Lync deployment to support communication with federated partners, public IM services, remote users and such, then you'll want to make sure you deploy your Edge Servers properly. This post will discuss some requirements and best practices for deploying Edge Servers, and then we'll go into some suggested architectures. For this discussion, let's assume that there are 3 device types within your DMZ: your firewall, your BIG-IP LTM, and your Lync Edge Server farm.

Requirement 1: Your Edge Servers need at least 2 network interfaces; one or more dedicated to the external network, and one dedicated to the internal. The external and internal interfaces need to be on separate IP networks. The Edge Server will host 3 separate external services: Access, Web Conferencing, and Audio/Visual (A/V). If you plan on exposing all 3 services for remote users, you have a choice of using one IP for all 3 services on each server and differentiating them by TCP/UDP port value, or going with a separate IP for each service and using standard ports.

Best Practice: This is more preference than best practice, but I like to use 3 separate IPs for these services.
With alternative ports/port mapping, you can consolidate to a single IP, but unless you have a very specific reason for doing so, it's best to stick with 3 separate IPs. You do burn more IPs by doing this, but you'll have to use non-standard ports for certain services if you use a single IP, and this could lead to issues with certain network devices that like certain traffic types on certain ports. Plus, troubleshooting, traffic statistics, and logging are all cleaner if you are using 3 separate IPs.

Requirement 2: Traffic that is load balanced to the Lync Edge Servers needs to return through the load balancer. In other words, if the hardware load balancer sends traffic to an Edge Server, the return traffic from that Edge Server needs to flow back through the load balancer. There are 2 common ways to ensure that return traffic flows through the load balancer. You can:

- Use routing, and have the Edge Servers point to the load balancer as their default gateway.
- Enable SNAT on the load balancer, which rewrites the source IP of the connection to a local network address as the traffic passes through the load balancer. In this case, the Edge Servers will believe that a local client generated the connection and send the responses back to that local address.

So there are your two options, which I will refer to as Routing and SNATting. With Routing, your Edge Server will rely on its routing table to route the return traffic out through the load balancer. No obscuring of the source IP address will happen on the load balancer, but you will have to make sure your default gateway and routing tables are correct. With SNATting, you can ensure return traffic goes back through the load balancer without relying on the routing table to take care of this. The drawback to SNATting is that the load balancer will obscure the source IP of the packet as it passes through. I will explain below why the SNAT idea is less than ideal, primarily for A/V traffic.
Best Practice: You can SNAT traffic to the Web Conferencing and Access services on the Edge Server, but do not SNAT traffic to the A/V Edge Services. By obscuring the client's IP address when using SNAT, you limit the ability of the A/V services to connect clients directly to each other, and this is important when clients try to set up peer-to-peer communication, such as a phone call. When using SNAT, the A/V services will not see the client's true IP, so the likelihood of the Edge Server being able to orchestrate the 2 clients to communicate directly with each other is reduced to nil. You'll force the A/V services to utilize their fallback method, in which the P2P traffic will actually have to use the A/V server as a proxy between the 2 clients. Now this 'proxy' fallback mode will still happen from time to time even when you're not SNATting at the BIG-IP (for example, multiparty calls will always use 'proxy'), but when you can, it's best to minimize the times that users have to leverage this fallback method. So even though SNATting connections to the A/V Edge Service will seem to work, it is far from desirable from a network perspective! FYI - every load balanced service in a Lync environment (including Lync FEs, Directors, etc.) can be SNATted except for the A/V Edge Service.

Requirement 3: Certain connections will need to be load balanced to the Edge Services, while certain connections will need to be made directly to those Edge Services.

Best Practice: Make sure clients can connect to the Virtual IP(s) that are load balancing the Edge Services, as well as make sure that clients can connect directly to the Edge Servers themselves. Typically users will hit the load balancer on their first incoming connection and get load balanced, but if a user gets invited to a media session that has started on an Edge Server, the invite they receive will point them directly to that server.
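To make the SNAT guidance above concrete, here is a rough sketch of what the external-side virtual servers might look like in BIG-IP LTM tmsh configuration. This is illustrative only: the syntax shown is from LTM releases newer than the version this article was written against, and the virtual server names, pool names, and addresses are invented.

```
# Access and Web Conferencing Edge services: SNAT automap is acceptable
create ltm virtual lync_access_vs destination 203.0.113.10:443 ip-protocol tcp pool lync_access_pool source-address-translation { type automap }
create ltm virtual lync_webconf_vs destination 203.0.113.11:443 ip-protocol tcp pool lync_webconf_pool source-address-translation { type automap }

# A/V Edge service: no SNAT, so the servers see the client's true source IP
# and can broker direct peer-to-peer media between clients
create ltm virtual lync_av_vs destination 203.0.113.12:443 ip-protocol tcp pool lync_av_pool source-address-translation { type none }
```

Note that leaving SNAT off the A/V virtual server is exactly what forces the routing requirement from Requirement 2: the Edge Servers' default gateway must point back at the BIG-IP so A/V return traffic is not asymmetric.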
NAT awareness was built into Lync 2010 to help in environments in which Edge Servers are deployed behind NATs. By enabling the NAT awareness, Edge Servers will refer clients to their respective NAT address in order to route the users in correctly.

Do I need to use routable IPs on the external interface of my Edge Servers? Microsoft says you do, and I would recommend doing so if you can. I have worked on deployments where non-routable IPs are being used (leveraging NATs to allow direct access) and not run into any issues. Just be sure that the Edge Servers are aware of their NAT address.

Best Practice: Suggested Deployment - "DNAT in, SNAT out" on the Load Balancer

"DNAT in, SNAT out" was derived from discussions with a certain MSFT engineer who helped me build this guidance. I'd love to give him credit (he knows Lync networking better than anyone I have ever talked to!), but if I named this person, his/her phone would never stop ringing for architecture guidance! Back to the subject: if you keep to "DNAT in, SNAT out" for external-side Lync Edge traffic, your deployment will work! It sums it up very well!

So you're ready to architect your Edge Server deployment. Let's take all the information from above and build a deployment. Keep these things in mind…

External Side of the Edge Servers
- Plan for VIPs on your BIG-IP to load balance the 3 external services that your Edge Server provides (Access, Web Conferencing, A/V)
- Plan for direct (non-load balanced) access to your Edge Servers by external clients
- Plan a method to allow Edge Servers to make outbound connections (forwarding VIP or SNAT on BIG-IP)
- Point the Edge Server's default gateway to the Self IP of the BIG-IP
- Point the BIG-IP's default gateway to the router
- Do not SNAT traffic to the A/V Services on the Edge Servers

If you use non-routable IPs on the external interfaces of the Edge Servers, create a NAT on the BIG-IP for each Edge Server.
Make sure the Edge Servers are aware of these NAT addresses so they can hand them out to clients who need to connect directly to an Edge Server.

Internal Side of the Edge Servers
- Plan for VIPs on your BIG-IP to load balance ports 443, 3478, 5061, and 5062 on the internal interfaces of your Edge Servers
- Plan for direct (non-load balanced) access to your Edge Servers
- Make sure your Edge Servers have routes to the internal network(s)
- You can SNAT traffic to the internal interface of the Edge Servers

I'll leave you with an example of a fully supported configuration (i.e. using routable IP addresses all around). Keep in mind, this is not the only way to architect this, but if you have the available public IP address space, this will work. Wow… so much for a short post. I welcome any and all feedback, and I promise to update this post with new information as it comes in. I'll also augment this post with more details & deployments as I find time to write them up, so check back for updates. This may even end up as a guide some day!

Version 1.0 date 7/14/2011
Version 1.1 date 2/15/2011 - Fixed a few typos. Fixed some heinous formatting

Advanced Edge Client Installation for Windows–The Mysteries of Windows Installer Revealed
In many small to medium sized companies (and even some fairly large ones), when you need a new piece of software installed on your desktop, you call the IT guy, or put in a trouble ticket, and the IT guy shows up with a DVD or CD and installs your software. If you're lucky, he remote controls your PC and uses a shared network drive to install software to your PC. This works fine for an office of 10 – 50 people. But let's say you're a large multinational corporation with 50,000 or 100,000 desktops. Rolling out software to that number of desktops and laptops becomes a very costly investment if you are doing it manually. You'd need to clone Billy the desktop tech a whole lot of times to get enterprise-wide rollouts done in a timely fashion. Microsoft thought this too, and back in 1999-2000, they partnered with a company called Veritas, who had a product called WinInstall. They came up with a standard for deploying software that, with some revisions, is still in use today. They called it the Microsoft Installer; later it became known as Windows Installer. We call them MSI files generally, from their extension, .msi. You can think of the file as a flat-file database, very much like an MS Access file. There are a number of tables in it that contain the actual compressed files that are deployed to your PC, scripts that are used in the run time installation, and properties that are presented to the user in a GUI install. In addition to all this goodness, the MSI files actually do health checks on the application by using key files and components (think files, registry settings, COM objects, stuff like that), and if the installation becomes damaged somehow, either by the user or some other process overwriting a key piece, upon launch of the advertised shortcut, it will actually self-heal the application.
Conversely, if you need to remove a particular piece of software from the environment, Windows Installer allows for the orderly cleanup and removal of applications using the same database file. The neat thing is that if you have a little bit of know-how as to how it all works together, you can roll out the same software application to 50,000 PCs, and each install will have the same characteristics as all the others, because you set the parameters for the install. If you want to, you can install the applications in the background, right under the users' noses, with some command line switches, and all the while, they're none the wiser. It's one of those jobs in IT where if no one knows anything is going on, you're doing your job right. Or to steal a line from The Wizard of Oz, "Pay no attention to the man behind the curtain."

So, how does this affect me? Well, most of us reading this are network folks, and don't have a lot to do with enterprise desktop management. However, you may have a large APM install base at your company, and you need to know how to roll out the Edge Client to all your users' laptops, via Active Directory, SCCM, or some other enterprise desktop management platform. Or, maybe the enterprise desktop folks need a little guidance on how to roll out the Edge Client. Hopefully, this article will clear all that up.

So, let's start by looking at the BIG-IP Edge Client. When you download the BIG-IP Edge Client to your desktop, you notice something right away. The icon for the BIGIPEdgeClient.exe doesn't look like a normal application (.exe) file icon. It looks like this –

What that tells me is that this is no ordinary executable, but one that is a wrapper to call a Windows Installer file, which is very commonly used in applications that are put together using InstallShield. If you use a tool like 7-Zip or WinRAR, you can extract the contents of this file into a folder.
Here, we can see the contents – All of these components are necessary for the installation of the Edge client. If you are doing a large scale rollout of BIG-IP Edge Client, this is what you use as the source directory for your install. In the case of the Edge Client, the MSI file is a database, and instead of using the MSI file to store the file components, the MSI file uses the F5 VPN folder to place the contents of it on the end user’s PC. In order to customize and modify the install, we need to learn a few things. First, understand that the MSI file has a very important table within it that governs how it is installed. However, you don’t want to edit it directly. It can be done, but tinkering with a vendor’s MSI file can make it behave erratically, and makes for a support nightmare. In light of this, Microsoft has provided us a couple of ways to deal with this, and we are going to look at both. The table in the MSI file that governs how it is installed is the Property table. Microsoft has provided us a tool called Orca to not only examine the MSI, but also to create transforms. You can download Orca here - http://www.microsoft.com/en-us/download/details.aspx?id=3138 Orca is a very, very small part of the SDK, and can be found here - C:\Program Files\Microsoft SDKs\Windows\v7.0\Bin\Orca.msi, after the SDK is installed. Personally, I saved the orca.msi elsewhere, and uninstalled the rest of the SDK, because Orca is really all I needed. I then installed the Orca Windows Installer file by double clicking on it, and I have the tool I need. Then, when I browse back to the expanded archive, if I right click on the f5fpclients.msi file, I have a new option – Edit with Orca, as shown below - Choose Edit with Orca, and you can then explore inner workings of the MSI. This is a listing of all the tables within the MSI database file. In here, you can add, delete, modify anything in the MSI that you would need. 
You can store new custom procedures to kick off other processes, register the software, just about anything you want. Choose the Property table. This is where all the neat stuff is kept, and where you can change the way the application installs. Notice some properties are written in all caps, and others are in upper and lower case. The ones in all caps are public properties. The public properties in the Edge Client MSI are:

ARPNOMODIFY – default is TRUE - removes the option to modify the program from the Add/Remove Programs interface
ARPNOREPAIR – default is TRUE - removes the option to repair the program from the Add/Remove Programs interface
INSTALLLEVEL – default is 100 - the initial level at which features are selected "ON" for installation by default
ARPHELPLINK – default is http://askf5.com - the Add/Remove Programs interface displays this as a help link for the program
PROMPTROLLBACKCOST – default is P - the action the installer takes if rollback installation capabilities are enabled and there is insufficient disk space to complete the installation. P = prompt the user with a dialog asking whether to roll back or not.
ALLUSERS – default is 1 - configures the installation context of the package. 1 is a per-machine install, 0 is a per-user install. If the user lacks rights to do a per-machine install, if this property is set to 1, it will fall back to a per-user install.
ARPPRODUCTICON – default is icon.ico - specifies the icon used in the Add/Remove Programs interface
STARTAPPWITHWINDOWS – default is 0 - the Edge Client has to be started manually by the user. If set to 1, the Edge Client loads with Windows.

One interesting thing to note about public properties is that they can be set at package load through the command line interface of the msiexec installer.
This is useful if you only want to change public properties; you can set your command line execution in your enterprise desktop management system to set these properties while installing the F5-supplied MSI. There are some examples of how to do this at the end of the article.

The properties with upper and lower case letters are private properties. These are properties that are protected and therefore cannot be set at the command line at runtime. Some of the more interesting ones in the Edge Client MSI are:

ProductCode – this is a GUID used to identify this particular MSI. This is very helpful when uninstalling a program from your PCs: running msiexec.exe /x {GUID} will uninstall the application from the desktops. For example, if you had a bunch of older Edge Clients you wanted off the PCs before you rolled out the new version, you could get the product code from the old MSI, and run msiexec with the /x option to do the cleanup.
IAgree – this is the checkbox that you click to accept the End User License Agreement (EULA). No is unchecked, Yes is checked.
InstallMode – tells the installer how to install the application. Default is Typical, which includes all the usual options. Some other choices are Custom or Complete.

Since these are private properties, the only means by which we modify them is a transform file that is saved and included in the source directory, and called when installing the MSI. Also, you can modify public properties via a transform; just be aware that public properties set at the command line generally take precedence.

Let's say we want to modify our Edge Client MSI to start with Windows, and we want to make this happen via a transform. Since we have Orca open, click the Transform tab, and choose New Transform. Then, make the changes you need. In our case, we go down to the STARTAPPWITHWINDOWS property, and change that value to 1. Then go back to the Transform tab, and choose Generate Transform.
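Once you have a transform saved, large-scale rollouts usually mean composing the msiexec command line from your deployment tooling rather than typing it by hand. As a rough, hypothetical sketch (not an F5-supplied tool; the file names and properties are the ones discussed in this article), a small helper could build the command like this:

```python
import subprocess

def msiexec_install(msi_path, transform=None, properties=None, quiet=True):
    """Build an msiexec install command line as a list of arguments.

    `properties` holds public properties (e.g. STARTAPPWITHWINDOWS);
    private properties must be set via the transform instead.
    """
    cmd = ["msiexec.exe", "/i", msi_path]
    if transform:
        cmd.append(f"TRANSFORMS={transform}")
    for name, value in (properties or {}).items():
        cmd.append(f'{name}="{value}"')
    if quiet:
        cmd.append("/qn!")  # quiet, with no option to cancel
    return cmd

# Quiet install that also sets the client to start with Windows
cmd = msiexec_install("f5fpclients.msi",
                      properties={"STARTAPPWITHWINDOWS": "1"})
print(" ".join(cmd))
# On a Windows target you would then run it:
# subprocess.run(cmd, check=True)
```

Keeping the command construction in one place like this makes it easy to swap in a transform, flip properties per department, or log exactly what was pushed to each machine.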
We can then save our new transform file to be used later. In our case, we save it as MyTransform.mst. Congratulations. You have just created your first transform. We can then call this from the command line using the following example:

msiexec.exe /i f5fpclients.msi TRANSFORMS=MyTransform.mst /qn!

This installs the f5fpclients MSI file with the MyTransform.mst transform quietly, with no option to cancel. To set the STARTAPPWITHWINDOWS public property in the command line, we could use this example:

msiexec.exe /i f5fpclients.msi STARTAPPWITHWINDOWS="1" /qn!

This installs the f5fpclients MSI file with the STARTAPPWITHWINDOWS property set to 1, quietly with no option to cancel.

Here are some of the more common MSIEXEC command line options you might find useful:

Install Options
/i - normal installation
/a - administrative install
/x - uninstall the package

User Display Options
/quiet - quiet mode (there is no user interaction)
/passive - unattended mode (the installation shows only a progress bar)
/q - set the UI level: n - no UI, b - basic UI, r - reduced UI, f - full UI

Restart Options
/norestart - the machine will not be restarted after the installation is complete
/promptrestart - the user will be prompted if a reboot is required
/forcerestart - the machine will be restarted after the installation is complete

If you want to learn more about MSI files and how to modify them, here are a few links to get you started:

MSIEXEC command line - http://technet.microsoft.com/en-us/library/cc759262%28WS.10%29.aspx
ItNinja (formerly AppDeploy) - http://www.itninja.com/ - large blog site, lots of MSI resources
InstallSite - http://www.installsite.org - discussion community and Windows Installer info

BIG-IP Edge Client 2.0.2 for Android
Earlier this week F5 released our BIG-IP Edge Client for Android with support for the new Amazon Kindle Fire HD. You can grab it off Amazon instantly for your Android device. By supporting BIG-IP Edge Client on Kindle Fire products, F5 is helping businesses secure personal devices connecting to the corporate network, and helping end users be more productive so it’s perfect for BYOD deployments. The BIG-IP® Edge Client™ for all Android 4.x (Ice Cream Sandwich) or later devices secures and accelerates mobile device access to enterprise networks and applications using SSL VPN and optimization technologies. Access is provided as part of an enterprise deployment of F5 BIG-IP® Access Policy Manager™, Edge Gateway™, or FirePass™ SSL-VPN solutions. BIG-IP® Edge Client™ for all Android 4.x (Ice Cream Sandwich) Devices Features: Provides accelerated mobile access when used with F5 BIG-IP® Edge Gateway Automatically roams between networks to stay connected on the go Full Layer 3 network access to all your enterprise applications and files Supports multi-factor authentication with client certificate You can use a custom URL scheme to create Edge Client configurations, start and stop Edge Client BEFORE YOU DOWNLOAD OR USE THIS APPLICATION YOU MUST AGREE TO THE EULA HERE: http://www.f5.com/apps/android-help-portal/eula.html BEFORE YOU CONTACT F5 SUPPORT, PLEASE SEE: http://support.f5.com/kb/en-us/solutions/public/2000/600/sol2633.html If you have an iOS device, you can get the F5 BIG-IP Edge Client for Apple iOS which supports the iPhone, iPad and iPod Touch. We are also working on a Windows 8 client which will be ready for the Win8 general availability. 
ps

Resources

F5 BIG-IP Edge Client
Samsung F5 BIG-IP Edge Client
Rooted F5 BIG-IP Edge Client
F5 BIG-IP Edge Portal for Apple iOS
F5 BIG-IP Edge Client for Apple iOS
F5 BIG-IP Edge apps for Android
Securing iPhone and iPad Access to Corporate Web Applications – F5 Technical Brief
Audio Tech Brief - Secure iPhone Access to Corporate Web Applications
iDo Declare: iPhone with BIG-IP

Technorati Tags: F5, infrastructure 2.0, integration, cloud connect, Pete Silva, security, business, education, technology, application delivery, ipad, cloud, context-aware, infrastructure 2.0, iPhone, web, internet, security, hardware, audio, whitepaper, apple, iTunes

BIG-IP Edge Client v1.0.4 for iOS
If you are running the BIG-IP Edge Client on your iPhone, iPod or iPad, you may have gotten an AppStore alert for an update. If not, I just wanted to let you know that version 1.0.4 of the iOS Edge Client is available at the AppStore.

The main updates in v1.0.4:

IPv6 Support
Localization
New iPad Retina Graphics

The BIG-IP Edge Client application from F5 Networks secures and accelerates mobile device access to enterprise networks and applications using SSL VPN and optimization technologies. Access is provided as part of an enterprise deployment of F5 BIG-IP Access Policy Manager, Edge Gateway, or FirePass SSL-VPN solutions.

BIG-IP Edge Client for iOS Features:

Provides accelerated mobile access when used with F5 BIG-IP Edge Gateway.
Automatically roams between networks to stay connected on the go.
Full Layer 3 network access to all your enterprise applications and files.

I updated mine today without a problem.

ps

New iOS Edge Client
If you are running the BIG-IP Edge Client on your iPhone, iPod or iPad, you may have gotten an AppStore alert for an update. If not, I just wanted to let you know that version 1.0.3 of the iOS Edge Client is available at the AppStore.

The main updates in v1.0.3:

URI scheme enhancement allows passing configuration data to the client upon access. For example, you could have a link on the WebTop that invokes the client and forces web logon mode.
Other bug fixes.

The BIG-IP Edge Client application from F5 Networks secures and accelerates mobile device access to enterprise networks and applications using SSL VPN and optimization technologies. Access is provided as part of an enterprise deployment of F5 BIG-IP Access Policy Manager, Edge Gateway, or FirePass SSL-VPN solutions.

BIG-IP Edge Client for iOS Features:

Provides accelerated mobile access when used with F5 BIG-IP Edge Gateway.
Automatically roams between networks to stay connected on the go.
Full Layer 3 network access to all your enterprise applications and files.

I loaded it yesterday on my devices without a hitch.

ps

Related:

iDo Declare: iPhone with BIG-IP
F5 Announces Two BIG-IP Apps Now Available at the App Store
F5 BIG-IP Edge Client App
F5 BIG-IP Edge Portal App
F5 BIG-IP Edge Client Users Guide
iTunes App Store
Securing iPhone and iPad Access to Corporate Web Applications – F5 Technical Brief
Audio Tech Brief - Secure iPhone Access to Corporate Web Applications

Technorati Tags: F5, infrastructure 2.0, integration, cloud connect, Pete Silva, security, business, education, technology, application delivery, ipad, cloud, context-aware, infrastructure 2.0, iPhone, web, internet, security, hardware, audio, whitepaper, apple, iTunes

Blitzkrieg and VDI Edge Protection.
By now, everyone even vaguely familiar with information security knows the military maxim of blitzkrieg – burst through the hardened defense at a single point and then rush pell-mell to the rear, where the soft underbelly of any static army lies. It is a good military strategy, provided you have the resources to break through the defenses and follow up with a rapid advance into the rear areas. While there are variants of this plan, and a lot of discussion about how and when it is strategically worth the risk, historically speaking it has been a smashing success. Germany did it to France and the Low Countries in 1940 and to Russia in 1941, Russia returned the favor in 1943, and the western Allies used it successfully at Normandy in 1944. Sherman's March to the Sea in the American Civil War was just such a ploy (though Sherman was more willing to hit civilian targets than a 20th century general would have been, it was still a rush to the soft rear), and the first Gulf War had the coalition forces doing much the same. These are just the large-scale instances of this theory in operation, but you have to admit it works. The risk is high though, as the Germans found out at Prokhorovka, and that alone makes generals cautious that they have the resources and intelligence reports to burst through in the first place.

The difference between the military maxim and the theory that information security should follow it is an important one. In military theory, you only harden behind the lines if there is a high likelihood that the enemy forces will find a weak spot in your lines and exploit it to get at the rear areas. The conundrum for the defensive leader finding themselves in such a situation is that every combat soldier placed to the rear is one less combat soldier on the front, increasing the likelihood that there will be a breakthrough. In information security, the problem is that the resources of the attacker are theoretically unlimited.
Unless they are apprehended by the authorities in their home country, there is no penalty for attacking over and over and over. The limiting factor for the attacker – that they might smash themselves upon their opponent – does not exist at this time in Internet parlance. If an attack fails, that merely means the attacker marshals the same exact set of resources and tries again. The defense, on the other hand, still has a limited number of resources (dollars and staff hours) to defend themselves with. And they must make the most of them.

Defense in depth is an absolute necessity, simply because the attacker can continue ad infinitum to try attacking, and the number of attackers is unknown but large. That leaves a heavy burden on information security staff, who have settled into the glum belief that it is “not if, but when” they will be defeated. While the ultimate solution to this problem rests outside the purview of corporate security, in the interim it is necessary to do what can be done to simplify and strengthen the fortifications that stand between ne'er-do-wells and corporate resources.

Just to add fuel to the fire, this is all happening at the same time that organizations are facing increasing pressure to expose more and more of their internal architecture to the Internet so that users can access their applications from essentially anywhere. To put it into military terms: there are numerous hostile entities, an ever-increasing front length, and a static number of defenders and resources. That is not a recipe for success in most scenarios.

So what is the serious information security professional to do? Well, the first steps have already been taken. Defense in depth is just a fact that most organizations live with, down to firewalls between departments for some organizations. Anti-virus tools and encryption are the norm, not the exception, and external access is generally protected by a VPN.
But new technologies bring new challenges, or more frequently turn old, low-likelihood challenges into higher-priority issues. As we deploy VDI – and we are deploying VDI at a faster rate than I'd expected – edge security becomes more and more of a concern. If you expose VDI desktops to the world so that your workers can log in at any hour and get some work done, or so that an employee who's sick enough to stay home to avoid infecting others but well enough to work can do so, you will have to find a way to lock that interface to the world down so that users can get in but hackers cannot. This is more important than for most interfaces, because this interface sits in front of user desktops, and desktops generally have more access than a server.

While there are a variety of ways to attack such an inlet, DDoS – to keep employees from working remotely – and Trojans are the two most likely to be successful. What you'll want on this inlet is a way to check that the client – be it PC or iPad or whatever – complies with a security policy that includes at least rudimentary virus checking (since the client device is outside your network and possibly not even a corporate resource), and a way to resist DDoS attacks. A network-level tool that shunts detected DDoS attacks off to neverland, like F5's own BIG-IP, is going to be the best solution, since traditional firewalls are aimed at detecting more traditional attacks and can become victims of a DDoS themselves. Regardless of what you choose to protect against this type of attack, it should be something you can guarantee will stay standing when hit with thousands of dropped connections a second. And you'll want to be able to apply more general corporate security policies. That's a tough call in a VDI environment.
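To make the "check the client before letting it in" idea above concrete, here is a toy sketch of a pre-access posture check. Every attribute name and threshold below is invented for illustration; in practice a product such as BIG-IP APM performs endpoint inspection with its own policy engine rather than hand-rolled code like this:

```python
# Toy posture-check sketch. Attribute names and thresholds are invented for
# illustration; real endpoint inspection is done by the access product itself.
REQUIRED_CHECKS = {
    "antivirus_running": lambda c: c.get("antivirus_running") is True,
    "av_definitions_fresh": lambda c: c.get("av_definition_age_days", 999) <= 7,
    "os_patched": lambda c: c.get("os_patched") is True,
}

def posture_check(client):
    """Return (allowed, failures) for a dict of reported client attributes."""
    failures = [name for name, check in REQUIRED_CHECKS.items() if not check(client)]
    return (len(failures) == 0, failures)

healthy = {"antivirus_running": True, "av_definition_age_days": 2, "os_patched": True}
stale = {"antivirus_running": True, "av_definition_age_days": 30, "os_patched": True}

print(posture_check(healthy))  # (True, [])
print(posture_check(stale))    # (False, ['av_definitions_fresh'])
```

The point of the sketch is the shape of the decision: the gateway evaluates attributes reported by a device it does not own, and denies or remediates before any connection reaches the desktops behind it.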
While a product like BIG-IP can be set up to use your corporate security policies for access and authentication purposes, it is difficult – both legally and technologically – to force corporate security policy on employee-owned devices. Legally you can limit access based upon the status of the machine requesting it, the user name, and the geographic location, but you can't ensure that the device meets the same stringent policies you would require on your internal network. And that's a problem, because VDI is your internal network. Time will tell how large this threat looms, but I wouldn't ignore it, since we know it's a threat.

Legally you can ask employees to agree to be bound by corporate security policy when accessing the corporate network from a home machine, but I honestly don't know of anyone doing that today – and I am not a lawyer, so maybe there's a good legal reason I haven't heard of anyone doing just that.

In the end, allowing some or all users to access their desktop remotely is a huge benefit, but be careful out there: the number of attackers isn't going down, and the period while we work all of this out is their opportunity to take advantage of weaknesses. So protect yourself. I'd recommend F5 products, but there are other ways to resist the hordes should they come knocking at your public VDI interface. Whatever you choose, just make certain it is implemented well.