Secure AI RAG using F5 Distributed Cloud in Red Hat OpenShift AI and NetApp ONTAP Environment
Introduction

Retrieval Augmented Generation (RAG) is a powerful technique that allows Large Language Models (LLMs) to access information beyond their training data. The “R” in RAG refers to the data retrieval process, where the system retrieves relevant information from an external knowledge base based on the input query. Next, the “A” in RAG represents augmentation, or context enrichment, as the system combines the retrieved information and the input query to create a more comprehensive prompt for the LLM. Lastly, the “G” in RAG stands for response generation, where the LLM produces a more contextually accurate response based on the augmented prompt.

RAG is becoming increasingly popular in enterprise AI applications due to its ability to provide more accurate and contextually relevant responses to a wide range of queries. However, deploying RAG can introduce complexity because its components live in different environments. For instance, the datastore or corpus, which is a collection of data, is typically kept on-premise for enhanced control over data access and management, driven by data security, governance, and regulatory compliance requirements within the enterprise. Meanwhile, inference services are often deployed in the cloud for their scalability and cost-effectiveness.

In this article, we will discuss how F5 Distributed Cloud can simplify this complexity and securely connect all RAG components for enterprise RAG-enabled AI application deployments. Specifically, we will focus on Network Connect, App Connect, and Web App & API Protection, and demonstrate how these F5 Distributed Cloud features can be leveraged to secure RAG in collaboration with Red Hat OpenShift AI and NetApp ONTAP.

Example Topology

F5 Distributed Cloud Network Connect

F5 Distributed Cloud Network Connect enables seamless and secure network connectivity across hybrid and multicloud environments.
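The retrieve-augment-generate flow described in the introduction can be sketched in a few lines. This is a minimal, illustrative sketch only: the keyword-overlap retriever and the stubbed generate() call are made-up stand-ins for the vector database and the LLM inference endpoint used in the actual deployment.

```python
import re

def tokens(text):
    """Lower-case word tokens; a stand-in for real text processing."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, corpus, top_k=1):
    """Naive keyword-overlap retrieval standing in for a vector search."""
    return sorted(corpus,
                  key=lambda doc: len(tokens(query) & tokens(doc)),
                  reverse=True)[:top_k]

def augment(query, passages):
    """Combine the retrieved context and the query into one prompt."""
    context = "\n".join(passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

def generate(prompt):
    """Placeholder for a call to the LLM inference endpoint."""
    return f"<LLM response for a {len(prompt)}-character augmented prompt>"

corpus = [
    "MTV is the Migration Toolkit for Virtualization from Red Hat.",
    "ONTAP is NetApp's data management software.",
]
question = "What is MTV?"
prompt = augment(question, retrieve(question, corpus))
print(generate(prompt))
```

The point of the sketch is the shape of the flow: retrieval narrows the corpus, augmentation packages context and query into one prompt, and generation is a single call against that prompt.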
By deploying an F5 Distributed Cloud Customer Edge (CE) at each site, we can easily establish encrypted site-to-site connectivity across on-premises, multi-cloud, and edge environments. Jensen Huang, CEO of NVIDIA, has said that "Nearly half of the files in the world are stored on-prem on NetApp." In our example, the enterprise data stores are deployed on NetApp ONTAP in a data center in Seattle managed by Organization B (Segment-B: s-gorman-production-segment), while the RAG services, including an embedding Large Language Model (LLM) and a vector database, are deployed on-premise on a Red Hat OpenShift cluster in a data center in California managed by Organization A (Segment-A: jy-ocp). By leveraging F5 Distributed Cloud Network Connect, we can quickly and easily establish a secure connection for seamless and efficient data transfer from the enterprise data stores to the RAG services between these two segments only: F5 Distributed Cloud CE can be deployed as a virtual machine (VM) or as a pod on a Red Hat OpenShift cluster. In California, we deploy the CE as a VM using Red Hat OpenShift Virtualization — click here to find out more on Deploying F5 Distributed Cloud Customer Edge in Red Hat OpenShift Virtualization: Segment-A: jy-ocp on CE in California and Segment-B: s-gorman-production-segment on CE in Seattle: Simply and securely connect Segment-A: jy-ocp and Segment-B: s-gorman-production-segment only, using the Segment Connector: NetApp ONTAP in Seattle has a LUN named “tbd-RAG”, which serves as the enterprise data store in our demo setup and contains a collection of data. After these two data centers are connected using F5 XC Network Connect, a secure, encrypted end-to-end connection is established between them.
In our example, “test-ai-tbd” is in the data center in California, where it hosts the RAG services, including the embedding Large Language Model (LLM) and the vector database, and it can now successfully connect to the enterprise data stores on NetApp ONTAP in the data center in Seattle:

F5 Distributed Cloud App Connect

F5 Distributed Cloud App Connect securely connects and delivers distributed applications and services across hybrid and multicloud environments. By utilizing F5 Distributed Cloud App Connect, we can direct the inference traffic through F5 Distributed Cloud's security layers to safeguard our inference endpoints. Red Hat OpenShift on Amazon Web Services (ROSA) is a fully managed service that allows users to develop, run, and scale applications in a native AWS environment. We can host our inference service on ROSA so that we can leverage the scalability, cost-effectiveness, and numerous benefits of AWS’s managed infrastructure services. For instance, we can host our inference service on ROSA by deploying Ollama with multiple AI/ML models: Or, we can enable Model Serving on Red Hat OpenShift AI (RHOAI). Red Hat OpenShift AI (RHOAI) is a flexible and scalable AI/ML platform that builds on the capabilities of Red Hat OpenShift and facilitates collaboration among data scientists, engineers, and app developers. This platform allows them to build, train, deploy, serve, test, and monitor AI/ML models and applications either on-premise or in the cloud, fostering efficient innovation within organizations. In our example, we use Red Hat OpenShift AI (RHOAI) Model Serving on ROSA for our inference service: Once the inference service is deployed on ROSA, we can utilize F5 Distributed Cloud to secure our inference endpoint by steering the inference traffic through F5 Distributed Cloud's security layers, which offer an extensive suite of features designed specifically for the security of modern AI/ML inference endpoints.
This setup allows us to scrutinize requests, implement policies for detected threats, and protect sensitive datasets before they reach the inference service hosted within ROSA. In our example, we set up an F5 Distributed Cloud HTTP Load Balancer (rhoai-llm-serving.f5-demo.com) and advertise it to the CE in the data center in California only: We can now reach our Red Hat OpenShift AI (RHOAI) inference endpoint through F5 Distributed Cloud:

F5 Distributed Cloud Web App & API Protection

F5 Distributed Cloud Web App & API Protection provides a comprehensive set of security features, along with uniform observability and policy enforcement, to protect apps and APIs across hybrid and multicloud environments. We utilize F5 Distributed Cloud App Connect to steer the inference traffic through F5 Distributed Cloud to secure our inference endpoint. In our example, we protect our Red Hat OpenShift AI (RHOAI) inference endpoint by rate-limiting access, so that we can ensure no single client exhausts the inference service: A “Too Many Requests” (HTTP 429) response is returned when a single client repeatedly requests access to the inference service at a rate higher than the configured threshold: This is just one of the many security features available to protect our inference service. Click here to find out more on Securing Model Serving in Red Hat OpenShift AI (on ROSA) with F5 Distributed Cloud API Security.

Demonstration

In a real-world scenario, the front-end application could be hosted in the cloud, hosted at the edge, or served through F5 Distributed Cloud, offering flexible alternatives for efficient application delivery based on user preferences and specific needs. To illustrate how all the discussed components work seamlessly together, we simplify our example by deploying Open WebUI as the front-end application on the Red Hat OpenShift cluster in the data center in California, which also hosts the RAG services.
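A client of a rate-limited inference endpoint should anticipate the "Too Many Requests" behavior described above. The sketch below is illustrative only: StubEndpoint is a made-up stand-in for the real HTTPS endpoint behind the F5 Distributed Cloud load balancer, and the retry loop simply backs off exponentially on HTTP 429.

```python
import time

class StubEndpoint:
    """Rejects the first `rejections` calls with 429, then succeeds."""
    def __init__(self, rejections):
        self.rejections = rejections

    def request(self):
        if self.rejections > 0:
            self.rejections -= 1
            return 429, "Too Many Requests"
        return 200, "inference result"

def call_with_backoff(endpoint, max_retries=5, base_delay=0.01):
    """Retry on 429 with exponentially increasing waits."""
    delay = base_delay
    for _ in range(max_retries):
        status, body = endpoint.request()
        if status != 429:
            return status, body
        time.sleep(delay)   # wait before retrying
        delay *= 2          # exponential backoff
    return status, body

status, body = call_with_backoff(StubEndpoint(rejections=2))
print(status, body)  # 200 inference result
```

With a threshold-based rate limiter in front of the endpoint, a client that backs off like this recovers on its own once its request rate drops below the configured limit.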
While a DPU or GPU could be used for improved performance, our setup utilizes a CPU for inferencing tasks. We connect our app to our enterprise data stores deployed on NetApp ONTAP in the data center in Seattle using F5 Distributed Cloud Network Connect, where we have a copy of "Chapter 1. About the Migration Toolkit for Virtualization" from Red Hat. These documents are processed and saved to the vector DB: Our embedding model is Sentence-Transformers/all-MiniLM-L6-v2, and here is our RAG template: Instead of connecting to the inference endpoint on Red Hat OpenShift AI (RHOAI) on ROSA directly, we connect to the F5 Distributed Cloud HTTP Load Balancer (rhoai-llm-serving.f5-demo.com) from F5 Distributed Cloud App Connect: Previously, we asked, "What is MTV?" and never received a response related to the Red Hat Migration Toolkit for Virtualization: Now, let's try asking the same question again with RAG services enabled: We finally received the response we had anticipated. Next, we use F5 Distributed Cloud Web App & API Protection to safeguard our Red Hat OpenShift AI (RHOAI) inference endpoint on ROSA by rate-limiting access, thus preventing a single client from exhausting the inference service: As expected, we received "Too Many Requests" in the response on our app upon requesting the inference service at a rate greater than the set threshold: With F5 Distributed Cloud's real-time observability and security analytics from the F5 Distributed Cloud Console, we can proactively monitor for potential threats. For example, if necessary, we can block a client from accessing the inference service by adding it to the Blocked Clients List: As expected, this specific client is now unable to access the inference service:

Summary

Deploying and securing RAG for enterprise RAG-enabled AI applications in a multi-vendor, hybrid, and multi-cloud environment can present complex challenges.
In collaboration with Red Hat OpenShift AI (RHOAI) and NetApp ONTAP, F5 Distributed Cloud provides an effortless solution that secures RAG components seamlessly for enterprise RAG-enabled AI applications.

F5 Friday: Enhancing FlexPod with F5
#VDI #cloud #virtualization Black-box style infrastructure is good, but often fails to include application delivery components. F5 resolves that issue for NetApp FlexPod. The best thing about the application delivery tier (load balancing, acceleration, remote access) is that it spans both networking and application demesnes. The worst thing about the application delivery tier (load balancing, acceleration, remote access) is that it spans both networking and application demesnes. The reality of application delivery is that it stands with one foot firmly in the upper layers of the stack and the other firmly in the lower layers of the stack, which means it’s often left out of infrastructure architectures merely because folks don’t know which box it should go in. Thus, when “black-box” style infrastructure architecture solutions like NetApp’s FlexPod arrive, they often fail to include any component that doesn’t firmly fit into one of three neat little boxes: storage, network, or server (compute). FlexPod isn’t the only such offering, and I suspect we’ll continue to see more “architecture in a rack” solutions in the future as partnerships are solidified and solution providers continue to expand their understanding of what’s required to support a dynamic data center. FlexPod is a great example both of an “architecture in a rack” supporting the notion of a dynamic data center and of the reality that application delivery components are rarely included. “FlexPod™, jointly developed by NetApp and Cisco, is a flexible infrastructure platform composed of pre-sized storage, networking, and server components.
It’s designed to ease your IT transformation from virtualization to cloud computing with maximum efficiency and minimal risk.” -- NetApp FlexPod Data Sheet

NetApp has done a great job of focusing on the core infrastructure, but it has also gone the distance and tested FlexPod to ensure compatibility with application deployments across a variety of hypervisors, operating systems and applications, including:

VMware® View and vSphere™
Citrix XenDesktop
Red Hat Enterprise Linux® (RHEL)
Oracle®
SAP®
Microsoft® Exchange, SQL Server® and SharePoint®
Microsoft Private Cloud built on FlexPod

What I love about this particular list is that it parallels so nicely the tested and fully validated solutions from F5 for delivering all these solutions:

Citrix XenDesktop
VMware View and vSphere
Oracle
SAP
Microsoft® Exchange, SQL Server® and SharePoint®

That means that providing a variety of application delivery services for these applications – secure remote access, load balancing, acceleration and optimization – should be a breeze for organizations to implement. It should also be a requirement, at least in terms of load balancing and optimization services. If FlexPod makes it easier to dynamically manage resources supporting these applications, then adding an F5 application delivery tier to the mix will ensure those resources and the user experience are optimized.
SERVERS should SERVE

While FlexPod provides the necessary storage, compute, and layer 2 networking components, critical application deployments are enhanced by F5 BIG-IP solutions for several reasons:

Increased Capacity: Offloads CPU-intensive processes from virtual servers, freeing up resources and increasing VM density and application capacity.

Improved Performance: Accelerates the end-user experience using adaptive compression and connection pooling technologies.

Enables Transparent and Rapid Scalability: New virtual server instances hosted in FlexPod can be added to and removed from BIG-IP Local Traffic Manager (LTM) virtual pools to ensure seamless elasticity.

Enables Automated Disaster Recovery: F5 BIG-IP Global Traffic Manager (GTM) provides DNS global server load balancing services to automate disaster recovery or dynamic redirection of user requests based on location.

Accelerated Replication Traffic: BIG-IP WAN Optimization Module (WOM) can improve the performance of high-latency or packet-loss-prone WAN links. NetApp replication technology (SnapMirror) will see substantial benefit when customers add BIG-IP WOM to enhance WAN performance.

Bonus: Operational Consistency: Because BIG-IP is an application delivery platform, it allows the deployment of a variety of application delivery services on a single, unified platform with a consistent operational view of all application delivery services. That extends to other BIG-IP solutions, such as BIG-IP Access Policy Manager (APM) for providing unified authentication to network and application resources across remote, LAN, and wireless access. Operational consistency is one of the benefits a platform-based approach brings to the table and is increasingly essential to ensuring that the cost-saving benefits of cloud and virtualization are not lost when disparate operational and management systems are foisted upon IT.

FlexPod only provides certified components for storage, compute and layer 2 networking.
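The scalability point above can be modeled conceptually. The sketch below is not the BIG-IP LTM configuration interface; it is a toy round-robin pool, with made-up addresses, showing the behavior of members joining and leaving as VM instances are provisioned and retired.

```python
import itertools

class Pool:
    """Toy load-balancing pool with round-robin member selection."""
    def __init__(self):
        self.members = []
        self._rr = None

    def add_member(self, address):
        """A newly provisioned VM instance joins the pool."""
        self.members.append(address)
        self._rr = itertools.cycle(self.members)

    def remove_member(self, address):
        """A decommissioned instance leaves the pool."""
        self.members.remove(address)
        self._rr = itertools.cycle(self.members) if self.members else None

    def pick(self):
        """Round-robin selection across current members."""
        return next(self._rr)

pool = Pool()
pool.add_member("10.0.0.11")
pool.add_member("10.0.0.12")   # scaling out: a new instance joins
picks = [pool.pick() for _ in range(4)]
print(picks)
```

The elasticity benefit is exactly this decoupling: clients address the pool, not individual servers, so membership can change without any client-visible reconfiguration.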
Most enterprise application deployments require application delivery services, whether for load balancing, security, or optimization, and even those that do not still realize significant benefits when deploying such services. Marrying F5 application delivery services with a NetApp FlexPod solution will yield significant benefits in terms of resource utilization and cost reductions, and will address critical components of operational risk without introducing additional burdens on already overwhelmed IT staff.

Related blogs & articles:
Operational Risk Comprises More Than Just Security
The Future of Cloud: Infrastructure as a Platform
At the Intersection of Cloud and Control…
The Pythagorean Theorem of Operational Risk
The Epic Failure of Stand-Alone WAN Optimization
Mature Security Organizations Align Security with Service Delivery
F5 Friday: Doing VDI, Only Better

F5 Friday: NetApp SnapVault With BIG-IP WOM
Because ‘big data’ isn’t just a problem for data at rest; it’s a problem for data being transferred. Remember when we talked about operational risk comprising more than security? One of the three core components of operational risk is availability, which is defined differently based not only on the vertical industry you serve but also on the business goals of the application. This includes disaster recovery goals, among which off-site backups are often used as a means to address the availability of data for critical applications in the event of a disaster. Data grows, it rarely shrinks, and operational tasks involving the migration of data – whether incremental or full backups – to secondary and even tertiary sites are critical to the successful “failover” of an organization from one site to another, as well as to the ability to restore data should something, heaven forbid, happen to the source. These backups are often moved across WAN connections to secondary data centers or off-site services. But the growth of data is not being mirrored by growth in connectivity throughput and speeds, causing backup windows to grow to unacceptable intervals of time.

#GartnerDC Major IT Trend #2 is: 'Big Data - The Elephant in the Room'. Growth 800% over next 5 years - w/80% unstructured. Tiering critical -- @ZimmerHDS (Harry Zimmer)

As a means to combat the increasing time required to transfer big data across WAN connections, it becomes necessary to either increase the amount of available bandwidth or decrease the size of the data. The former is an expensive proposition, especially considering the benefits are only seen periodically, and the latter is a challenge unto itself. Data growth is not something that can be halted or slowed by operational needs, so it’s necessary to find a means to reduce the size of the data in transit, only. That’s where BIG-IP WOM (WAN Optimization Module) comes into play.
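The payoff of shrinking data in transit is easy to demonstrate with a stock compressor. Here zlib merely stands in for the adaptive compression a WAN optimizer applies on the wire, and the highly repetitive payload is fabricated for illustration; real backup streams vary, but they are often redundant enough for large reductions.

```python
import zlib

# Repetitive, backup-like data: 4096 copies of one small record.
payload = b"customer-record:ACTIVE;" * 4096
compressed = zlib.compress(payload, 6)   # level 6: default-ish tradeoff

ratio = len(payload) / len(compressed)
print(f"{len(payload)} bytes -> {len(compressed)} bytes "
      f"({ratio:.0f}x smaller)")
```

Every byte removed before transmission is a byte that never contends for the WAN link, which is why data reduction shortens backup windows without touching the circuit itself.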
In conjunction with NetApp SnapVault, BIG-IP WOM can dramatically impact the performance of WAN connections, decreasing the time required for backup operations and ensuring that such operationally critical tasks complete as expected. As is often the case when we’re talking storage or WAN optimization, Don has more details on our latest solution for NetApp SnapVault. Happy Backups! NetApp SnapVault is an optimized disk-to-disk backup system that copies changed blocks from a source file system to a target file system. Since the backup is a copy of the source file system, restore operations are immensely simplified compared to traditional disk-to-tape backup models. Only blocks that have changed since the last update are copied over the wire, making it very efficient at local backups. When the latencies and packet loss of a WAN connection are introduced, however, SnapVault suffers just the same as any other WAN application does and can get backed up, making it difficult to meet your Recovery Point Objectives and Recovery Time Objectives. While SnapVault specializes in keeping a near-replica of your chosen file system, if that chosen file system is remote, it might just need a little help. Enter BIG-IP WAN Optimization Module (WOM), a compression/dedupe/encryption/TCP optimization add-on for BIG-IP LTM that improves WAN communications and in many cases makes SnapVault over the WAN perform like SnapVault over the LAN. To achieve this, a BIG-IP WOM device is placed on the WAN links of both the source and target datacenters, a secure tunnel is created between the two, and data is transferred over that secure tunnel. With Symmetric Adaptive Compression and Symmetric Deduplication, BIG-IP WOM can achieve enormous data transfer reductions while maintaining your data integrity. Add in TCP optimizations to improve the reliability of your WAN link and reduce the overhead of TCP in error-prone environments, and you’ve got a massive performance booster that does plenty for SnapVault.
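The changed-block idea behind SnapVault's efficiency can be sketched simply: hash each fixed-size block of the current snapshot and transfer only the blocks whose hashes differ from the previous one. This is a conceptual illustration, not SnapVault's actual implementation; real systems use far larger blocks and on-disk metadata rather than rehashing everything.

```python
import hashlib

BLOCK = 4  # tiny block size for illustration; real systems use 4 KB or more

def block_hashes(data):
    """SHA-256 digest of each fixed-size block."""
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def changed_blocks(old, new):
    """Indices of blocks that differ between two snapshots."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or old_h[i] != h]

previous = b"AAAABBBBCCCCDDDD"
current  = b"AAAAXXXXCCCCDDDD"   # only the second block changed
print(changed_blocks(previous, current))  # [1]
```

Only block 1 would cross the wire on the next update; the other three blocks are already present on the target.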
In fact, our testing (detailed in this solution profile) showed a 59x improvement over standard SnapVault installations. BIG-IP WOM also supports Rate Shaping, giving you the ability to assign priority to your SnapVault backups and ensure that they receive enough bandwidth to stay up-to-date. You can find out more about SnapVault on the NetApp website, and more about the F5 BIG-IP WAN Optimization Module on F5’s website. While the results for SnapVault are astounding, BIG-IP WOM has been tested with a wide array of replication products to give you the widest set of options possible for improving your bandwidth utilization on point-to-point communications. F5 BIG-IP WOM: making long-distance SnapVault Secure, Fast, and Available.

Related blogs & articles:
Enhancing NetApp SnapVault Performance with F5 BIG-IP WOM
BIG-IP WAN Optimization Module Performance
F5 BIG-IP WAN Optimization Module in Data Replication Environments
Byte Caching, Compression, and WAN Optimization
No Really. Broadband.
The Golden Age of Data Mobility?
Deduplication and Compression – Exactly the same, but different.
F5 Friday: BIG-IP WOM With Oracle Products
F5 Friday: F5 BIG-IP WOM Puts the Snap(py) in NetApp SnapMirror

F5 Friday: F5 BIG-IP WOM Puts the Snap(py) in NetApp SnapMirror
Data replication is still an issue for large organizations, and as data growth continues, those backup windows are getting longer and longer… With all the hype surrounding cloud computing and dynamic resources on demand for cheap, you’d think that secondary and tertiary data centers are a thing of the past. Not so. Large organizations with multiple data centers – even those evolving out of growth at remote offices – still need to be able to replicate and back up data between corporate-owned sites. Such initiatives are often fraught with peril due to the explosive growth in data which, by all accounts, is showing no signs of slowing down any time soon. The reason this is problematic is that the pipes connecting those data centers are not expanding, and doing so simply to speed up transfer rates and decrease transfer windows is cost prohibitive. It’s the same story as any type of capacity – expanding to meet periodic bursts results in idle resources, and idle resources are no longer acceptable in today’s cost-conscious, waste-not want-not data centers. Organizations that have a NetApp solution for storage replication in place are in luck today, as F5 has a solution that can improve transfer rates by employing data reduction technologies: F5 BIG-IP WAN Optimization Module (WOM). One of the awesome advantages of WOM (and all F5 modules) over other solutions is that a BIG-IP module is a component of our unified application delivery platform. That’s an advantage because of the way in which BIG-IP modules interact with one another and are integrated with the rest of a dynamic data center infrastructure. The ability to leverage core functionality across a shared, high-speed internal messaging platform means context is never lost and interactions are optimized internally, minimizing the impact of chaining multiple point solutions together across the network.
I could go on and on myself about WOM and its benefits when employed to improve site-to-site transfer of big data, but I’ve got colleagues like Don MacVittie who are well-versed in telling that story, so I’ll let him introduce this solution instead. Happy Replicating! NetApp’s SnapMirror is a replication technology that allows you to keep a copy of a NetApp storage system on a remote system over the LAN or WAN. While NetApp has built in some impressive compression technology, there is still room for improvement in the WAN space, and F5 BIG-IP WOM picks up where SnapMirror leaves off. Specialized in getting the most out of your WAN connection, WOM (WAN Optimization Module) improves your SnapMirror performance and WAN connection utilization. Not just improves it: in our testing, it showed a manifold increase in both throughput and overall performance. And since it is a rare WAN connection that is only transferring SnapMirror data, the other applications on that same connection will also see an impressive benefit. Why upgrade your WAN connection when you can get the most out of it at any throughput rating? Add in the encrypted tunneling capability of BIG-IP WOM and you are more fast, more secure, and more available. With the wide range of adjustments you can make to determine which optimizations apply to which data streams, you can customize your traffic handling to suit the needs of your specific usage scenarios. Or as we like to say: IT Agility, Your Way. You can find out more about how NetApp SnapMirror and F5 BIG-IP WOM work together by reading our solution profile.
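The kind of data reduction described above can be illustrated with a toy symmetric-deduplication scheme: sender and receiver keep matching chunk caches, so a chunk that has crossed the link once travels afterward as a short hash reference instead of the full payload. This is a conceptual sketch only, not F5's implementation, and the chunk size and sample data are fabricated.

```python
import hashlib

CHUNK = 8  # tiny chunk size for illustration

def dedupe_send(data, cache):
    """Split into chunks; send raw payload only for unseen chunks."""
    wire = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).digest()
        if digest in cache:
            wire.append(("ref", digest))   # already seen: send reference
        else:
            cache[digest] = chunk
            wire.append(("raw", chunk))    # first sight: send payload
    return wire

def dedupe_receive(wire, cache):
    """Rebuild the stream, learning new chunks as they arrive."""
    out = b""
    for kind, value in wire:
        if kind == "raw":
            cache[hashlib.sha256(value).digest()] = value
            out += value
        else:
            out += cache[value]
    return out

data = b"REPLICAT" * 3 + b"NEWBLOCK"       # three repeats plus one new chunk
wire = dedupe_send(data, {})
assert dedupe_receive(wire, {}) == data    # lossless reconstruction
raw_count = sum(1 for kind, _ in wire if kind == "raw")
print(raw_count)  # 2: the repeated chunk once, plus NEWBLOCK
```

Because both caches stay in lockstep, repeated content costs only a digest on the wire, which is where the large transfer reductions for replication traffic come from.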
Related blogs & articles:
Why Single-Stack Infrastructure Sucks
F5 Friday: Microsoft and F5 Lync Up on Unified Communications
F5 Friday: The 2048-bit Keys to the Kingdom
All F5 Friday Posts on DevCentral
F5 Friday: Elastic Applications are Enabled by Dynamic Infrastructure
Optimizing NetApp SnapMirror with BIG-IP WAN Optimization Module
Top-to-Bottom is the New End-to-End

Improving WAN VM transfer Speed with NetApp Flexcache and F5 BIG-IP WOM
That’s a mouthful, but this is just a quick blog to point you at the actual blog I guest wrote for our F5 Fridays series. In short, we’ve been toying with F5 BIG-IP WOM in the labs as a performance and distance enhancement tool for VMWare vMotion moves over the WAN when NetApp Flexcache is deployed. Pretty cool stuff, and while I wasn’t involved in all of the testing that went on, as the Technical Marketing Manager for WOM I did get to see the results as they rolled out of the lab. Take a read if you’re doing Long Distance VMWare transfers with vMotion, it’s well worth the five minutes of your life – Efficient Long Distance Transfer of VMs with F5 BIG-IP WOM and NetApp Flexcache. And next week we’ll return to my regularly scheduled meandering about IT management, storage, and WAN Optimization…

F5 Friday: Efficient Long Distance Transfer of VMs with F5 BIG-IP WOM and NetApp Flexcache
BIG-IP WOM and NetApp Flexcache speed the movement of your VMs across the WAN. One of the major obstacles to the concept of cloud computing and “on-demand” is implementing the “on-demand” piece of the equation. Virtualization in theory allows organizations to shuffle virtual machine images of applications to and fro without the Big Hairy Mess that’s generally involved in physically migrating an application from one location to another. Just the differences in hardware, and thus potential conflicts between hardware drivers and the inevitable “lack of support” for some piece of critical hardware, can doom an application migration. Virtualization, of course, removes these concerns and moves the interoperability issues up to the hypervisor layer. That makes migration a much simpler process and, assuming all is well at that layer, mitigates many of the issues that had been present in the past with moving an application – such as ensuring all the right files and adapters and connections travel with the application. It’s an excellent packaging scheme that suits migration as well as it does rapid provisioning. The problem, of course, has been in the network. Virtual images aren’t small by any stretch of the imagination, while Internet connectivity has always been more constrained. Organizations did not run out and increase the amount of bandwidth they had available upon embarking on their virtualization journey, and even if they did, they still have little to no control over the quality of that connection. So while it was possible in theory to move these packages of applications around to and fro, it wasn’t always necessarily feasible. Thus it is that solutions are appearing to address these problems and make it not only possible but feasible to perform migration of virtual images on demand. NetApp Flexcache is just one such solution. Flexcache leverages data reduction and caching to ease the burden on the network of transferring such “big data”.
Alone it is a powerful addition to vMotion, but it’s focused on storage, on the image, on data. It’s not necessarily addressing many of the core network issues that can cause a storage vMotion to fail. That’s where we come in, because F5 BIG-IP WOM (WAN Optimization Module) does address those core network issues and makes it possible to successfully complete a storage vMotion across the WAN. Application migration, on-demand. Today’s F5 Friday is a guest post by Don MacVittie who, as you may know, keeps a close eye on storage and WAN optimization concerns and technologies in his blog. So without further ado, I’ll let Don explain more about the combined F5 BIG-IP WOM and NetApp Flexcache solution for long distance transfer of virtual machines. VMWare vMotion allows you to transfer VMs from one server to another, or even from one datacenter to another, provided the latency between the datacenters is small. It does this in a two-step process that first moves the image, and then moves the running “dynamic” portions of the VM. Moving the image is much more intensive than moving the dynamic bits, as the image is everything you have on disk for the VM, while the dynamic part is just the current state of the machine. Moving the image is referred to as “storage vMotion” in VMWare lingo. NetApp Flexcache enhances the experience by handling the transfer of the image for you, making it possible to utilize Flexcache’s data reduction and cache refresh mechanisms to transfer the image for the vMotion system. While Flexcache alone is a powerful addition to vMotion, it does not address latency issues, and if the network is lossy, it will suffer performance degradation as any application will. F5 BIG-IP WAN Optimization Module (WOM) boosts the performance and reliability of your WAN connections, be they down the street or on a different continent.
In the datacenter-to-datacenter scenario, utilizing iSessions, two BIG-IP WOM devices can drastically improve the performance of your WAN link. Adding F5 BIG-IP WOM to the VMWare/Flexcache architecture provides you with latency mitigation techniques, loss prevention mechanisms, and more data reduction capability. As shown in this solution profile, a VMWare/Flexcache/WOM solution greatly increases the mobility of your VMs between datacenters. It also allows you to optimize all traffic flowing between the source and destination datacenters, not just the vMotion and Flexcache traffic. While the solution involving Flexcache (diagrammed in the above-mentioned Solution Profile) is more complex, a generic depiction of F5 BIG-IP WOM’s ability to speed, secure, and stabilize data transfers looks like this: So whether you are merging datacenters, shifting load, or opening a new datacenter, VMWare vMotion + NetApp Flexcache + F5 BIG-IP WOM are your path to quick and painless VM transfers across the WAN.

Related blogs & articles:
F5 Friday: Rackspace CloudConnect - Hybrid Architecture in Action
F5 Friday: The 2048-bit Keys to the Kingdom
All F5 Friday Posts on DevCentral
F5 Friday: Elastic Applications are Enabled by Dynamic Infrastructure
F5 Friday: It is now safe to enable File Upload
F5 Friday: Application Access Control - Code, Agent, or Proxy?
Oracle RMAN Replication with F5's BIG-IP WOM
Don MacVittie - WOM
Nojan Moshiri - BIGIP-WOM
How May I Speed and Secure Replication? Let Me Count the Ways.
WOM nom nom nom nom – DevCentral
WOM and iRules - DevCentral

A Storage (Capacity) Optimization Buying Spree!
Remember when Beanie Babies were free in Happy Meals, and tons of people ran out to buy the Happy Meals but only really wanted the Beanie Babies? Yeah, that’s what the storage compression/dedupe market is starting to look like these days. Lots of big names are out snatching up at-rest de-duplication and compression vendors to get the products onto their sales sheets; we’ll have to see if they wanted the real value of such an acquisition – the bright staff that brought these products to fruition – or whether they’re buying for the product and going to give or throw away the meat of the transaction. Yeah, that sentence is so pun laden that I think I’ll leave it like that. Except there is no actual meat in a Happy Meal, I’m pretty certain of that. Today IBM announced that it is formally purchasing Storwize, a file compression tool designed to compress data on NAS devices. That leaves few enough players in the storage optimization space, and only one – Permabit – whose name I readily recognize. Since I wrote the blog about Dell picking up Ocarina, and this is happening while that blog is still being read pretty avidly, I figured I’d weigh in on this one also. Storwize is a pretty smart purchase for IBM on the surface. The products support NAS at the protocol level – they claim “storage agnostic”, but personal experience in the space is that there’s no such thing… CIFS and NFS tend to require tweaks from vendor A to vendor B, meaning that to be “agnostic” you have to “write to the device”. An interesting conundrum. Regardless, they support CIFS and NFS, are stand-alone appliances that the vendors claim are simple to set up and require little or no downtime, and offer straight-up compression. Again, Storwize and IBM are both claiming zero performance impact; I cannot imagine how that is possible in a compression engine, but that’s their claim. The key here is that the products work on everyone’s NAS devices. If IBM is smart, they will still work on everyone’s devices in a year.
Related Articles and Blogs

IBM Buys Storewize
Dell Buys Ocarina Networks
Wikipedia definition – Capacity Optimization
Capacity Optimization – A Core Storage Technology (PDF)

Dell Buys Ocarina Networks. Dedupe For All?
Storage at-rest de-duplication has been a growing point of interest for most IT staffs over the last year or so, just because de-duplication allows you to purchase less hardware over time, and if that hardware is a big old storage array sucking a ton of power and costing a not-insignificant amount to install and maintain, well, it’s appealing. Most of the recent buzz has been about primary storage de-duplication, but that is merely a case of where the market is. Backup de-duplication has existed for a good long while, and secondary storage de-duplication is not new. Only recently have people decided that at-rest de-dupe was stable enough to give it a go on their primary storage – where all the most important and/or active information is kept. I don’t think I’d call it a “movement” yet, but it does seem that the market’s resistance to anything that obfuscates data storage is eroding at a rapid rate due to the cost of the hardware (and attendant maintenance) to keep up with storage growth.

Related Articles and Blogs

Dell-Ocarina deal will alter landscape of primary storage deduplication
Data dedupe technology helps curb virtual server sprawl
Expanding Role of Data Deduplication
The Reality of Primary Storage Deduplication

If I Were in IT Management Today…
I’ve had a couple of blog posts talking about how there is a disconnect between “the market” and “the majority of customers” where things like cloud (and less so storage) are concerned, so I thought I’d try this out as a follow-on: if I were running your average medium to large IT shop (not talking extremely huge, just medium to large), what would I be focused on right now?

By way of introduction, for those who don’t know, I’m relatively conservative in my use of IT. I’ve been around the block, been burned a few times (OS/2 Beta Tester, WFW, WP… The list goes on), and the organizations I’ve worked for where I was part of “Enterprise IT” were all relatively conservative (Utilities, Financials), while the organizations I worked in Product or App Development for were all relatively cutting edge. I’ve got a background in architecture, App Dev, and large systems projects, and think that IT Management is (sadly) 50% corporate politics and 50% actually managing IT.

I’ll focus on problems that we all have in general here, rather than a certain vertical, and most of these problems are applicable to all but the largest and smallest IT shops today. By way of understanding, this list is the stuff I would be spending research or education time on, and it is kept limited because the bulk of you and your staff’s time is of course spent achieving or fixing for the company, not researching – though most IT shops I know of have room for the amount of research I’m talking about below.

The Problem With Storage Growth is That No One Is Minding the Store
In late 2008, IDC predicted an annual growth rate of more than 61% for unstructured data in traditional data centers through 2012. The numbers appear to hold up thus far; perhaps they were even conservative. This was one of the first reports to include the growth from cloud storage providers in their numbers, and that particular group was showing a much higher rate of growth – understandable, since they have to turn up the storage they’re going to resell. The update to this document, titled Worldwide Enterprise Systems Storage Forecast and published in April of this year, shows that even in light of the recent financial troubles, storage space is continuing to grow.

Related Articles and Blogs

Unstructured Data Will Become the Primary Task for Storage
Our Storage Growth (good example of someone who can’t do the above)
Tiered Storage Tames Data Storage Growth says Construction CIO
Data Deduplication Market Driven by Storage Growth
Tiering is Like Tables or Storing in the Cloud Tier
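That 61% annual growth rate compounds faster than intuition suggests. A quick back-of-the-envelope calculation, assuming the rate held steady from 2008 through 2012:

```python
# Compound growth: normalize 2008 capacity to 1.0 and apply 61% per year.
base = 1.0   # unstructured data capacity in 2008 (normalized)
rate = 0.61  # IDC's predicted annual growth rate
years = 4    # 2008 -> 2012

multiplier = base * (1 + rate) ** years
print(f"{multiplier:.1f}x the 2008 capacity by 2012")  # -> 6.7x
```

In other words, a shop holding 100 TB of unstructured data in 2008 would be staring at roughly 670 TB by 2012 if the prediction held – which is exactly why nobody minding the store becomes expensive so quickly.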