Distributed Cloud Support for NAS Migrations from On-Premises Approaches to Azure NetApp Files

F5 Distributed Cloud (XC) Secure Multicloud Networking (MCN) connects and secures distributed applications across offices, data centers, and various cloud platforms. Much of this application traffic is web-based, carried on ports such as TCP 443; however, other traffic types are also prevalent in an enterprise's mix, including SSH and relational database protocols.


One major component of networked traffic is Network-Attached Storage (NAS), which in the past was frequently carried over LANs between employees in offices and co-located NAS appliances, perhaps in wiring closets or server rooms. An example of such an appliance is the ONTAP family from NetApp, which comes in both physical and virtual form factors.


NAS protocols are particularly useful because they integrate file stores into operating systems such as Microsoft Windows or Linux distributions as directories, mounted for easy access to files at any time and often permanently. This contrasts with SSH file transfers, which are typically ephemeral actions and far less tightly integrated with the host operating system.


With the rise of remote work, NAS appliances increasingly see file reads and writes to these directories traversing wide-area links. In fact, one study analyzing traffic changes brought on by the Covid-19 pandemic observed a 22 percent increase in file transfer protocol (FTP) traffic in a single year, suggesting that how files are accessed has changed fundamentally in recent years.

 

Distributed Cloud and the Movement towards Centralized Enterprise Storage

 

A traditional concern with serving NAS files to offices from a centralized point, such as a cloud-hosted file repository, is latency and reliability. With F5 Distributed Cloud's 12 Tbps aggregate backbone and dedicated RE-to-RE links, the network component is both highly durable and performant. Combined with the efficiencies of a single corporate file distribution point and the uptime guarantees ("nines") of modern cloud services, the case for cloud-served NAS is compelling. Replacing on-premises storage appliances with a secure, networked service eliminates the need to maintain costly spares, which are effectively a shadow NAS appliance infrastructure, along with onerous RMA procedures, and helps accomplish the goal of shrinking and greening office wiring closets.


To demonstrate this centralized model for a NAS architecture, a configuration was created in which a simulated west coast office was connected by F5 Distributed Cloud to Azure NetApp Files (ANF) instantiated in Azure's East US 2 region. ANF is Microsoft Azure's native file-serving solution, managed by NetApp, with data throughput that increases in lockstep with the amount of reserved storage pool capacity.


Different quality of service (QoS) levels are selectable by the consumer. The streamlined ANF configuration workflow lets various transaction-latency thresholds be requested, so even the most demanding relational database operations are typically accommodated. Microsoft offers additional details on ANF here; however, this article should sufficiently demonstrate the combined ANF and F5 Distributed Cloud Secure MCN solution for most readers.

 

Distributed Cloud and Azure NetApp Files Deployment Example

 

NAS in the enterprise today largely involves either the NFS or SMB protocol, both of which can be used within Windows and Linux environments and make remote directories appear and perform as if they were local to users. In our example, a western US point of presence served as the simulated remote office, with standard Linux hosts acting as the consumers of NetApp volumes. In the east, a corporate VNET was deployed in an Azure resource group (RG) in East US 2, with one subnet delegated to Azure NetApp Files (ANF).


To securely connect the west coast office to the eastern Azure ANF service, F5 Distributed Cloud Secure MCN was used to create a layer-3 multi-cloud network. This is achieved by dropping an F5 customer edge (CE) virtual appliance into both the office and the Azure VNET in the east. The CE is a two-port security appliance. The inside interfaces on both CEs were attached to the same global virtual network, an exclusive layer-3 association that allows simple connectivity while fully preserving privacy.


In keeping with the promise of SaaS, Distributed Cloud users require no routing protocol setup; the solution takes care of the control plane, including routing and encryption. This concept could scale to hundreds of offices, each equipped with a CE and attached to the same global virtual network.


At boot-up, CEs automatically attach via IPsec (or SSL) tunnels to geographically close F5 backbone nodes, called regional edge (RE) sites. As with tunnel establishment, routing tables are updated under the hood, allowing a turnkey security relationship between Azure NetApp Files volumes and consuming offices. The setup is depicted as follows:

Set Up Azure NetApp Files (ANF) Volumes in Minutes

To put this centralized approach to offering NAS volumes to remote offices or locations into practice, a short series of steps is undertaken, all of which can be done through the standard Microsoft Azure portal. The four steps are listed below, with screenshots provided for key points in the process and an equivalent Azure CLI sketch following the list:

 

  • If not starting from an existing Resource Group (RG), create a new RG and add an Azure VNET to it. Delegate one subnet in the VNET to support ANF: under “Delegate Subnet to a Service”, select the entry “Microsoft.NetApp/volumes” from the pull-down list.
  • Within the Resource Group, choose “Create” and make a NetApp account. This appears in the Azure Marketplace listings as “Azure NetApp Files”.
  • In your NetApp account, under “Storage service”, create a capacity pool. Size the pool appropriately; larger is typically better, since numerous volumes, supporting your choice of NFSv3/v4 and SMB protocols, will be created from this single, large disk pool.
  • Create your first volume, selecting its size, the NAS protocols to support, and the QoS parameters that meet your business requirements.
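
For readers who prefer scripting to the portal, the same four steps can be approximated with the Azure CLI. This is a minimal sketch only; the resource names, region, and address prefix are illustrative assumptions, and flag spellings should be verified against your installed az version.

# 1. Delegate a subnet in the VNET to Azure NetApp Files
az network vnet subnet create --resource-group corp-rg --vnet-name corp-vnet --name anf-subnet --address-prefixes 10.0.9.0/28 --delegations Microsoft.NetApp/volumes

# 2. Create the NetApp account
az netappfiles account create --resource-group corp-rg --name corp-anf --location eastus2

# 3. Create a capacity pool (size specified in TiB)
az netappfiles pool create --resource-group corp-rg --account-name corp-anf --name pool1 --location eastus2 --size 2 --service-level Standard

# 4. Create the first volume (quota in GiB, NFSv3 protocol)
az netappfiles volume create --resource-group corp-rg --account-name corp-anf --pool-name pool1 --name f5-distributed-cloud-vol-001 --location eastus2 --service-level Standard --usage-threshold 100 --file-path f5-distributed-cloud-vol-001 --protocol-types NFSv3 --vnet corp-vnet --subnet anf-subnet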

 

As seen below, when adding a capacity pool, simply follow the numerical sequence; a newly created sample 2 TiB pool is highlighted, and pools of up to 1,024 TiB (1 PiB) are possible (click image to enlarge).

 

The capacity pool shown uses the “Standard” service level, as opposed to “Premium” or “Ultra”. With the QoS type set to Auto, Azure NetApp Files provisions throughput, in megabytes per second, in proportion to the number of TiB in the pool. Throughput also increases with the service level; for Standard, as shown, 8 megabytes per second per TiB is allocated, so the 2 TiB pool above is provisioned roughly 16 MB/s.


Beyond throughput, ANF also provides the lowest average read and write latencies in the Azure portfolio of storage offerings. As such, ANF is a very good fit for database deployments that require tightly bounded average latency for mission-critical transactions. Deeper discussion of ANF service levels may be explored through the Microsoft document here.


The next screenshot shows the simple click-through sequence for adding a volume to the capacity pool: click “Volumes” and then the “+ Add volume” button. A resulting sample volume is displayed in the figure with key parameters highlighted.

In the above volume (“f5-distributed-cloud-vol-001”), the NAS protocol selected was NFSv3 and the size of the volume (“Quota”) was set to 100 GiB.


Set Up F5 Distributed Cloud Office-to-Azure Connectivity


To access the volume in a secure and highly responsive manner from corporate headquarters, remote offices, or existing data centers, three items from F5 Distributed Cloud are required:

  1. A customer edge (CE) node, normally with two ports, must be deployed in the Azure RG VNET.  This establishes the Azure instance as a “site” within the Distributed Cloud dashboard.  Hub-and-spoke architectures may also be used if required, with VNET peering allowing the secure multi-cloud network (MCN) solution to operate seamlessly.
  2. A CE is deployed at a remote office or data center where file storage services are required by various lines of business.  The CE is frequently deployed as a virtual appliance or installed on a bare-metal server and typically has two ports.
  3. To instantiate a layer-3 MCN service, the inside ports of the two CEs are “joined” to a global virtual network created by the enterprise in the Distributed Cloud console, although the REST API and Terraform are also deployment options.

With the inside port of both the Azure and office CEs joined to the same global virtual network, the “inside” subnets can now communicate with each other securely, with traffic normally exchanged over encrypted, high-speed IPsec tunnels into the F5 XC global fabric.
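
Once both inside interfaces are joined, a quick sanity check from an office Linux host confirms the ANF mount target is reachable across the fabric. The 10.0.9.4 address matches the mount target used later in this article; ICMP may be filtered in some environments, in which case the showmount query (from the nfs-utils or nfs-common package) is the more telling test.

# Basic reachability to the ANF mount target over the MCN fabric
ping -c 3 10.0.9.4

# List the NFS exports advertised by the ANF mount target
showmount -e 10.0.9.4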


The following screenshot demonstrates adding the Azure CE inside interface to a global virtual network, allowing MCN connectivity to remote office clients requiring access to volumes.  Further restrictions to keep out unauthorized clients are available within the NAS protocols themselves, such as export policies in NFS and ACL rules in SMB/CIFS, both of which can be configured quickly within ANF.
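
As one illustration of such a restriction, an NFSv3 export-policy rule limiting the volume to an office subnet could be added with the Azure CLI along the following lines. The subnet and resource names are assumptions carried over from the earlier sketch, and the exact parameter names should be checked against your az netappfiles version.

# Allow only the office inside subnet read/write NFSv3 access to the volume (illustrative values)
az netappfiles volume export-policy add --resource-group corp-rg --account-name corp-anf --pool-name pool1 --volume-name f5-distributed-cloud-vol-001 --rule-index 1 --allowed-clients 192.168.20.0/24 --nfsv3 true --nfsv41 false --cifs false --unix-read-write true --unix-read-only false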

Remote Office Access – Establish Read/Write File Access to Azure ANF over F5 Distributed Cloud

 

With ANF configured and F5 Distributed Cloud now providing a layer-3 multi-cloud network (MCN) solution to patch enterprise offices into the centralized storage, some confirmation that the solution works as expected was desired.  First, a choice of protocol was made.  When configuring ANF, the normal access choices are NFSv3/v4, SMB/CIFS, or both protocols concurrently.


Historically, Microsoft hosts made use of SMB/CIFS while Linux/Unix hosts preferred NFS; today, however, both protocols are used throughout enterprises, one example being the long-standing Samba (SMB/CIFS) server support in the Linux world.
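
Had the volume been provisioned for SMB rather than NFS, a Linux office host could attach it just as easily using the cifs-utils package. The share path, account, and mount point below are illustrative assumptions; an ANF SMB volume also requires an Active Directory connection to be configured in the NetApp account.

# Mount an SMB/CIFS share from an ANF volume on a Linux host (illustrative names)
sudo mkdir -p /mnt/anf-smb
sudo mount -t cifs //anf-1234.contoso.local/f5-distributed-cloud-vol-001 /mnt/anf-smb -o vers=3.0,username=svc_files,domain=CONTOSO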


Azure NetApp Files provides all the necessary command samples to get hosts connected without difficulty.  For instance, to mount the volume to a folder off the Linux user home directory, such as the sample folder “f5-distributed-cloud-vol-001”, the following single command, per the ANF suggestion, connects the office Linux host to the central storage in East US 2:

 

sudo mount -t nfs -o rw,hard,rsize=262144,wsize=262144,vers=3,tcp 10.0.9.4:/f5-distributed-cloud-vol-001 f5-distributed-cloud-vol-001
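
Note that the target directory must exist before the mount is issued. A quick verification and an optional /etc/fstab entry for persistence across reboots might look like the following; the 10.0.9.4 mount target comes from the ANF mount instructions, while the home-directory path is illustrative.

# Create the mount point first (run from the user home directory, matching the mount command above)
mkdir -p f5-distributed-cloud-vol-001

# Confirm the volume is mounted and writable
df -h | grep f5-distributed-cloud-vol-001
touch f5-distributed-cloud-vol-001/write-test && rm f5-distributed-cloud-vol-001/write-test

# Optional /etc/fstab entry to remount automatically at boot
# 10.0.9.4:/f5-distributed-cloud-vol-001  /home/<user>/f5-distributed-cloud-vol-001  nfs  rw,hard,rsize=262144,wsize=262144,vers=3,tcp  0  0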

 

At this point the volume is available for day-to-day tasks, including read and write operations, as if the NAS solution were local to the office, often literally down the hallway.


Remote Office Access - Demonstration of Azure ANF over F5 Distributed Cloud in Action

To repeatedly exercise file writes from a west coast US office to the east coast ANF deployment in East US 2 (Richmond, Virginia), a simple shell script was used to write a file to a volume, delete it, and repeat over time.  The following sample wrote a file of 20,000 bytes to the ANF service, waited a few seconds, and then removed the file before beginning another cycle.
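
A minimal sketch approximating that loop is shown below; it assumes the mount point created earlier, and the file name and pause interval are illustrative.

#!/bin/bash
# Repeatedly write a 20,000-byte file to the ANF volume, pause, then delete it
VOL_DIR=~/f5-distributed-cloud-vol-001   # mount point from the earlier mount command
while true; do
  head -c 20000 /dev/urandom > "$VOL_DIR/latency-test.bin"   # 20,000-byte write
  sleep 5
  rm -f "$VOL_DIR/latency-test.bin"
  sleep 5
done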

At the lowest common denominator, packet analysis of the ensuing traffic from the western US office indicates both network and application latency sample values.  As depicted in the following Wireshark trace, the TCP response to a transmitted segment carrying an NFS command was observed to arrive in just 74.5 milliseconds.  This prompt round-trip latency for a cross-continent data plane suggests a performant Distributed Cloud MCN service level.  It is easily seen as the offset from the reference timestamp (time equal to zero) of the NFSv3 Create Call. Click on image to expand.

The NAS response from ANF (packet 185) arrives less than 1 millisecond later, suggesting a very responsive, well-tuned NFS control plane in ANF.
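
To reproduce captures like these on the office host, the NFS traffic to the ANF mount target can be recorded with tcpdump and then opened in Wireshark; the interface name and mount target address are assumptions based on the example above.

# Capture NFS traffic (TCP 2049) between the office host and the ANF mount target
sudo tcpdump -i eth0 host 10.0.9.4 and tcp port 2049 -w anf-nfs-trace.pcap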


To measure the actual write time of a file from west coast to east coast, the following trace captures the 20,000-byte file write performed by the shell script.

In this case, the TCP segments making up the file, specifically the large packet body lengths called out in the screenshot, are delivered efficiently, with no TCP retransmissions, TCP zero-window events, or other indicators of layer 3 or 4 health concerns.


The entirety of the write is measured at the packet layer to take only 150.8 milliseconds.


Since packet-level analysis is not the most turnkey way to monitor file read and write performance, a set of Linux and Windows utilities can also be leveraged.   The Linux utility nfsiostat was used concurrently with the test file writes and produced similarly good latency measurements.
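
nfsiostat ships with the nfs-utils (or nfs-common) package; a typical invocation alongside the write test, with an illustrative reporting interval and the mount point used earlier, is simply:

# Report NFS client read/write statistics for the mounted volume every 5 seconds
nfsiostat 5 ~/f5-distributed-cloud-vol-001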

nfsiostat monitoring of the west-coast-to-east-coast write testing for the 20,000-byte file indicated an average write time to ANF of 151 milliseconds.

 
The measurements presented here are simply observational, intended as rapid, digestible techniques for readers interested in service assurance when running ANF over an XC layer-3 MCN offering.   For more rigorous monitoring treatments, Microsoft provides guidance on performing one's own measurements of Azure NetApp Files here.

 

Summary


As enterprise-class customers continue to look to the cloud for compute performance, GPU access, and economies-of-scale savings for key workloads, a centralized, scalable storage counterpart to that story offers corresponding benefits.

 
F5 Distributed Cloud offers the reach and performance to securely tie existing offices and data centers to cloud-native storage solutions.   One example of this approach to modernizing storage was covered in this article: the turnkey ability to begin transitioning from traditional on-premises NAS appliances to scalable, cloud-native volumes.


The Azure NetApp Files approach to serving read/write volumes allows modern hosts, including Windows and Linux distributions, to utilize virtually unlimited folder sizes with service levels adjustable to business needs. 

Updated May 30, 2024
Version 2.0