Bolster AI Queries - Access to NetApp-Instantiated Corporate Databases via MCP and Distributed Cloud

Since its release in November 2024, Model Context Protocol (MCP) and its ability to enable richer AI outcomes by considering additional data sources have been a red-hot topic. Unlike Retrieval-Augmented Generation (RAG) solutions, which supply suggested additional datapoints for an AI large language model (LLM) to consider, MCP takes a complementary approach: it opens up various tools for retrieving up-to-date data (think of a weather forecast tool or a real-time search engine query tool) that the LLM can consult before generating answers.


An interesting use case for MCP is opening up tool access into corporate databases, where tables covering product inventories, pricing details, and suppliers might exist. This allows, as an example, an internally accessible LLM to provide employees with much richer, far more tactical answers.


In this article, we will demonstrate, as a simple example, how a sample relational database in one city, with tables residing on NetApp storage, can be securely accessed by the HTTP-based transactions of MCP through F5 Distributed Cloud (XC). In this arrangement, the AI tools provided by AI teams can easily be geographically isolated from the long-established, distributed corporate databases they leverage.


In this example, the LLMs are controlled from a San Jose office, and the corporate relational databases are elsewhere. In our scenario, we will use PostgreSQL, frequently just called Postgres, in a Seattle-area office, instantiated on a NetApp ONTAP Select appliance. As we discovered, the F5 distributed load balancer, part of the App Connect offering in XC, ensures that only our AI LLM and its MCP client can reach across the cloud service to interact with the potentially sensitive corporate data housed in Postgres.


MCP Client and Servers: Co-Located vs Networked

With the release of the original Anthropic MCP specification in November of 2024, a number of MCP clients entered the market, generating great interest. Claude Desktop, for Windows and Mac, is a popular option: it connects Anthropic cloud-based LLMs to your MCP server-provided tools, with the included MCP client component acting as the glue for this enhancement of LLM responses. Various integrated development environment (IDE) MCP clients also hit the market, with solutions like Cursor and Windsurf. Due to its simple graphical user interface and wide embrace, Claude Desktop was the platform used in this exercise.


The MCP protocol fundamentals are discussed in numerous online articles. A quick start can be found in the protocol's standard documentation, which offers both a sixty-second briefing and detailed reference material.


This article is about networking in MCP: connecting MCP clients and servers both securely and efficiently. The first iteration of MCP in November 2024 specified two methods of communication, stdio (standard input/output) and SSE (Server-Sent Events). The first is still the only natively supported approach in Claude Desktop's community edition, and it expects to find the MCP server component, which offers access to rich tools, somewhere on the same host as Claude Desktop itself. This is a quick and easy way to get going with lab projects, but a truly networked approach, one where an MCP server can support hundreds of MCP clients, is more interesting for production environments.
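For readers new to the co-located model, Claude Desktop discovers local stdio servers through its claude_desktop_config.json file. A minimal sketch follows; the server name and script path are hypothetical placeholders, not the configuration used later in this article:

```json
{
  "mcpServers": {
    "postgres-tools": {
      "command": "python",
      "args": ["/path/to/my_mcp_server.py"]
    }
  }
}
```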


MCP Remote Access - Server-Sent Events (SSE) and Streamable HTTP

The November MCP release described the Server-Sent Events approach to binding MCP clients and servers, namely two separate HTTP/S endpoints. The client sends a GET to the first endpoint, configured during MCP client-side setup; the response supplies a second endpoint that all future client transactions will use, in the form of HTTP POSTs.


The interesting aspect is that, after the initial GET, the first connection persists as a long-lived SSE channel. In other words, the server holds it open, and new communications ("events") occur on this socket only when the server decides it has data to update the client with, perhaps updating the list of available tools or revising tool usage instructions.


The client, on the other hand, will use HTTP POSTs interactively at its discretion. This is conventional HTTP in that the client remains in charge of transactions, for instance, providing data from the LLM and requesting a tool act upon that data.
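To make the two-endpoint dance concrete, here is a minimal Python sketch of the client side, using the requests library; the server URL is a hypothetical stand-in, and a production client would also complete the full initialize/initialized handshake:

```python
import requests

BASE = "http://mcp-server.example.com:8080"  # hypothetical MCP server address

# 1. Open the long-lived SSE channel with a GET; the server's first event
#    announces the endpoint the client must POST to from now on.
sse = requests.get(f"{BASE}/sse", stream=True,
                   headers={"Accept": "text/event-stream"})
post_path = None
for raw in sse.iter_lines(decode_unicode=True):
    # SSE frames look like "event: endpoint" then "data: /message?sessionID=..."
    if raw and raw.startswith("data:"):
        post_path = raw.split("data:", 1)[1].strip()
        break

# 2. All client-initiated traffic now flows as ordinary HTTP POSTs carrying
#    JSON-RPC 2.0; responses arrive back as events on the SSE channel above.
init = {"jsonrpc": "2.0", "id": 1, "method": "initialize",
        "params": {"protocolVersion": "2024-11-05", "capabilities": {},
                   "clientInfo": {"name": "sketch-client", "version": "0.1"}}}
requests.post(f"{BASE}{post_path}", json=init)
```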


An issue with the SSE ("first") connection is that network solutions often do not support extremely long-lived TCP connections that carry infrequent data events, and should the connection be reset, the SSE channel is not easily resumed, as connection setup is, by design, initiated from client to server.


The following protocol trace demonstrates the server-to-client flow of traffic that SSE sees over time. Notice the steady state is simply events, asynchronously generated by the MCP server.

For a number of reasons, the March 2025 update to MCP introduced "Streamable HTTP," which provides more flexibility for networking solutions. For one thing, unlike the original approach, a single API endpoint may now be used rather than separate endpoints for the initial GET and subsequent POSTs.

The network traffic may be simplified to the point where an MCP server-facilitated tool, perhaps something like a simple "calculator" function, sees the MCP server provide a response and close the single connection. Should the MCP server reveal more complex tools, perhaps requiring many seconds to complete a request, a statefulness effect is achieved through a server-provided message identifier value. The MCP client may check in on progress at its discretion, or after a network impairment such as a transient home office Wi-Fi blip.
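As a sketch of the difference, a streamable HTTP client needs only one endpoint and can carry session state in a header. The /mcp path and Mcp-Session-Id handling below follow the March 2025 revision, but the server URL is hypothetical and a complete client would also send the initialized notification:

```python
import requests

ENDPOINT = "http://mcp-server.example.com:8080/mcp"  # single endpoint; path is an assumption
HEADERS = {"Accept": "application/json, text/event-stream"}

# The first POST initializes the session; the server may assign a session id
# in the Mcp-Session-Id response header (per the 2025-03-26 revision).
resp = requests.post(ENDPOINT, headers=HEADERS, json={
    "jsonrpc": "2.0", "id": 1, "method": "initialize",
    "params": {"protocolVersion": "2025-03-26", "capabilities": {},
               "clientInfo": {"name": "sketch-client", "version": "0.1"}}})
session = resp.headers.get("Mcp-Session-Id")

# Later requests simply replay the id; no long-lived GET is required, so a
# dropped connection (a home office Wi-Fi blip) can just be retried.
extra = {"Mcp-Session-Id": session} if session else {}
tools = requests.post(ENDPOINT, headers={**HEADERS, **extra},
                      json={"jsonrpc": "2.0", "id": 2, "method": "tools/list"})
print(tools.status_code)
```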


The rationale for not staying with a hard, across-the-board interpretation of the original November 2024 SSE specification includes the drain on servers supporting potentially hundreds of MCP clients; each persistent long-term connection has a computational cost. Another factor is the memory cost of networking equipment tracking the state of numerous, often dormant TCP connections.


MCP Support - Distributed Cloud Provides AI Access to Remote Databases

Databases are among the most critical of enterprise resources; just imagine losing access to customer lists or supplier contacts, let alone the thought of non-authorized tools gaining access to these data goldmines. To safely enable AI LLMs and MCP clients in one site to consider the database table contents of another site, towards the goal of answering employee queries, we have harnessed F5 Distributed Cloud. Specifically, using HTTP/HTTPS distributed load balancers, we can project access to a database MCP server across a secure network to an MCP client supporting Claude Desktop.

At the time of this writing, Claude Desktop community edition is limited to local MCP servers, through stdio. As such, a Python-based stdio-to-SSE proxy was invoked to allow networked MCP traffic. A publicly available MCP server was utilized to leverage a Postgres database, exposing remote MCP-enabled tools that magnify the efficacy of the AI solution.


The following images show some sample tables of an enterprise database located in a Seattle office, including the logically named product and supplier tables, which are linked and provide sample data enabling the day-to-day operations of a fictitious boutique apparel distributor. Due to the criticality of databases, many will be instantiated not on direct-attached storage but rather on an enterprise-grade NAS or SAN solution. In this example, the Postgres database tables are stored on NetApp ONTAP appliance volumes in the Seattle branch.

In our simple example, the product table contains merchandise, including selling price, inventory, and supplier ID values. The supplier ID values are mapped to supplier names in another database table.
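The cross-table join the AI ultimately exercises can be pictured with a few lines of Python. The table and column names below are illustrative guesses matching the description above, not the lab's exact schema:

```python
import psycopg2  # assumes: pip install psycopg2-binary

# Connection parameters and schema names are assumptions for illustration.
conn = psycopg2.connect(host="localhost", dbname="boutique", user="postgres")
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT p.product_name, p.selling_price, p.inventory, s.supplier_name
        FROM   product  p
        JOIN   supplier s ON s.supplier_id = p.supplier_id
        ORDER  BY p.inventory;
    """)
    for row in cur.fetchall():
        print(row)
```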


Allowing AI to Remotely and Securely Access our Enterprise Data

Our topology for this setup looked like the following. In our case, NFS was used as the protocol to leverage ONTAP.

The HTTP load balancer was easy to set up in the F5 Distributed Cloud console. The origin pool in the above topology is the Seattle-area (Redmond, Washington) office, on an Ubuntu server, which used TCP port 8080 locally. The ability to isolate the MCP server/database and the San Jose Claude Desktop, in this approach, relies upon a customer edge (CE) node being implemented in both the Redmond and San Jose offices; CEs in Distributed Cloud are frequently just called "sites" for simplicity.

The HTTP distributed load balancer that will publish the service availability out of the inside interface of the San Jose site (CE node) is shown below. Many domain names can be associated with the service; in this case, Claude Desktop's MCP client will reach out to "ubuntu-mcn-sg-1" on TCP port 8080.

XC could easily advertise this service wherever the company chooses, including to the entire Internet via public DNS. In this case, however, we only want the service used by our Claude Desktop. As such, we make only a specific San Jose subnet, reachable from the inside CE interface, a consumption point for the MCP service.

Only one change to the default XC load balancer behavior was made. The MCP server used was built against the original SSE specification; as such, there can be long idle periods between MCP server event messages intended for transmission to the MCP client. As seen below, we adjusted the XC load balancer to tolerate idle periods of up to 90 seconds on the SSE connection before shutting the connection down.

As mentioned earlier, streamable HTTP is becoming the networking approach of the future; stateless and stateful approaches to MCP are coming online that avoid the need for long-lived connections.


Illustrated Examples of MCP-enabled AI Securely Leveraging NetApp Enterprise Databases

Using the chat interface of Claude Desktop, we configure the MCP setup in "Settings". Note that using the stdio-to-SSE proxy, we simply need to provide the domain name of the Seattle-area MCP server, the local TCP port to use, and the API endpoint (in this case "/sse").
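Because the community edition only launches local stdio servers, that Settings entry effectively wraps the proxy. Below is a sketch of what the resulting claude_desktop_config.json might look like, assuming the Python mcp-proxy package; the exact command and flags vary by proxy tool, and the hostname mirrors the load balancer domain used in this article:

```json
{
  "mcpServers": {
    "postgres-remote": {
      "command": "mcp-proxy",
      "args": ["http://ubuntu-mcn-sg-1:8080/sse"]
    }
  }
}
```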

At this point, we are free to use the solution: Claude Desktop augmented with the toolset of the discovered MCP server. In this case, support for querying Postgres database tables is realized, and employees now have an AI "speech-to-SQL" experience that leverages their enterprise data.

It was not necessary to provide Claude Desktop with hints that the answers would not be found within its training data; it knew enough to utilize the MCP protocol, and then how to act upon the discovered MCP tools.

If we examine the packet trace, the SSE channel carries the tools that Claude utilizes above. The first image below shows the tools described in raw ASCII, highlighted in yellow. The ensuing image shows that, when decoded in a JSON viewer, there are six tools listed, with some fields of interest highlighted.
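For orientation, tool advertisements follow the MCP tools/list result shape. A trimmed, partly hypothetical rendering of one of the six entries might look like the following; the tool name matches the trace, while the description and schema are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "tools": [
      {
        "name": "query_postgres1",
        "description": "Run a read-only SQL query against the Postgres database",
        "inputSchema": {
          "type": "object",
          "properties": { "sql": { "type": "string" } },
          "required": ["sql"]
        }
      }
    ]
  }
}
```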


Observations on Networked MCP

MCP is a widely discussed, hot-button topic; new articles are published online weekly, and the following serves only as a high-level overview. When using a tool like Claude Desktop, the MCP client portion is pre-packaged to supplement the AI, so one only needs to be concerned with providing the MCP server component. Sample MCP servers are widely available across several broad and interesting public repositories.

Three potential tasks of an MCP server, in no particular order, are listed below; a minimal server sketch follows the list.

  • Providing prompt templates that a client may invoke, allowing users and LLMs to collaborate efficiently; potential prompts can be provided with pre-filled default values.
  • Allowing access to resources; think of static files or other non-dynamic content.
  • The most discussed: tool access and tool usage instructions. This can allow an AI to be complemented with up-to-date data (today's high temperature in Frankfurt) or with access to proprietary data such as enterprise sales reports, as just two simple examples.
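Here is a minimal sketch of a server exposing all three, assuming the official MCP Python SDK's FastMCP helper; the names and stub logic are illustrative only:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

# 1. A prompt template the client may invoke, with a pre-filled default value.
@mcp.prompt()
def summarize_sales(region: str = "EMEA") -> str:
    return f"Summarize this quarter's sales figures for {region}."

# 2. A resource: static, non-dynamic content.
@mcp.resource("docs://usage-notes")
def usage_notes() -> str:
    return "Internal usage notes for the sales database."

# 3. A tool: a stub here; a real tool would call a live weather API.
@mcp.tool()
def forecast_high(city: str) -> str:
    return f"Today's forecast high for {city}: 24 C"

if __name__ == "__main__":
    mcp.run(transport="sse")  # or "stdio" for co-located use
```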

Let us examine one single annotated MCP tool invocation from the Claude Desktop MCP client; a reconstructed sketch of the request payload follows the numbered list.

  1. We observe in this tool usage that the MCP client POSTs a request to the /message endpoint, utilizing a "sessionID" value assigned by the MCP server during the original MCP connection setup (to the /sse endpoint).
  2. The tool command is carried as payload ("method":"tools/call") and, in this particular example, references the "query_postgres1" tool name and a list of arguments to shape the tool's usage.
  3. The MCP server, after interaction with the Postgres database, has returned, in JSON format, data taken directly from the sales database, with inventory and pricing fields.
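Reconstructed from the trace, the POST body is ordinary JSON-RPC 2.0. The argument key and SQL text below are illustrative, since only the tool name and the presence of arguments are visible in the capture:

```json
{
  "jsonrpc": "2.0",
  "id": 42,
  "method": "tools/call",
  "params": {
    "name": "query_postgres1",
    "arguments": {
      "sql": "SELECT product_name, inventory, selling_price FROM product;"
    }
  }
}
```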

The above packet trace was viewed through the open-source Wireshark utility, which leverages libpcap files to analyze raw packet traffic for a richer understanding of protocols. Since F5 Distributed Cloud is a full in-line proxy, this is advantageous: the solution itself offers a wealth of capture points for analysis. Every customer edge (CE) node has the built-in ability to generate libpcap files using the built-in tcpdump utility, something NetOps teams can likely use regularly.


The following shows the simple workflow for generating packet captures. In this case, we are simply trying to capture health-check traffic in the Redmond, WA, office that ensures our MCP server is up and responsive. Our health checks use TCP port 80, and we have asked for 20 packets to be captured over a maximum of 120 seconds. All is healthy, as the checks are soliciting 200 OK responses from the server.
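For readers reproducing this from a shell rather than the console workflow, a roughly equivalent invocation of the same underlying utility might look like the following; the filter and file name are assumptions:

```bash
# Capture up to 20 packets of TCP port 80 health-check traffic, giving up
# after 120 seconds, and write a pcap file for later Wireshark analysis.
sudo timeout 120 tcpdump -i any -c 20 -w mcp-healthcheck.pcap 'tcp port 80'
```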

General, “at a glance” monitoring of MCP traffic is also available in the HTTPS load balancer dashboard. Here we see a traffic summary over time, including rich details showing the HTTP verb used by MCP (GET or POST), the response code, and valuable latency numbers for each transaction.


Summary of MCP Findings

In the case of Claude Desktop, with its integrated MCP client, very general inquiries led it to automatically rely upon the available MCP tools to generate meaningful answers. Without mentioning MCP in the AI chatbot query, the solution made use of the discovered tools, and database tables were harnessed to answer product questions. The solution was also able to combine knowledge from separate tables, through items like supplier ID columns, to answer user requests correctly with data spread across tables.


The MCP server that was used supports both Postgres and MySQL; Postgres was investigated strictly based upon its larger installed base within modern enterprises. MCP servers also exist for semi-structured and unstructured databases, for example MongoDB.


To create a NetApp-instantiated Postgres deployment, the general steps followed are listed below, with a consolidated shell sketch after the list:

  • Install Postgres on Ubuntu. Instructions at https://www.postgresql.org/download/linux/ubuntu/, culminating in sudo apt -y install postgresql
  • Database management may be easier using the pgAdmin GUI, which can be downloaded at https://www.pgadmin.org/download/pgadmin-4-apt/
  • Provision the NFS export on the ONTAP appliance; this is where the database contents will live and be secured.
  • Ensure the NFS mount point is persistent across reboots via /etc/fstab: nfs_server_ip:/nfs_share_path /mnt/nfs_postgres nfs defaults 0 0
  • After stopping the initial automatic start of Postgres, edit /etc/postgresql/<version>/main/postgresql.conf to set data_directory to '/mnt/nfs_postgres' (or whatever the mount path is)
  • Set permissions so the postgres system user can access the data, then restart Postgres
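A consolidated shell sketch of those steps on Ubuntu follows; the <version> placeholder and paths must be adapted, and the commands are illustrative rather than a verified runbook:

```bash
# Install Postgres, then stop it before relocating its data directory.
sudo apt -y install postgresql
sudo systemctl stop postgresql

# Mount the ONTAP NFS export and make it persistent (requires nfs-common).
sudo mkdir -p /mnt/nfs_postgres
echo 'nfs_server_ip:/nfs_share_path /mnt/nfs_postgres nfs defaults 0 0' | sudo tee -a /etc/fstab
sudo mount -a

# Move the existing data files onto the NFS volume and fix ownership.
sudo rsync -a /var/lib/postgresql/<version>/main/ /mnt/nfs_postgres/
sudo chown -R postgres:postgres /mnt/nfs_postgres

# In /etc/postgresql/<version>/main/postgresql.conf set:
#   data_directory = '/mnt/nfs_postgres'
sudo systemctl start postgresql
```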


Although this article demonstrates F5 Distributed Cloud HTTPS load balancers for secure, remote MCP communications, one could also use the F5 Distributed Cloud Network Connect module to allow secure layer 3 connectivity between MCP clients and servers. In that approach, the MCP client in San Jose could have reached the distant Seattle MCP server and Postgres solution over the shared cloud global fabric, with routing table updates set up automatically.


One added benefit for solutions implemented not with streamable HTTP but rather with the original SSE specification, as in this article: the long-lived SSE connection would stay established with no timeout adjustment, since the solution operates at layer 3 and is not concerned with lengthy layer 4 connections.
