Enterprise AI in Control: NetScaler AI Gateway’s Approach to Governed, Scalable AI Adoption

The Tolly Group
March 26, 2026
5 min read

The odds of an enterprise AI project succeeding are better than they were two years ago, but they are still not good. A major analyst firm projects that 35% of AI initiatives will be abandoned by 2029, a figure that has improved from a 60% abandonment estimate in 2024. That is meaningful progress, though it also underscores that deploying AI at scale remains genuinely difficult.

To understand how organizations can tilt the odds in their favor, The Tolly Group recently spoke with Brian Huhn, who manages Product Marketing for NetScaler and NetScaler AI Gateway at Citrix Systems Inc. The conversation centered on a challenge that often goes unaddressed: the gap between getting an AI model to work and getting it to work reliably, securely, and cost-effectively across an enterprise.

Why AI Projects Stall

Huhn is candid about the causes of failure. Governance gaps, unpredictable ROI, and operational inefficiency are the three culprits he encounters repeatedly, and they tend to reinforce one another.

AI projects carry significant costs at every level, and when organizations cannot clearly see how AI usage translates to business value, initiatives lose sponsorship. The trouble often starts before the budget conversation. Many enterprises build AI policies and guardrails application by application, which creates a scaling problem that eventually becomes unmanageable. As Huhn explained, "If you're building out guardrails and having to custom fit what your AI traffic rules are on a per-app basis, that can be very difficult to scale out. Having a central control is a huge saving on that OpEx side."

NetScaler AI Gateway addresses these gaps by sitting at the traffic layer and providing governance, cost visibility, and performance management from a single control plane. The aim is not just to deploy AI, but to run it as a sustainable business capability.

One Control Plane, Not a Patchwork

Most enterprises accumulating AI workloads end up with a fragmented stack: separate gateways, API managers, monitoring tools, and security layers. NetScaler AI Gateway consolidates those controls into the same software-defined platform already managing application and gateway traffic across the business.

Engineers work within one interface rather than learning multiple systems. IT organizations apply AI controls using infrastructure they already operate and trust, rather than standing up a parallel stack with its own costs and management burden. That consolidation also means every AI policy change, rate limit, or security update propagates from one place rather than being replicated across disparate tooling.

The Token Cost Problem

One of the more underappreciated cost drivers in enterprise AI is token consumption. Most organizations track request volumes but not the actual computational cost behind each request. That gap matters because LLM cost is directly proportional to token usage, not just the number of calls.

NetScaler AI Gateway manages traffic at the token level. IT teams can set rate limits by user or by application and access real-time usage data. Analytics can be exported to observability platforms including Splunk, Grafana, and Elasticsearch, so teams can tune policy over time and prevent cost spikes without blocking legitimate work. As Huhn put it, operational efficiency and ROI are two sides of the same coin.

MCP: The New Integration Standard

As AI agents move beyond conversational interfaces and into operational workflows, the question of how those agents connect to enterprise systems becomes pressing. The Model Context Protocol (MCP) is emerging as a standard answer, providing a consistent, structured way for agents to query data and trigger actions without requiring fragile custom integrations for every system.

NetScaler has released an MCP Server for NetScaler Console. It gives AI agents a governed entry point into operational data, allowing queries about deployment status, performance, or license details through a natural language interface without granting direct system access. Questions like "What is the performance looking like in my EMEA load balancers?" become answerable through an AI agent rather than requiring manual dashboard navigation.
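Under the hood, MCP frames agent-to-server traffic as JSON-RPC 2.0, with methods such as `tools/call` for invoking a named tool. The sketch below builds one such request; the tool name and arguments are hypothetical, since the actual tools exposed by the MCP Server for NetScaler Console may differ.

```python
import json


def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP `tools/call` request using JSON-RPC 2.0 framing.

    The tool name and argument schema are assumptions for illustration.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })


# An agent answering "What is the performance looking like in my EMEA
# load balancers?" might translate that question into a call like this:
msg = mcp_tool_call(1, "get_load_balancer_stats", {"region": "EMEA"})
```

The point of the standard is that the agent never needs direct system access: it sends structured requests like this one, and the MCP server decides what data comes back.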

The roadmap extends further. A future MCP Server for the NetScaler ADC itself is in development, which would allow agents to move from read-only queries to taking action on the ADCs directly. That progression, from information retrieval to automated execution, is where the real operational leverage of MCP will be realized.

Securing AI's New Attack Surface

LLMs and MCP servers are exposed through HTTP interfaces, which means they inherit traditional web risks alongside new AI-specific attack vectors. Prompt injection, where a malicious input manipulates a model to leak data or bypass controls, is one of the more significant threats organizations now face.

NetScaler's WAF has been updated with signatures specific to LLM endpoints, inspecting requests before they reach model or execution layers. For organizations seeking an additional layer of protection, NetScaler AI Gateway integrates with specialized vendors as well, including Enkrypt AI and its LLM Firewall. The approach reflects a practical philosophy: the WAF provides a solid foundation, and layering specialist tools on top strengthens overall security posture without displacing existing investments.
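The signature-matching idea can be sketched in a few lines. The patterns below are toy examples we chose for illustration; production WAF rule sets are vendor-maintained and far more sophisticated, so treat this only as a picture of the inspect-before-forward pattern.

```python
import re

# Toy injection signatures (assumptions, not NetScaler's rule set).
INJECTION_SIGNATURES = [
    re.compile(r"ignore (all|any|previous) (prior |previous )?instructions", re.I),
    re.compile(r"reveal (your )?system prompt", re.I),
    re.compile(r"disregard (the )?guardrails", re.I),
]


def inspect_prompt(prompt: str) -> bool:
    """Return True if the prompt matches a known injection signature.

    In a gateway deployment this check runs on the request before it
    ever reaches the model or any execution layer.
    """
    return any(sig.search(prompt) for sig in INJECTION_SIGNATURES)
```

A layered deployment would run checks like this at the WAF first, then hand surviving traffic to a specialist tool such as an LLM firewall for deeper semantic analysis.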

Practical Entry Points

Huhn identifies financial services, healthcare, and government as the verticals where AI Gateway delivers the most immediate value, particularly where regulatory requirements demand tight control over what models can produce and what data they can access.

The use cases are concrete. A healthcare organization deploying an LLM for appointment booking or test result explanations needs policy controls that prevent the model from responding to unrelated queries. LLM redaction matters in both directions: blocking PII or PHI from entering prompts sent to external models, and preventing models from returning protected information in responses. For regulated environments, that combination of prompt governance and data redaction is not optional; it is what makes deployment possible.
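The bidirectional redaction pattern can be sketched simply: run the same scrubber over the outbound prompt and the inbound response. The two patterns below (a US SSN and an email address) are toy examples; any production redaction engine covers far more PII/PHI categories than this.

```python
import re

# Toy PII patterns for illustration only.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}


def redact(text: str) -> str:
    """Replace detected PII with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text


# Applied in both directions: once to the prompt before it leaves the
# enterprise, and once to the model's reply before it reaches the user.
safe_prompt = redact("Book a visit for jane@example.com, SSN 123-45-6789")
```

Running the same function on both legs of the exchange is what closes the loop: data cannot leak out through prompts, and protected records cannot leak back in through responses.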

The Intersection of AI and Application Delivery

The application delivery space has been called a commodity for years, but AI is changing the calculus. NetScaler's position at the heart of the traffic pathway has always made it a critical part of the value chain. AI workloads do not change that fact. They reinforce it.

The convergence of AI orchestration, application delivery, and security is taking shape at the traffic layer, where decisions about routing, inspection, and policy are made in real time. Organizations that incorporate AI governance into that layer from the outset, rather than retrofitting controls after deployment, will find the path to scalable and measurable AI adoption considerably more straightforward than those that treat governance as a secondary concern.

Key Takeaways

  • A major analyst firm projects that 35% of AI projects will be abandoned by 2029, driven by governance gaps, unpredictable ROI, and operational inefficiency

  • NetScaler AI Gateway provides centralized governance and user-level controls, optimized LLM performance, and rich analytics for visibility

  • Token-based rate limiting and analytics exports give teams the data needed to control AI spend without restricting productive use

  • MCP Server for NetScaler Console enables AI agents to securely query operational data through governed, structured interfaces

  • Updated WAF signatures extend application security principles to LLM endpoints and MCP servers, with optional integration of dedicated LLM firewall tools

  • Regulated industries including healthcare, financial services, and government are best positioned for immediate value from prompt governance and data redaction capabilities

Learn More

Visit netscaler.com for detailed information about NetScaler AI Gateway and connect with Brian Huhn on LinkedIn for deeper discussions about enterprise AI governance strategies.