What is an MCP Server?

What is an MCP server?

An MCP server is a lightweight program that exposes data, tools, or capabilities to an AI assistant through the Model Context Protocol (MCP), an open standard introduced by Anthropic in November 2024 that defines how AI models communicate with external systems.

Before MCP, connecting an AI assistant to an enterprise system (a database, a file repository, a business application) required custom integration code written separately for every combination of AI tool and data source. The result was a fragmented ecosystem of proprietary connectors, inconsistent security implementations, and significant maintenance overhead. MCP solves this through standardization: any MCP-compatible AI can connect to any MCP server without custom integration code, using the same universal protocol regardless of what system sits on the other end.

The analogy most commonly used is "USB-C for AI": a single, universal interface that replaces the proprietary connectors between AI tools and the real-world systems they need to be useful. Amazon, Microsoft, Google, and OpenAI have all adopted MCP, making it the de facto connectivity standard for agentic AI in enterprise environments.

How does an MCP server work?

MCP uses a three-part architecture built around a host, a client, and one or more servers. Understanding how these components interact is essential to understanding both the capabilities and the security implications of the protocol.

The three components:

  • MCP Host — the application the user interacts with directly, such as Claude Desktop, an IDE like Cursor, or an enterprise AI assistant. The host manages the overall environment, mediates between the AI model and the MCP client, and controls which servers the client is permitted to connect to.
  • MCP Client — a component that lives inside the host application and manages the connection to a specific MCP server. One host can run multiple clients, each connected to a different server simultaneously.
  • MCP Server — the program that sits on the other side of that connection, exposing specific capabilities to the AI. The AI model itself never communicates directly with the MCP server; all communication is mediated through the host and client layer.

What MCP servers expose:

MCP servers offer three types of capabilities to connected AI clients:

  • Tools — executable functions the AI can invoke to take action, such as running a database query, sending a message, creating a file, or calling an external API. Tools represent active, potentially state-changing operations.
  • Resources — data or content the AI can read, such as files, database records, documentation, or live data feeds. Resources provide context without necessarily triggering an action.
  • Prompts — pre-built prompt templates stored on the server that help the AI understand how to interact with the connected system. These can include guardrails, formatting instructions, or domain-specific context.
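The three capability types above can be sketched as a simple in-memory registry. This is an illustrative model only, not the MCP SDK: real servers declare tools, resources, and prompts through the protocol's listing methods, and every name below (the tool, the resource URI, the prompt template) is invented for the example.

```python
# Hypothetical sketch: the three MCP capability types as a plain registry.
# Names and contents are invented; a real server declares these over the
# protocol rather than in a module-level dict.
server_capabilities = {
    "tools": {
        # Executable functions: active, potentially state-changing.
        "create_ticket": lambda title: {"id": 101, "title": title},
    },
    "resources": {
        # Readable context: provides data without triggering an action.
        "docs://runbook": "Restart the service before escalating.",
    },
    "prompts": {
        # Reusable templates that guide how the model uses the system.
        "summarize_ticket": "Summarize ticket {id} in two sentences.",
    },
}

def list_capabilities(kind: str) -> list[str]:
    """Return the names a client would discover for one capability type."""
    return sorted(server_capabilities[kind])
```

The split matters for security review: "tools" are the entries that can change state, so they deserve the closest scrutiny when auditing a server.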

How a request flows through the system:

When a user asks an AI assistant a question that requires external data or action ("What is the current status of this support ticket?" or "Run this SQL query against the production database"), the request flows from the host to the MCP client, which queries the relevant MCP server. The server checks what the user has access to, retrieves or executes the appropriate resource or tool, and returns a structured response. The host passes that response to the AI model, which incorporates it into its reply.
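On the wire, this flow is carried as JSON-RPC 2.0 messages, which MCP builds on. The sketch below constructs the kind of request a client would send to invoke a tool; the `tools/call` method name follows the published spec, but the tool name and arguments are invented for illustration.

```python
import json

def build_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Serialize a JSON-RPC 2.0 request of the shape an MCP client sends.

    MCP is layered on JSON-RPC 2.0; "tools/call" is the spec's method for
    invoking a tool. The tool name and arguments here are hypothetical.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Example request for the hypothetical support-ticket question above.
msg = build_tool_call(1, "get_ticket_status", {"ticket_id": "T-123"})
```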

This architecture is what makes modern agentic AI possible: AI systems that do not just generate text based on training data, but reason about real-world context, retrieve live information, and take autonomous actions across connected systems. An MCP-enabled AI can run SQL queries against a live database, read files from a local filesystem, create and update records in a business application, and chain multiple tool calls together in a single workflow — all through natural language instructions from the user.

Transport mechanisms:

MCP supports two primary transport methods for communication between clients and servers. The first is Standard Input/Output (stdio), used for local MCP servers that run on the same machine as the host application. The second is HTTP with Server-Sent Events (SSE), used for remote MCP servers that are accessed over a network. Local servers run as subprocesses and communicate via direct I/O; remote servers expose HTTP endpoints and maintain persistent connections for streaming responses. The choice of transport has direct security implications — local servers have a smaller network exposure but can still execute arbitrary code, while remote servers introduce network-level attack surface and authentication requirements.
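The stdio transport can be sketched with a subprocess exchanging newline-delimited JSON messages, which is the shape local MCP servers use. The inline "server" below just echoes a response so the example is self-contained; a real server would dispatch on the JSON-RPC method.

```python
import json
import subprocess
import sys

# Hypothetical stdio-transport sketch: the host spawns the server as a
# subprocess and exchanges one JSON message per line over stdin/stdout.
# This toy server only echoes an empty result for any request.
echo_server = (
    "import sys, json\n"
    "for line in sys.stdin:\n"
    "    req = json.loads(line)\n"
    "    print(json.dumps({'jsonrpc': '2.0', 'id': req['id'], 'result': {}}))\n"
    "    sys.stdout.flush()\n"
)

proc = subprocess.Popen(
    [sys.executable, "-c", echo_server],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
proc.stdin.write(json.dumps({"jsonrpc": "2.0", "id": 1, "method": "ping"}) + "\n")
proc.stdin.flush()
response = json.loads(proc.stdout.readline())
proc.stdin.close()
proc.wait()
```

Note the security point from the paragraph above in miniature: the "server" here is arbitrary code running with the host user's privileges, even though it never touches the network.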

What security risks do MCP servers introduce?

MCP’s design prioritizes ease of integration and broad compatibility, and it explicitly does not enforce security at the protocol level. The official MCP specification states that security responsibility rests entirely with the implementation, not the protocol itself. This architectural choice makes MCP fast and flexible to adopt, but it also means that organizations deploying MCP servers inherit the full burden of securing them, often without clear guidance on how to do so.

The security community has identified several distinct risk categories that warrant serious attention.

Prompt injection and tool poisoning

Because MCP servers can store prompt templates and expose them to the AI model, a compromised or malicious MCP server can inject instructions into the AI’s reasoning process that the user never sees and never approved. A malicious server could instruct the AI to write insecure code, ignore certain user requests, exfiltrate data to an external endpoint, or modify database records without user consent, all while presenting a normal-looking response to the user.

Indirect prompt injection is a related and particularly insidious variant. An attacker does not need to compromise the MCP server directly; they can embed malicious instructions inside content the AI is asked to process — an email, a document, a web page. When the AI reads that content through an MCP-connected tool, the hidden instructions are executed as if they came from the user. Traditional security boundaries between viewing content and executing actions break down entirely in this model.
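A crude defensive heuristic is to scan retrieved content for instruction-like phrases before it reaches the model. The patterns below are invented and trivially bypassable; the sketch illustrates the shape of the problem, not an adequate defense.

```python
import re

# Hypothetical heuristic scanner for indirect prompt injection. These
# patterns are illustrative only -- attackers can and do evade keyword
# lists, so this belongs alongside, not instead of, stronger controls.
SUSPECT_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"do not (tell|inform) the user",
    r"send .* to https?://",
]

def flag_injection(text: str) -> list[str]:
    """Return the patterns that match, for logging or quarantining."""
    return [p for p in SUSPECT_PATTERNS if re.search(p, text, re.IGNORECASE)]
```

Anything flagged can be quarantined for human review instead of being passed to the model as trusted context.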

The confused deputy problem

MCP servers typically authenticate with external services using their own credentials (OAuth tokens, API keys, service account permissions) rather than passing through the individual user’s identity. When the server executes an action in response to a user request, it does so using its own elevated permissions, not the user’s permissions. If access controls are not implemented correctly, this creates a confused deputy vulnerability: a lower-privileged user can trigger a higher-privileged action by routing a request through the MCP server, effectively escalating their own access.

The MCP protocol does not inherently carry user context from the host to the server, so the server may have no way to differentiate between users and may apply the same access level to every request it receives. This makes least-privilege enforcement both critical and difficult.
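One mitigation is to propagate user identity to the server and check a per-user access list before any tool runs, rather than letting the server's own elevated credentials decide everything. The users, ACL, and tool names below are invented for illustration.

```python
# Hypothetical confused-deputy mitigation: the server checks the calling
# user's entitlements before executing, instead of relying solely on its
# own service-account privileges. All names here are invented.
ACL = {
    "alice": {"read_ticket"},
    "bob": {"read_ticket", "delete_ticket"},
}

def call_tool(user: str, tool: str) -> str:
    allowed = ACL.get(user, set())
    if tool not in allowed:
        # Deny rather than silently running with the server's privileges.
        raise PermissionError(f"{user} may not invoke {tool}")
    return f"executed {tool} as {user}"
```

The key design choice is that the deny path is the default: an unknown user gets an empty entitlement set, not the server's own permissions.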

Token and credential aggregation

MCP servers are high-value targets precisely because they aggregate authentication tokens for multiple services in one place. A single compromised MCP server may hold OAuth tokens for email, file storage, databases, calendars, code repositories, and business applications simultaneously. An attacker who gains access to those tokens can operate across all connected services while appearing as legitimate API traffic — without triggering the suspicious login alerts that a traditional credential compromise might generate.

Supply chain risk

Anyone can build and publish an MCP server. With hundreds of open-source MCP servers available for download, the supply chain risk mirrors what the industry has seen with browser extensions and npm packages: malicious actors publish servers that impersonate legitimate integrations, either through typosquatting or by seeding seemingly useful servers with hidden malicious functionality. The first confirmed malicious MCP package appeared in September 2025 and operated undetected for two weeks while exfiltrating email data. Unlike a standard software package, an MCP server runs with access to live credentials and connected enterprise systems; the blast radius of a compromised package is substantially larger than that of a typical dependency vulnerability.

Insufficient logging and auditability

The current MCP ecosystem largely lacks standardized audit logging. Without a complete, verifiable record of every tool invocation, data access, and action taken by an AI agent through an MCP server, organizations cannot reconstruct what happened during a security incident, cannot demonstrate compliance with data governance requirements, and cannot hold anyone accountable for AI-driven actions. This is a significant gap in highly regulated industries where data access must be auditable by design.
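In the absence of a standardized audit format, one workable pattern is a hash-chained log: each entry's digest covers the previous entry's digest, so altering any record breaks verification of everything after it. The event fields below are invented for illustration.

```python
import hashlib
import json

# Hypothetical tamper-evident audit trail for tool invocations. Each
# entry's hash chains over the previous hash plus the event payload.
def append_entry(log: list, event: dict) -> None:
    prev = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"event": event, "hash": digest})

def verify_chain(log: list) -> bool:
    """Recompute every link; any edited record breaks the chain."""
    prev = "0" * 64
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

A log like this supports the two needs the paragraph identifies: reconstructing what an agent did during an incident, and demonstrating to auditors that the record was not edited after the fact.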

How can organizations use MCP servers safely?

Securing MCP implementations is achievable, but it requires treating MCP servers as production infrastructure with enterprise-grade controls—not as developer conveniences that can be deployed and forgotten.

Apply least-privilege permissions rigorously

Every MCP server should be granted only the minimum permissions required to perform its defined function. Service accounts used by MCP servers should be scoped tightly, credentials should be short-lived where possible, and permissions should be reviewed regularly as the server’s role evolves. Over-permissioned MCP servers are one of the most common and consequential misconfigurations in current deployments.
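Two of the controls above, tight scoping and short-lived credentials, can be sketched as a grant object that carries only named scopes and an expiry. The scope strings and TTL are invented; real deployments would issue these through their identity provider.

```python
# Hypothetical least-privilege sketch: a credential grant that names its
# scopes explicitly and expires on a clock, so stale or over-broad grants
# fail closed. Scope names and lifetimes are illustrative.
def issue_grant(scopes: set[str], ttl_seconds: int, now: float) -> dict:
    return {"scopes": set(scopes), "expires_at": now + ttl_seconds}

def authorize(grant: dict, scope: str, now: float) -> bool:
    """Allow only unexpired grants that explicitly include the scope."""
    return now < grant["expires_at"] and scope in grant["scopes"]
```

Passing the clock in as a parameter keeps the sketch deterministic; in production the expiry check would use real time and the grant would be re-issued, not extended.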

Enforce strong authentication and transport security

Remote MCP servers must require authenticated connections using OAuth 2.0 or equivalent, with credentials stored outside the AI model’s context window. All communication between clients and remote servers should use TLS. For server identity verification, cryptographic signatures allow clients to confirm they are connecting to a legitimate server rather than an impersonator.
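The server-identity idea can be sketched with stdlib primitives: the client verifies a signature over the server's manifest using a key shared out of band. A real deployment would use asymmetric signatures issued through a PKI rather than a shared HMAC key, but the verification shape is the same.

```python
import hashlib
import hmac

# Hypothetical server-identity check. The shared key stands in for real
# key material; production systems would prefer asymmetric signatures.
def sign_manifest(manifest: bytes, key: bytes) -> str:
    return hmac.new(key, manifest, hashlib.sha256).hexdigest()

def verify_manifest(manifest: bytes, signature: str, key: bytes) -> bool:
    """Constant-time check that the manifest matches its signature."""
    return hmac.compare_digest(sign_manifest(manifest, key), signature)

key = b"out-of-band shared key (illustrative)"
manifest = b'{"name": "tickets-prod", "version": "1.2.0"}'
sig = sign_manifest(manifest, key)
```

An impersonating server can copy a legitimate server's name and manifest contents, but without the key it cannot produce a signature that verifies.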

Validate and sanitize all inputs

MCP servers that execute commands or construct queries based on AI-generated inputs are vulnerable to injection attacks — both from malicious user inputs and from prompt injection in retrieved content. Input validation and sanitization before any command execution or API call is a baseline requirement, not an optional hardening step.
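For query-building tools, the baseline defense is parameterization: the AI-supplied value is bound as data and can never change the statement's structure. The table and values below are invented; the sketch uses an in-memory SQLite database so it is self-contained.

```python
import sqlite3

# Hypothetical sketch: never splice AI-generated values into SQL text.
# Schema and data are invented for the example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickets (id TEXT, status TEXT)")
conn.execute("INSERT INTO tickets VALUES ('T-123', 'open')")

def ticket_status(ticket_id: str):
    # The ? placeholder binds ticket_id as a value, so an input like
    # "T-123' OR '1'='1" is just a weird ticket ID, not a new predicate.
    row = conn.execute(
        "SELECT status FROM tickets WHERE id = ?", (ticket_id,)
    ).fetchone()
    return row[0] if row else None
```

Parameterization handles structural injection; it does not replace validating that the value itself is sane (length, character set, existence), which still belongs at the tool boundary.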

Require human approval for sensitive actions

For high-risk or irreversible operations—deleting records, sending external communications, modifying access controls, transferring data—MCP workflows should require explicit human confirmation before the action is executed. Autonomous AI action is powerful precisely because it removes friction; that same frictionlessness makes it dangerous for operations where a mistake cannot be undone.
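A human-in-the-loop gate can be as simple as a confirmation callback that high-risk tools must pass before executing. The tool names and risk set below are invented; a real implementation would surface the confirmation in the host application's UI.

```python
# Hypothetical approval gate: tools flagged as high-risk run only after an
# explicit confirmation callback returns True. Tool names are invented.
HIGH_RISK = {"delete_records", "send_external_email"}

def run_tool(name: str, confirm) -> str:
    """confirm is a callable taking the tool name, returning True/False."""
    if name in HIGH_RISK and not confirm(name):
        return f"{name}: blocked (awaiting human approval)"
    return f"{name}: executed"
```

Low-risk tools stay frictionless while the irreversible ones pick up exactly one approval step, which is the trade the paragraph above argues for.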

Audit all MCP servers before deployment

Third-party and open-source MCP servers should be reviewed for security vulnerabilities before being connected to enterprise systems. This includes inspecting source code for malicious behavior, verifying the integrity of dependencies, and confirming that the server implements appropriate access controls. MCP servers should be tracked in the organization’s software inventory and included in vulnerability management processes like any other piece of production software.

Centralize governance and monitoring

Organizations deploying multiple MCP servers should implement a centralized gateway that proxies all MCP communication, enforces an allowlist of approved servers, centralizes access control, and provides a single point of visibility for monitoring and logging. Without centralized governance, MCP servers proliferate across departments without consistent controls, creating blind spots that are difficult to discover and even harder to remediate after the fact.
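The gateway's core behaviors, allowlisting and centralized logging, fit in a few lines. The server names are invented; a production gateway would also terminate TLS, enforce authentication, and apply per-user policy before forwarding.

```python
# Hypothetical gateway sketch: every MCP call passes through one proxy
# that enforces an allowlist of approved servers and records the attempt.
APPROVED_SERVERS = {"tickets-prod", "docs-readonly"}
audit_log: list[tuple[str, str, bool]] = []

def route(server: str, method: str) -> bool:
    """Return whether the call may proceed; log it either way."""
    allowed = server in APPROVED_SERVERS
    audit_log.append((server, method, allowed))  # single point of visibility
    return allowed
```

Because denied attempts are logged too, the gateway doubles as a discovery mechanism for unapproved servers appearing across departments.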

MCP servers represent one of the most significant infrastructure shifts in enterprise AI adoption, quietly becoming the layer through which AI tools access the systems, data, and credentials that organizations depend on. The protocol’s power is real, and so are its risks. Organizations that treat MCP governance as an afterthought are creating an attack surface they cannot see and may not discover until an incident forces the issue. Approached deliberately, with proper access controls, auditing, and supply chain diligence, MCP can be deployed safely, but the protocol will not do that work for you.
