MCP (Model Context Protocol) is an open protocol that lets an AI agent work not only with its own knowledge but also with external tools. For example, it enables the agent to perform web searches, work with documentation, connect to services such as Notion, control a browser, and use the obtained results when generating a response for the user.
An MCP server acts as an intermediary between the AI agent and external services. The agent itself does not know how these services are structured or how to interact with them. However, it knows that the MCP server provides a set of available tools that it can use.
When an AI agent connects to an MCP server, the first thing it does is request the list of available tools. In response, the server returns a description of its capabilities: function names, their purpose, the parameters they accept, and the format in which they return results.
At this stage, a contract is established between the agent and the MCP server. The agent understands which actions are available and can choose which tool to use in a particular situation.

This list of tools can also be retrieved manually with a regular HTTP request. For example, the Exa search MCP server, which provides the AI agent with access to web search, lets you request the list of tools like this:
```shell
curl -s https://mcp.exa.ai/mcp \
-H 'Content-Type: application/json' \
-H 'Accept: application/json, text/event-stream' \
-d '{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }' \
| sed -n 's/^data: //p' \
| jq
```
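The `sed` step above strips the Server-Sent Events framing: with Streamable HTTP, each JSON-RPC message may arrive on a line prefixed with `data: `. A minimal sketch of the same parsing in Python (the sample response body is illustrative, not an actual Exa reply):

```python
import json

def parse_sse_body(body: str) -> list[dict]:
    """Extract JSON-RPC messages from an SSE-framed response body,
    i.e. the lines that start with the "data: " prefix."""
    messages = []
    for line in body.splitlines():
        if line.startswith("data: "):
            messages.append(json.loads(line[len("data: "):]))
    return messages

# Illustrative response body shaped like a tools/list reply:
body = 'event: message\ndata: {"jsonrpc": "2.0", "id": 1, "result": {"tools": []}}\n\n'
print(parse_sse_body(body)[0]["result"])  # -> {'tools': []}
```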
In response, the server returns a list of tools. From this list, you can see which functions are available, what they are intended for, and which parameters they expect. For example, one tool is responsible for web search, another for company research, and a third for searching code and documentation. For each tool, the required input data is explicitly described.
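Each entry in the returned `tools` array follows the shape defined by the MCP specification: a name, a human-readable description, and a JSON Schema for the input. An illustrative entry (the tool name and fields here are an example, not Exa's verbatim output):

```json
{
  "name": "web_search_exa",
  "description": "Search the web and return relevant results",
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": { "type": "string", "description": "The search query" },
      "numResults": { "type": "number", "description": "How many results to return" }
    },
    "required": ["query"]
  }
}
```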
It is important that the AI agent does not “guess” these parameters or invent the request format. It works strictly according to the description received from the MCP server.
After connecting and retrieving the list of tools, the AI agent keeps in mind which tools are available and in which situations they can be used.
A user asks the AI agent a question. If the agent can answer immediately, it simply generates a response. However, if there is not enough information, the agent understands that it needs to use an external tool and sends a request to the MCP server.
You can explicitly instruct the agent to use tools. For example, when connecting Context7, you can add “Use Context7” at the end of the prompt, and the agent will use the tools of that MCP server instead of generating an answer from its own, possibly outdated, knowledge.
Let’s consider interaction with an MCP server using exa search as an example. A user asks the AI agent a question. If the agent realizes that it cannot answer immediately—for example, because it lacks up-to-date information—it decides to use the search tool and sends a request to the MCP server. The MCP server performs the search function in exa search and returns a structured result to the AI agent. The agent then reads the results, selects the key information, and generates a response for the user based on this data.
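Concretely, the agent's request to the MCP server is a JSON-RPC `tools/call` message. A sketch of such a request (the tool name and arguments are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "web_search_exa",
    "arguments": { "query": "latest MCP specification release" }
  }
}
```

The server's reply carries the tool's output in a structured `result`, which the agent then reads and summarizes for the user.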

The MCP server does not respond directly to the user and does not make decisions. It performs specific functions and returns results. All logic, data interpretation, and response generation remain on the AI agent’s side.
There are two types of MCP servers: stdio and Streamable HTTP. Roughly, stdio servers run locally, while Streamable HTTP servers are remote.
Only remote (Streamable HTTP) servers can be connected to agents.
To add a new connection:
After that, the new connection will appear in the list of tools.

To connect a server to a specific agent:


To detach an MCP server from an agent, click Delete MCP Server in the same window.
Multiple MCP servers can be connected to a single agent. One connection can be used by multiple agents.
After connecting an MCP server, the agent will use its functions both when working through the OpenAI-compatible API and in the widget interface.
When using the OpenAI-compatible API, connected MCP servers are applied only if the tools parameter is not explicitly passed in the request.
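For example, a chat-completion request like the following contains no `tools` field, so the agent's connected MCP servers take effect (the model name is a placeholder); if the request included an explicit `tools` array, that array would be used instead:

```json
{
  "model": "my-agent-model",
  "messages": [
    { "role": "user", "content": "What changed in the latest MCP release?" }
  ]
}
```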
Streaming data transfer (SSE) is not currently supported for agents with connected MCP servers.