How We Integrated Model Context Protocol (MCP) into Our Django App
MCPs feel like magic. Internally we use them relentlessly inside Cursor, for Linear issues in particular. We decided to ship an MCP server with Agent Interviews mainly because it made sense to run it on our own product for testing before offering it to customers. We built it quickly and took the path of least resistance at each step, so there may be better ways to do almost everything. That's exactly why we wanted to share our experience; we'd love to hear your feedback.
The headline: we implemented a Model Context Protocol (MCP) server in our Django application, built on top of our existing API endpoints, and got it working with Cursor and Claude 3.7.
Our Starting Point
First, let me set the scene of our stack.
- Frontend in React and TypeScript; backend on Django with the Daphne ASGI server and Django REST Framework
- Locally, we use Docker Compose for development
- In production, we deploy to AWS ECS using multi-container task definitions, with separate clusters for the frontend, backend, and Celery workers
So far so good!
Why We Picked FastMCP
After digging through various MCP implementations, we decided to go with FastMCP.
- It felt naturally Pythonic, which fits nicely with our Django stack
- The API seemed simple compared to alternatives
- It's actively maintained, and we expect MCPs to change quickly
We immediately hit a package conflict that we never fully resolved (it seems to involve Pipecat and websockets), but we worked around it by keeping the FastMCP package out of our main image.
We're definitely happy with the choice so far, particularly once we figured out how to use the all-important Pydantic annotations to make the MCP tool descriptions visible in Cursor.
The Integration Journey
Where Should This Thing Live?
This was quite contentious. FastMCP is an ASGI app, so we first considered embedding it directly in our existing backend, with routing between the two ASGI apps; there are approaches documented here (FastMCP ASGI Deployment). We then looked at running it as a `python manage.py` command, in the same application but as a separate process. Neither of these worked well, raising conflicts and other issues. We also realized that since we planned to wrap our API endpoints as MCP tools, the MCP server didn't need to live in the same application at all.
So instead we went with a separate service, still based on our production ECR image, but running in its own container. We:
- Set up the MCP server as its own runner in both Docker Compose and the ECS task definitions
- Pointed the MCP server at our existing API endpoints with a simple `mcp.run()`
- Used the direct command `pip install fastmcp && fastmcp run server.py` in the container
- Added a health endpoint with `@mcp.custom_route("/health", methods=["GET"])`
This follows the core instructions on the FastMCP GitHub. This separation seemed to do the trick, though it's likely limiting down the road.
Tool Authentication
One issue was authentication. We wanted to reuse our existing API keys, so that our users could authenticate against the MCP server the same way they do against the API. But the MCP server needs to call the API twice: first for the session, and then for each tool invocation. To test this, we scripted a simple MCP client to access the server. The significant part is the two nested `async with` blocks: first we create a client, then a session, and then run the tools query and tool usage. This also seems to be how the Cursor implementation works.
```python
import asyncio

from mcp.client.session import ClientSession
from mcp.client.streamable_http import streamablehttp_client

API_KEY = "your_api_key_here"

async def run_client():
    headers = {"Authorization": f"Api-Key {API_KEY}"}
    async with streamablehttp_client(
        "https://api.agentinterviews.com/mcp/", headers=headers
    ) as (read, write, get_session_id):
        async with ClientSession(read, write) as session:
            await session.initialize()  # perform the MCP handshake first
            projects_result = await session.call_tool("agentinterviews_list_projects")
            print(projects_result)

asyncio.run(run_client())
```
However, it wasn't clear initially how to get the API key into the tool call. It was tempting to add it as a tool parameter, but that created all kinds of side effects. So instead we created a helper function to extract the API key from the HTTP request. There is presumably a better way to do this, but it works for now.
```python
# Helper function to extract the API key from the HTTP request
def get_api_key(context: Context) -> str:
    http_request = context.get_http_request()
    if (
        http_request
        and hasattr(http_request, "headers")
        and "authorization" in http_request.headers
    ):
        auth_header = http_request.headers["authorization"]
        if auth_header.startswith("Api-Key "):
            return auth_header.removeprefix("Api-Key ")
    return ""
```
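The parsing logic is easy to sanity-check without a live server. Below is a self-contained sketch with stand-in `Stub*` classes that mimic just enough of the context API; these stubs are our own illustration, not FastMCP classes:

```python
class StubRequest:
    """Minimal stand-in for an HTTP request with a headers mapping."""
    def __init__(self, headers: dict):
        self.headers = headers

class StubContext:
    """Minimal stand-in for the MCP tool context."""
    def __init__(self, request):
        self._request = request

    def get_http_request(self):
        return self._request

def get_api_key(context) -> str:
    # Same parsing logic as the helper: accept "Api-Key <key>" headers only
    http_request = context.get_http_request()
    if (
        http_request
        and hasattr(http_request, "headers")
        and "authorization" in http_request.headers
    ):
        auth_header = http_request.headers["authorization"]
        if auth_header.startswith("Api-Key "):
            return auth_header.removeprefix("Api-Key ")
    return ""

print(get_api_key(StubContext(StubRequest({"authorization": "Api-Key abc123"}))))
# abc123
```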
To make our MCP integration work well with Cursor, we initially tried using docstrings to pass in the description (which does work), but then discovered that the `description` parameter in the tool decorator plus Pydantic annotations provide much better visibility. We also adopted a naming convention with the `agentinterviews_` prefix for all our tools to avoid conflicts with other MCPs; this seems to be how Linear does it.
```python
from typing import Dict, Any, Annotated

from pydantic import Field

@mcp.tool(description="List all interviewers in the system.")
async def agentinterviews_list_interviewers(
    context: Context,
    params: Annotated[
        Dict[str, Any], Field(description="Filter parameters to narrow down results")
    ] = {},
    limit: Annotated[
        int, Field(description="Maximum number of results to return (default: 10)")
    ] = 10,
) -> Dict[str, Any]:
    api_key = get_api_key(context)
    # ... hit API endpoint with api_key
```
Using Pydantic annotations provides better tooling support in Cursor and makes the parameter descriptions more visible in the MCP tooltip. The prefix naming convention helps prevent conflicts when multiple MCPs with similarly named tools are enabled simultaneously. The annotation for `params` is limited at the moment; there is presumably a better way to auto-generate it, but since the API returns the fields anyway, the tool can use the API response to determine them.
You can do much more here with ToolAnnotations and tagging; that's probably best done while fixing issues that come up in actual usage.
Authentication Headaches in Cursor
It took a while to get this working with Cursor. We ran into authentication issues, particularly with header formatting; note the lack of spaces in `Authorization:${API_KEY}`.
Important Note for Windows Users: Cursor and Claude Desktop on Windows have a known bug where spaces inside args aren't escaped when invoking npx, which ends up mangling these values. You can work around it with:
```json
{
  "mcpServers": {
    "AgentInterviews": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote@latest",
        "https://api.agentinterviews.com/mcp",
        "--header",
        "Authorization:${AUTH_HEADER}"
      ],
      "env": {
        "AUTH_HEADER": "Bearer <auth-token>"
      }
    }
  }
}
```
They ship updates so frequently that these issues are likely fixed by now, but it's worth keeping in mind if you run into authentication problems!
This is the "installation" configuration we ended up with, using mcp-remote (here on npm: mcp-remote) to connect to the MCP server.
```json
{
  "mcpServers": {
    "AgentInterviews": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote@latest",
        "https://api.agentinterviews.com/mcp",
        "--header",
        "Authorization:${API_KEY}"
      ],
      "env": {
        "API_KEY": "Api-Key <key>"
      }
    }
  }
}
```
Running with it
Here is a quick video of us checking in on our latest AI interview by pulling interviewer status from the MCP server.
Future Plans
In our experience, for an AI to find what it needs, it has to be able to look through the API's context recursively: checking lists and hopping back and forth across objects via various endpoints. For that, the MCP server should probably expose a really flexible search tool with usable params and query strings. We may well make that happen soon.
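As a purely illustrative sketch (the record types, field names, and data below are made up, not our actual API), such a flexible search might boil down to matching a free-text query plus arbitrary filter params against objects gathered from several endpoints:

```python
from typing import Any, Dict, List

# Toy records standing in for responses from several API endpoints
RECORDS: List[Dict[str, Any]] = [
    {"type": "project", "name": "Churn study", "status": "active"},
    {"type": "interview", "name": "Churn study #12", "status": "done"},
    {"type": "interviewer", "name": "Ava", "status": "active"},
]

def flexible_search(query: str = "", **filters: Any) -> List[Dict[str, Any]]:
    """Return records whose name contains `query` and that match all filters."""
    results = []
    for record in RECORDS:
        if query.lower() not in record["name"].lower():
            continue
        if all(record.get(key) == value for key, value in filters.items()):
            results.append(record)
    return results

print(flexible_search("churn", status="active"))
# [{'type': 'project', 'name': 'Churn study', 'status': 'active'}]
```

An empty query matches everything, so the same tool doubles as a paginated "list all" when the model is still orienting itself.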
Key Takeaways
- Run the MCP server in its own container: A separate container worked better for us than embedding the server in the Django app
- Authentication for the tools: We extract API keys from the MCP context and pass them through to each tool's call to the existing API endpoints
- Cursor configuration quirks: Be aware of header formatting issues when setting up Cursor with your MCP server
- API wrapping works well: FastMCP makes it easy to wrap the existing API endpoints as MCP tools
- Performance considerations: Our approach adds minimal overhead while leveraging existing infrastructure
- Start small: Begin with a few key endpoints before expanding to the entire API
- Health endpoints help: Adding a health check with `@mcp.custom_route("/health", methods=["GET"])` makes monitoring easier
- Package conflicts happen: Keeping FastMCP out of our main image avoids dependency issues
- Future-proof design: This approach allows us to evolve our MCP implementation as the protocol matures
Thoughts on MCP
Cursor with Claude still seems to struggle with too many tools. We use Linear and HubSpot relentlessly, so to make things work better we often have to turn MCPs on and off to avoid confusing it. The instruction "check MCPs in function tools" seems to force Claude to take the MCP seriously. It's also funny watching "thinking" Claude guess what MCP stands for each time. But this is clearly a long-term future for AIs working with our APIs. It's early days in the AI world, but MCP seems to have won the protocol war!
Model Limitations with MCP
While building our MCP server, we discovered that not all AI models work seamlessly with MCP implementations. Notably, Gemini models have significant limitations when working with MCP servers.
The core issue is that Gemini only supports a select subset of the OpenAPI schema format and chokes on certain JSON Schema properties that MCP servers commonly include. Specifically, Gemini throws errors when it encounters `$schema` keys in the JSON schema returned by MCP servers.
This affects all Google models, not just Gemini 2.5, and can cause your MCP integration to fail completely with errors like:
```
Invalid JSON payload received. Unknown name "$schema" at 'tools.function_declarations[0].parameters': Cannot find field.
```
If you're planning to use your MCP server with Gemini models, you'll need to either:
- Stick with Claude (which works perfectly with standard MCP implementations)
- Use a modified MCP implementation that strips unsupported schema properties for Gemini
- Wait for the underlying libraries to handle this automatically (pydantic-ai has fixed this issue in recent versions)
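If you go the second route, the workaround can be as small as recursively dropping the offending keys from each tool's input schema before handing it to Gemini. A minimal sketch; we only ban `$schema` here, per the error above, but other properties may need the same treatment depending on the model:

```python
from typing import Any

BANNED_KEYS = {"$schema"}  # extend if Gemini rejects other schema properties

def strip_banned_keys(schema: Any) -> Any:
    """Recursively remove JSON Schema keys that Gemini's API rejects."""
    if isinstance(schema, dict):
        return {
            key: strip_banned_keys(value)
            for key, value in schema.items()
            if key not in BANNED_KEYS
        }
    if isinstance(schema, list):
        return [strip_banned_keys(item) for item in schema]
    return schema

tool_schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "type": "object",
    "properties": {"limit": {"type": "integer"}},
}
print(strip_banned_keys(tool_schema))
# {'type': 'object', 'properties': {'limit': {'type': 'integer'}}}
```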
This is just one example of how the MCP ecosystem is still maturing. Different AI models have varying levels of support for the full MCP specification, so it's worth testing your implementation across different models early in development.
Resources
- Model Context Protocol (MCP) – The core spec
- FastMCP GitHub Repo – Perhaps the most Pythonic MCP implementation
- Cursor – Always our first call for MCP usage
- AWS ECS Task Definitions – Critical for our production setup
If you're thinking about adding MCP to your Django app, feel free to reach out.