From CMS to AI middleware: how I connected Drupal's content to a custom MCP server


MCP, or Model Context Protocol, is gaining serious traction as organizations race to improve the performance and responsiveness of AI agents. Behind the scenes, many are building their own MCP servers, custom middleware systems designed to optimize how context is managed between applications and language models. But what exactly is an MCP server, and how does it work?

Before jumping into the technical setup, it’s worth understanding the role of an MCP server. Think of it as an intelligent intermediary. Instead of sending raw prompts directly to a language model, an MCP server receives input from the client, enriches it with relevant context, and then passes it on to the model. It also processes the model’s output before returning it to the client. 

This setup enables more advanced context handling than a direct API call, especially when working with multi-step workflows or agent-based systems.
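The intermediary idea above can be sketched in a few lines. This is a toy illustration only, not the real MCP wire protocol; every name in it (`enrich_with_context`, `postprocess`, `mcp_roundtrip`) is invented for the sketch:

```python
# Toy sketch of the "intelligent intermediary" idea (illustrative names,
# not the real MCP protocol): the server enriches the client's input with
# context before the model sees it, and post-processes the model's output
# on the way back to the client.
def enrich_with_context(prompt: str, context: list[str]) -> str:
    """Prepend retrieved context to the user's prompt."""
    return "\n".join(context) + "\n\nUser: " + prompt

def postprocess(model_output: str) -> str:
    """Clean up the model's raw output before returning it."""
    return model_output.strip()

def mcp_roundtrip(prompt, context, model):
    request = enrich_with_context(prompt, context)   # client -> server
    raw = model(request)                             # server -> model
    return postprocess(raw)                          # server -> client

# A stand-in "model" that just reports how many lines of input it saw
fake_model = lambda req: f"  saw {len(req.splitlines())} lines of input  "
print(mcp_roundtrip("Any lofts in Lisbon?", ["Property: Sea View Loft"], fake_model))
# prints "saw 3 lines of input"
```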

Originally introduced as a standard protocol for bridging clients and language models, MCP now represents a crucial layer for anyone building scalable, high-performance AI systems.

Let’s walk through what it takes to build one from scratch.

Application:

A Drupal website that lists Properties would have fields like Address, Price, Booking Availability Date, Amenities, etc. We expose all this content through JSON:API and then build an MCP server on top of that API.

Later, we connect our MCP server to a client application (here, VS Code Insiders, but any MCP-capable client works, such as Cursor IDE or a Streamlit app) and ask property-related questions in the chat. The client should invoke our MCP tools and return answers.

Implementation:

Let's say we have our JSON:API exposed from Drupal. Based on it, we will start creating our MCP server.
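For orientation, a JSON:API response for the property listing content type looks roughly like this. The field machine names are the ones used in the server code in this article; the values and the node id are purely illustrative:

```json
{
  "data": [
    {
      "type": "node--property_listing",
      "id": "uuid-of-the-node",
      "attributes": {
        "title": "Sea View Loft",
        "body": { "value": "A bright loft near the beach." },
        "field_address": "Lisbon, Portugal",
        "field_avaialbility_date": "2024-06-01",
        "field_availability_date_to": "2024-09-01",
        "field_number_of_guest": 4,
        "field_price_per_night": "120"
      }
    }
  ]
}
```

The server's helper functions below read exactly this `data[].attributes` shape.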

Here are the common steps to create an MCP server:

Set up your Python environment

  • Install uv and set up our Python project and environment:
curl -LsSf https://astral.sh/uv/install.sh | sh
  • Restart your terminal afterwards to ensure that the uv command gets picked up.
  • Run the following in your terminal at your project location:
# Create a new directory for our project
uv init appname
cd appname

# Create virtual environment and activate it
uv venv
source .venv/bin/activate

# Install dependencies
uv add "mcp[cli]" httpx

# Create our server file
touch appname.py


Import packages and MCP initialization

  • Add this to the server file:

from typing import Any
import httpx
from mcp.server.fastmcp import FastMCP

# Initialize FastMCP server
mcp = FastMCP("airbnb")

# Constants (base URL of the Drupal site; no trailing slash, since the
# tools append "/jsonapi/..." paths)
PROPERTY_API = "https://airbnb.ddev.site"

  

  • First, import all required packages.
  • Initialize the MCP server with FastMCP.
  • Create a constant and set it to the Drupal website's base URL.

Helper functions

async def make_website_request(url: str) -> list[dict[str, Any]] | None:
    """Make a request to our local Drupal site with proper error handling."""
    # verify=False skips TLS verification for the local ddev certificate;
    # do not use this against production hosts.
    async with httpx.AsyncClient(verify=False, follow_redirects=True) as client:
        try:
            response = await client.get(url)
            response.raise_for_status()
            data = response.json()
            return data.get("data", [])
        # httpx.HTTPError covers both transport errors (RequestError) and
        # non-2xx responses raised by raise_for_status (HTTPStatusError)
        except httpx.HTTPError as e:
            print(f"An error occurred: {e}")
            return None


def format_property_data(data: list[dict[str, Any]] | None,
                         location: str | None = None,
                         max_price: float | None = None) -> str:
    """Format property data into a readable string."""
    if not data:
        return "No property data available."
    properties = []
    for prop in data:
        attributes = prop.get("attributes", {})
        name = attributes.get("title")
        body = (attributes.get("body") or {}).get("value", "")
        address = attributes.get("field_address", "")
        availability_from = attributes.get("field_avaialbility_date")
        availability_to = attributes.get("field_availability_date_to")
        number_of_guests = attributes.get("field_number_of_guest")
        price_per_night = float(attributes.get("field_price_per_night") or 0)
        # Filter by location if specified
        if location and location.lower() not in address.lower():
            continue
        # Filter by price if specified
        if max_price and price_per_night > max_price:
            continue
        properties.append(f"This is the Property Name: {name}\n"
                          f"Description: {body}\n"
                          f"Address: {address}\n"
                          f"Availability From: {availability_from}\n"
                          f"Availability To: {availability_to}\n"
                          f"Number of Guests: {number_of_guests}\n"
                          f"Price per Night: {price_per_night}\n")
    if not properties:
        message = "No properties found"
        if location:
            message += f" in {location}"
        if max_price:
            message += f" under price of {max_price}"
        return message + "."
    properties.append("\nThis is the Property Listing\n")
    return "\n".join(properties)

        

  • I have created two helper functions. The first, make_website_request(), calls the JSON:API endpoint using httpx and fetches the data.
  • The second, format_property_data(), formats the fetched data into the readable string that is returned to the LLM.

MCP tools

@mcp.tool()
async def get_property_data():
    """Get all property data."""
    url = f"{PROPERTY_API}/jsonapi/node/property_listing"
    data = await make_website_request(url)
    return format_property_data(data)

@mcp.tool()
async def get_properties_by_location(location: str):
    """Get property data filtered by location."""
    url = f"{PROPERTY_API}/jsonapi/node/property_listing"
    data = await make_website_request(url)
    return format_property_data(data, location)

@mcp.tool()
async def get_properties_by_price_and_location(location: str, max_price: float):
    """Get property data filtered by location and maximum price."""
    url = f"{PROPERTY_API}/jsonapi/node/property_listing"
    data = await make_website_request(url)
    return format_property_data(data, location, max_price)

@mcp.tool()
async def get_properties_by_date_and_location(location: str, after_date: str):
    """Get property data filtered by location and availability date."""
    url = f"{PROPERTY_API}/jsonapi/node/property_listing"
    data = await make_website_request(url)

    # Keep only properties whose availability starts on or before the
    # requested date; properties with no availability date are skipped.
    if data:
        filtered_data = [
            prop for prop in data
            if (d := prop.get("attributes", {}).get("field_avaialbility_date"))
            and d <= after_date
        ]
    else:
        filtered_data = None
    return format_property_data(filtered_data, location)

   

  • These are the MCP tools I have defined. The first, get_property_data(), returns all the properties on the site.
  • The second, get_properties_by_location(), returns property data for the location asked about in the query.
  • The third, get_properties_by_price_and_location(), returns properties in a location under a maximum price.
  • The next one, get_properties_by_date_and_location(), searches properties by availability date.
  • Finally, initialize and run the server.
if __name__ == "__main__":
    # Initialize and run the server
    mcp.run(transport='stdio')
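One side note on get_properties_by_date_and_location: it compares availability dates as plain strings. That only works because ISO 8601 dates ("YYYY-MM-DD") sort lexicographically in the same order as chronologically; the snippet below (sample dates invented for illustration) demonstrates the assumption:

```python
# ISO 8601 dates ("YYYY-MM-DD") sort lexicographically in the same order
# as chronologically, which is why plain string comparison works in the
# date-filtering tool above.
dates = ["2024-06-01", "2023-12-31", "2024-01-15"]

# Lexicographic sort equals chronological sort for ISO dates
print(sorted(dates))  # ['2023-12-31', '2024-01-15', '2024-06-01']

# "Available on or before the requested date", as in the tool's filter
available = [d for d in dates if d <= "2024-02-01"]
print(available)  # ['2023-12-31', '2024-01-15']
```

This shortcut would break for other formats such as "DD-MM-YYYY", so keep the Drupal date field in ISO format (its default) if you rely on it.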

 

Looking at the examples above, notice that the MCP tools are basically just returning data from the Drupal content; from there, the agent and LLM manage, filter, and handle the user's chat request on their own. That is the kind of heavy lifting LLMs can provide.

Connect the MCP server to VS Code Insiders and use it with chat

  1. First, go to settings in VS Code and search for “settings.json”.
  2. Add your MCP server to the settings.json like this.
{
    "mcp": {
        "inputs": [],
        "servers": {
            "airbnb": {
                "command": "uv",
                "args": [
                    "--directory",
                    "/home/vighnesh/Projects/AI/airbnb",
                    "run",
                    "airbnb.py"
                ]
            }
        }
    }
}

Once this is done, you will see “Start/Stop/Restart” controls above the server entry.

With that, we can chat with the agent, which answers by using the tools created by our custom MCP server.

The full code can be found here.

Wrapping up

The real momentum behind AI right now is about smarter context. 

And that’s where MCP comes in. As more teams move from basic prompt-response setups to real use cases that involve multiple systems, documents, APIs, and workflows, there’s a growing need for something that can handle context properly. Not just pass it along, but manage it—filter it, format it, structure it, and make it useful to a language model. That’s exactly what MCP was built for.

Industry-wide, this shift is becoming clear. McKinsey’s 2024 AI report found that 72% of businesses have embedded AI in at least one function, and tools like MCP are filling the integration gap by standardizing how systems talk to agents and how agents talk to models.

We’re already seeing real adoption. Developers are wiring up MCP servers to platforms like VS Code and Cursor, using them to bring live context into legal research tools, travel apps, real estate portals, and customer service agents. 

Companies in the U.S., Germany, India, and Singapore are exploring MCP for everything from enterprise search to healthcare automation. The use cases are getting sharper—and the tooling is catching up.

Drupal is one example in this mix. With structured content already in place, it becomes a strong candidate for feeding data into an MCP layer. But the real story is bigger. This is about any system that holds structured knowledge becoming part of the agent stack—connected through a shared protocol that knows how to work with context.

Looking ahead, the expectation is that MCP (or something very close to it) becomes the default. Just like REST or GraphQL became the standard for web APIs, MCP is shaping up to be the standard for context APIs. 

If you’re working on anything involving agents, internal data, or real-time context, this is worth paying attention to. You don’t want to duct tape this stuff together. You want a protocol that does the heavy lifting.


References: https://modelcontextprotocol.io/quickstart/server

Written by Ananya Rakhecha, Tech Advocate