AI is everywhere in tech now. It is in the tools we use, the products we build, and the way we interact with users. It feels like if you aren’t using AI, you are behind. While there’s plenty of questionable AI use, one thing is for sure: AI is making waves. And it’s really hard to keep up with the latest developments.
One of the concepts gaining a lot of discussion is the “MCP”, which stands for Model Context Protocol. Anytime I’d ask what an MCP is, I’d usually hear it described as an API. So then, why isn’t it just called an API? Because it’s not really an API. Sound confusing? You bet!
Just like how I learned to code by actually building something, I decided that if I was going to truly learn what this thing was, I’d have to build one. So I did, and this is how that went.
If this sounds familiar to you, I did speak about this on Wireframe on Monday. You can think of this as the accompanying blog post. If you prefer listening to what I went through to make this, go ahead and click the link.
You have too much stuff
Some of the feedback I get from folks who follow my work is that I do too many things. It’s hard to keep track of my latest research findings or development projects. Even my wife, who works in the same field and lives under the same roof, has trouble keeping up with my work. On top of this, much of my work is very abstract and hypothetical. That makes it hard to fully apply my work to real-world scenarios without understanding all of the research behind it. Not to mention all the other factors that could contribute to friction in the implementation.
So what if there was one place to learn about my work, conveyed in an approachable way? Some way that you could learn about my work within the context of your own work? That’s what I set out to build: my own MCP server that aggregates my work and research findings into a single place.
To do this, I needed two major units of work. I needed an MCP server to connect with MCP clients, and I needed that server to fetch data from my domains in a way that’s helpful for these systems to comprehend.
End-domain data
First, I wanted each of my end-domains (my personal site, the blog, and my design system) to expose its data in a structured way. It was important to me to follow some accepted standards for exposing this data. My thought was that I wanted these sites to opt in to AI discovery.
The way I found to do this is using an llms.txt file, which is an emerging standard. This file is like a sitemap, but for AI. The content is actually Markdown. Here’s a sample:
# Title
> Optional description goes here
Optional details go here
## Section name
- [Link title](https://link_url): Optional link details
## Optional
- [Link title](https://link_url)
Importantly, the links in the file that point to content should also be Markdown files. That means my site needed to serve not just the HTML pages, but also the Markdown source files. This way, the MCP server could fetch the raw content and parse it for AI consumption.
For my personal site and the blog, this was fairly straightforward because both are written in Markdown, and rendering that as raw text takes a few lines of code in a [slug].md.js file:
import { getCollection } from 'astro:content';

// Create a route for every entry in the posts collection.
export async function getStaticPaths() {
  const entries = await getCollection('posts');
  return entries.map((entry) => {
    return {
      params: { slug: entry.id },
      props: { entry },
    }
  });
}

// Serve the raw Markdown body instead of the rendered page.
export async function GET({ props }) {
  return new Response(props.entry.body, {
    status: 200,
    headers: { 'Content-Type': 'text/plain; charset=utf-8' }
  });
}
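Assuming this file lives somewhere like src/pages/posts/[slug].md.js (the exact path depends on your routing), each post then becomes available at its usual URL with a .md suffix, serving the raw source right alongside the rendered page.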
The real problem was Storybook. Since the source files there are written as stories.tsx files, it’s not a simple 1:1 translation. Admittedly, I vibe-coded the solution to this and wound up with nearly 1000 lines of code. Luckily, my system is open-source, so you can grab this and remix it however you like for your own purposes.
The script does a few things (a simplified sketch follows the list):
- Finds all the stories.tsx files in the src/components directory.
- Parses each file to extract JSDoc, component category and name, and stories.
- Determines the props from the imported component type definition.
- Renders component stories as JSX instead of the less useful Story source.
- Generates a Markdown file for each component and a llms.txt file listing all of the components.
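To give a sense of the shape of that script without the full 1000 lines, here’s a drastically simplified sketch. It assumes the glob package and naive regex extraction where the real script does proper AST parsing; the output paths and URL are placeholders:

import { glob } from 'glob';
import { readFile, writeFile } from 'node:fs/promises';
import path from 'node:path';

async function buildComponentDocs() {
  // Find all the story files in the components directory.
  const files = await glob('src/components/**/*.stories.tsx');
  const entries: string[] = [];

  for (const file of files) {
    const source = await readFile(file, 'utf8');
    // Naive JSDoc extraction; the real script walks the TypeScript AST.
    const jsdoc = source.match(/\/\*\*([\s\S]*?)\*\//)?.[1]?.trim() ?? '';
    const name = path.basename(file).replace('.stories.tsx', '');

    // One Markdown file per component.
    await writeFile(`public/components/${name}.md`, `# ${name}\n\n${jsdoc}\n`);
    entries.push(`- [${name}](https://example.com/components/${name}.md)`);
  }

  // One llms.txt listing every component document.
  await writeFile('public/llms.txt', `# Components\n\n## Components\n\n${entries.join('\n')}\n`);
}

await buildComponentDocs();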
With this in place, all of my end-domains are now exposing their data in a structured way for AI discovery and consumption.
The MCP server
My MCP server is a very simple Netlify project, heavily inspired by this guide on Netlify. Much of the scaffolding is the same; registering tools and resources is the main difference.
Resources & Tools
When I went into this project, I was under the impression that everything I was going to offer from this server would be resources. Resources are basically static files that the server can provide. It’s like requesting a library book: you receive the book with no modifications to the content, just as it is.
For resources, there are two actions that clients need to accept: list resources and get resource. List resources tells the client what is available, and get resource fetches the content of a resource. This maps 1:1 to what was created at the end-domains earlier: the llms.txt is the list of resources, while each of the Markdown files is a get. The slightly annoying thing is that we need to transform the llms.txt Markdown into structured data. While I made my own parser using the Markdown AST tools, there seems to be a newly created package that might help with the parsing. Though I hesitate since it doesn’t link to its source, so I can’t know what it’s doing.
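For reference, here’s a minimal sketch of the kind of parser I mean, assuming the unified and remark-parse packages; the LlmsLink shape and the exact llms.txt conventions it expects are my own:

import { unified } from 'unified';
import remarkParse from 'remark-parse';
import type { Root, Link, Paragraph, Text } from 'mdast';

interface LlmsLink {
  section: string;
  title: string;
  url: string;
  description?: string;
}

function parseLlmsTxt(markdown: string): LlmsLink[] {
  const tree = unified().use(remarkParse).parse(markdown) as Root;
  const links: LlmsLink[] = [];
  let section = '';

  for (const node of tree.children) {
    if (node.type === 'heading' && node.depth === 2) {
      // A "## Section name" heading starts a new group of links.
      const text = node.children.find((child) => child.type === 'text') as Text | undefined;
      section = text?.value ?? '';
    } else if (node.type === 'list') {
      // Each list item holds one "[title](url): details" entry.
      for (const item of node.children) {
        const paragraph = item.children.find((child) => child.type === 'paragraph') as Paragraph | undefined;
        const link = paragraph?.children.find((child) => child.type === 'link') as Link | undefined;
        if (!link) continue;
        const title = link.children.find((child) => child.type === 'text') as Text | undefined;
        const trailing = paragraph?.children.find((child) => child.type === 'text') as Text | undefined;
        links.push({
          section,
          title: title?.value ?? '',
          url: link.url,
          description: trailing?.value.replace(/^:\s*/, '') || undefined,
        });
      }
    }
  }
  return links;
}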
Once you have it parsed, you can register each as a resource. For my MCP, I wanted the resources to have the following URI structure:
subdomain://category/slug
The MCP specification allows you to make up your own URI scheme, and my sites generally follow this shallow pattern. This is a resource URI meant for the server to use internally to look up its resources. Then you can use the official MCP TypeScript SDK to register the resource template, assuming you have a LIST_RESOURCES_META object to configure the resource and a listResources() function to handle the fetching:
import { ResourceTemplate } from '@modelcontextprotocol/sdk/server/mcp.js';

server.registerResource(
  'resources',
  new ResourceTemplate('{subdomain}://{category}/{slug}', { list: undefined }),
  LIST_RESOURCES_META,
  listResources
);
The thing that bothered me is that { list: undefined } setting. It was unclear what its purpose was, and after some vibe-coding sessions, Claude suggested I use the server.setRequestHandler() method to define resources. So, if you’re building something similar, you might want that method over the helper methods featured prominently in the README.md for the SDK, especially since my resources are dynamic; they don’t truly exist on the server.
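For the curious, here’s a minimal sketch of that lower-level approach, assuming the low-level Server class from the SDK and hypothetical listResources() and readResource() helpers that do the actual fetching:

import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import {
  ListResourcesRequestSchema,
  ReadResourceRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';

const server = new Server(
  { name: 'example-mcp', version: '1.0.0' },
  { capabilities: { resources: {} } }
);

// Respond to resources/list with everything parsed from the llms.txt files.
server.setRequestHandler(ListResourcesRequestSchema, async () => ({
  resources: await listResources(),
}));

// Respond to resources/read by fetching the Markdown behind the URI.
server.setRequestHandler(ReadResourceRequestSchema, async (request) => ({
  contents: [
    {
      uri: request.params.uri,
      mimeType: 'text/markdown',
      text: await readResource(request.params.uri),
    },
  ],
}));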
A gotcha here is that some MCP clients (the systems that speak to MCP servers) don’t support reading resources; they need a tool to access them. So that required that I include tools. Tool is a fancy word for something that can process data; generally, we’d think of something like a POST request in an API. In this case, in order for some clients to access the resources, I needed to create a pathway for them to make the request. For my system, it was recommended to use server.setRequestHandler() again, but as per the README.md, registering a tool is done like this, assuming you have a GET_RESOURCE_META object to configure the tool and a getResource() function to perform the lookup:
server.registerTool(
'get_resource',
GET_RESOURCE_META,
getResource
);
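As a rough idea of what that configuration might hold (the exact shape here is my assumption, following the SDK’s README and using zod for the input schema):

import { z } from 'zod';

// A sketch of a tool configuration; title, description, and schema are illustrative.
const GET_RESOURCE_META = {
  title: 'Get resource',
  description: 'Fetch the Markdown content behind a subdomain://category/slug URI.',
  inputSchema: {
    uri: z.string().describe('The resource URI to look up'),
  },
};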
STDIO vs HTTPS
Another thing that was being thrown around as common knowledge was MCP servers and clients having better support for STDIO over HTTPS. Because I program primarily for the web, I didn’t know what STDIO was; I just thought it was some other way of making the same server. In reality, STDIO identifies an MCP server that is meant to be installed locally on a person’s computer. The HTTPS MCP server is a more recent development, which explains why I found it really hard to develop and test. My way of testing was just pushing the server up and hitting it in Cursor to see if I could connect and have it respond. Definitely not scalable, and I hope there are better tools out there that can help in this area. The Netlify article speaks about making a hybrid server, and there’s something called MCP Bundles that could be cool to try. Though I prefer the HTTP version; I find hitting a URL much easier than downloading and installing something.
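In SDK terms, the difference comes down to which transport you connect. Here’s a sketch, assuming an existing server instance:

import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js';

// STDIO: the client launches the server as a local process and
// exchanges messages over stdin/stdout.
await server.connect(new StdioServerTransport());

// Streamable HTTP: the server lives behind a URL instead; a stateless
// setup passes sessionIdGenerator: undefined.
// await server.connect(new StreamableHTTPServerTransport({ sessionIdGenerator: undefined }));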
What now?
The server is located at mcp.damato.design, and the homepage has instructions on how you can add it to your MCP client of choice. This helps show my aptitude with AI technologies while providing access to my research in a contextual way. The project was full of learning moments for me, and seeing all of the gotchas and missing standards shows how much people are still trying to figure all of this out. It was effectively a weekend project, and something you might want to consider trying to keep your skills in the game. Happy vibing?