If you’re subscribed to Hacking State you may recall that you were promised not just Poli-Phil and Cog-Sci, but also Compute, and boy do I have some compute for you.
About 3 weeks ago I caught an insatiable itch to build out an MCP server for the Congress.gov API.
This small, civic-minded project unwittingly became the catalyst for rebooting my startup, and re-grounding myself in a craft and a calling on the heels of one of the most difficult and tumultuous periods of my life.
For those of you unfamiliar, the Model Context Protocol (MCP) is a new open standard created by Anthropic that lets LLMs talk to other computer systems. Any application or server adhering to the protocol can communicate with LLMs in a way they understand. Essentially, MCP opens the entire internet, along with local file systems, databases, IoT devices, and anything else running a computer, to LLM-powered chatbots, AI agents, and even other MCP servers.
This means anything with an API, especially a public and free API like the one for Congress.gov, can now live in your LLM client (e.g., Claude Desktop, Cursor) where you can interact with it in natural language.
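To make that concrete, here's a minimal sketch of what an MCP server looks like using Anthropic's official Python SDK. This is an illustrative toy, not CongressMCP's actual code; the tool name and parameters are my own choices.

```python
# mcp_demo.py - a toy MCP server exposing one Congress.gov tool.
# Illustrative only; not the real CongressMCP implementation.
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("congress-demo")

@mcp.tool()
def recent_bills(limit: int = 5) -> dict:
    """List the most recently updated congressional bills."""
    resp = httpx.get(
        "https://api.congress.gov/v3/bill",
        params={"limit": limit, "format": "json", "api_key": "YOUR_KEY"},
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    mcp.run()  # speaks the protocol over stdio for local clients
```

Point a client like Claude Desktop at a script like this, and "what bills moved this week?" becomes a question you can just ask.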
I saw this as a unique opportunity to (1) learn more about a frontier technology; (2) be the first to build something that had never been built before; and (3) release a genuinely useful public utility that was on-brand and folded neatly into my then-dormant startup, LawgiverAI. I couldn't pass it up.
This is the story of why I built CongressMCP, how I did it in three weeks, and everything that went wrong with its release.
In November 2024, as my startup and personal life were on the brink of ruin, Anthropic released the Model Context Protocol. For the previous 6 months I had been stretching my executive capabilities and finances thin trying to make LawgiverAI work as a first-time solo founder. By this point I had spent months building and releasing an MVP, conducting customer interviews, and soliciting investment. I met with angels, pitched early-stage funds, and had an advisor who promised to make good introductions, and none of it went anywhere. The MVP had a glimmer of something useful, and it worked, but it was nowhere near mature enough to make inroads in the B2B legislative tracking space. Moreover, my intimate relationship was starting to fail as I became increasingly stressed and irritable trying to save a sinking ship, insisting it was just a patch job in the hull while the water rose dangerously close to my neck. I was exhausted, overworked, in debt, disillusioned, and losing faith.
This confluence of conditions made me so myopic that when MCP came out, I couldn't see it as anything other than an existential threat to my startup. The MVP I had built featured a chat-with-PDF interface, in-line citations, semantic search, and intelligent tagging, with a backend using retrieval-augmented generation (RAG) on a vector database of every congressional bill. At conception it was an application of the latest AI tools and prompt engineering in a novel space; but by the end of the year its core features had become commonplace, and the paradigm of development I was in (static RAG without evals, agentic flows, or proprietary data) already seemed outdated. Such is life in an AI startup. MCP, which offered instant access and reasoning over the very data pipeline I had spent weeks building, looked like the death knell for my barely fledgling startup. I was out of money, out of time, and the stress left me out of ideas. So I stepped back.
A friend and mentor, Nick Cassimatis, generously offered me a part-time job at his startup. He knew mine was failing, and this would give me the opportunity to stabilize my finances a little, buy more time to find investment, and maybe stay on if things didn't work out. He threw me a lifeline, and for that I'm extraordinarily grateful. Within days of accepting the offer, however, my long-term girlfriend pulled the trigger and broke up with me. This devastating blow left me without the emotional bandwidth to creatively pivot or pitch my way out of the hole I was in. I could work and slowly rebuild my startup, but I couldn't work, rebuild my startup, and grieve all at the same time. I stopped working on Lawgiver completely, took a full-time offer from Nick a few months later, and thought maybe I'd try again some day with something else.
While I was working, my interest in MCP began to grow, not because I had a good idea of what to do with it, but because the platform we were building began to look threatened by it as well: a no-code collaborative database that let you chat with anything, with some very secret sauce for working with AI and structured data through a unified system of ontologies. I experimented with MCP on my own time, and after several conversations with Nick and our product lead about its potential, its threat to our business, and what to do about it, we decided to co-opt it by integrating MCP fully into the platform. It worked. A week of dev time and some UI decisions later, we had a system whereby anything and everything on the platform could be talked to in Claude Desktop through MCP. It didn't solve the platform's other problems, and it didn't present an obvious use case, but it staved off obsolescence and gave me and the team an opportunity to work on something new and exciting. Then I was let go at the end of April with a couple months' severance, and I began formulating what to do next.
After several weeks of grinding job applications, learning new AI automation workflow tools, and trying my hand at freelancing, I noticed myself returning again and again to a desire to get deeper into AI. I had experience as an AI developer, but I was not quite a machine learning engineer, and given the job market for devs, I was beginning to worry that what skills I had were already outdated. Worse, I had taken a non-developer role at Nick's startup and had mostly stopped coding while there. In the span of 5 months my skills had degraded. I was also concerned that the proliferation and success of AI automation workflow tools like Make.com and n8n meant that even coding itself was becoming passé. Vibe coding was another trend I didn't see favorably, since it meant junior developers, designers, and PMs could quickly spin up prototypes, and in some cases full-fledged products, without significant experience at all. How could I differentiate myself in a market with a collapsing middle, where all the jobs and money flowed to deep specialists and ML engineers, while the bottom became swamped with AI-empowered, hungry vibe-coders vying for attention and opportunities?
The answer, as cliché as it sounds, was to follow my genuine interest. Through all this I still had an interest in political philosophy and governance, in how technology changes our relationship to power, each other, and ourselves. I had taken some of my newfound free time and doubled down for a few weeks on my YouTube channel, which was going well, but not at a level where I could draw life-sustaining income before money ran out. After earning a pittance freelancing on Upwork, having a few AI automation consulting gigs fall through, and finding I didn't enjoy the attempted pivot into an agency model, I decided to face the music and return to what I know best: building things. I would return to coding, learn the new AI developer tools, and find something worthwhile to work on that would at the very least increase my chances of landing a job.
A few months back I had taken a half-hearted crack at a congressional MCP server one weekend night. I managed to get a very dysfunctional demo up in 3 or 4 hours, but it barely worked at all. Shortly after leaving the company, I spun up a comprehensive MCP server for the Podbean API, both to learn more about MCP and to try to automate some of my post-production for the Hacking State podcast (there's a video about it here). I resolved to return to my abandoned, half-baked congressional API project to see if I could make it useful.
Riding the Wave
The first thing I did was start experimenting with the new vibe coding IDEs to speed up development time. Much of the heavy lifting of turning an API into an MCP server involves translating very specific endpoint descriptions, parameters, and return values into function calls. This process is tedious, regular, and well-defined: a perfect job for robots instead of humans. For reasons I can't remember, I decided to try Windsurf first, and it ended up being my coding companion for the first two-thirds of the project.
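The shape of that work looks something like this: take one documented endpoint and its parameters, and mechanically render it as a typed function. This is a sketch with names of my own choosing, not the actual CongressMCP code.

```python
# One Congress.gov endpoint, mechanically translated into a function:
# GET /v3/bill/{congress}/{billType}/{billNumber} -> bill details.
# Hypothetical helper; the real server wraps ~90 such endpoints.
import httpx

BASE_URL = "https://api.congress.gov/v3"

def get_bill(congress: int, bill_type: str, bill_number: int,
             api_key: str) -> dict:
    """Fetch detailed information on a specific bill.

    congress: e.g. 118; bill_type: hr, s, hjres, sjres, hconres,
    sconres, hres, or sres; bill_number: the assigned bill number.
    """
    resp = httpx.get(
        f"{BASE_URL}/bill/{congress}/{bill_type}/{bill_number}",
        params={"api_key": api_key, "format": "json"},
    )
    resp.raise_for_status()
    return resp.json()
```

Multiply that by every endpoint in the API and you have the bulk of the grunt work: exactly the regular, well-defined labor an agent handles well.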
Windsurf may be far less popular than Cursor, but its name is phenomenologically apt. After a few days of using it like a noob, I began to get an intuitive sense not only of its strengths and weaknesses, but of its optimal tempo and limits. Vibe coding is also aptly named, as the experience of working with these tools has an immersive quality all its own. Hours disappeared into days as I sat totally transfixed on the process of building with this tool. I learned tips and tricks for feeding it documentation and context, updating logs and progress reports, breaking tasks down into manageable chunks, and knowing when to have it search the internet or execute terminal commands. I discovered the purposes of each feature the Windsurf team had created (memories, rules, workflows) one by one. As the project grew larger and more complex, my facility and familiarity with Windsurf increased. I found myself spending hours and hours coding completely uninterrupted, often without even music. It's a feeling I could spend more words trying to describe, but the flow states these tools induce at a certain optimal level of project complexity are real and ineffable. I continued using Windsurf as CongressMCP grew, until the problems I needed to solve became too complex for it to reliably improve my productivity.
The mere translation of API endpoints from Congress.gov had, over the course of a little more than a week, ballooned into a fully-fledged remote MCP server with user authentication, API key management, Stripe integration, a Supabase database connection, rate limiting and role-based middleware, an asynchronous server gateway interface, deployment scripts, and more. I learned about MCP transport layers: Claude Desktop uses stdio, while remote servers use HTTP/SSE. To deploy remotely but still work with Claude Desktop, I had to build and publish a bridge package on NPM to translate between the two. There was a frontend to deal with as well. Finally, when the congressional API endpoints were all working and the server was deployed and functioning, I realized that naive translation of the congressional API into MCP tools and resources meant creating over 90 tools, an unwieldy number for LLMs to know what to do with. Moreover, most MCP clients allow somewhere between 50 and 100 registered tools at a time, meaning mine would crowd out any other MCP server, or not fit at all. I had worked furiously to produce tons of code with Windsurf, but optimizing the now-mature project required going back in and cutting the complexity I had created: streamlining functions and making it production-ready.
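That bridge's job is conceptually simple: read JSON-RPC messages from the client on stdin and relay them to the remote server over HTTP, while streaming responses back from an SSE endpoint to stdout. Here's a rough Python sketch of the idea; the real bridge is an NPM package, and the host and endpoint paths below are placeholders.

```python
# Conceptual sketch of a stdio <-> HTTP/SSE bridge. The real bridge is
# an NPM package; the URL and endpoint paths here are placeholders.
import json
import sys
import threading
import httpx

REMOTE = "https://congressmcp.example.com"  # placeholder host

def sse_to_stdout():
    # Relay server-sent events from the remote server to the local
    # client: each "data:" line carries one JSON-RPC message.
    with httpx.stream("GET", f"{REMOTE}/sse", timeout=None) as resp:
        for line in resp.iter_lines():
            if line.startswith("data: "):
                sys.stdout.write(line[len("data: "):] + "\n")
                sys.stdout.flush()

def stdin_to_http():
    # Forward each JSON-RPC message the client writes on stdin to the
    # remote server's message endpoint.
    for line in sys.stdin:
        httpx.post(f"{REMOTE}/messages", json=json.loads(line))

threading.Thread(target=sse_to_stdout, daemon=True).start()
stdin_to_http()
```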
Cline, Claude Code, and Cleaning Up
While I continued using Windsurf into the consolidation process, I began to get curious about alternatives. Cline promised a superior, though pricier, coding agent that would habitually take in more context, something I sorely needed at this level of complexity. I was in the process of turning 91 congressional tools into just 6 comprehensive legislative toolsets, and wanted all the intelligence and context management I could get. I had even started using Google's Gemini 2.5 for its million-token context window (though I eventually found Claude 4 Sonnet to be better). Cline did a good job of ingesting massive chunks of the codebase and reasoning quite well about them, and I made good progress on some tough problems with it. Cline's greatest downfall is that its bias toward consuming massive amounts of context makes it extremely expensive to use. Even in comparison to burning through Claude 4 credits, Cline was costing me a fortune. As a developer on a fixed budget with no idea when my next paycheck would come, this was not a cost I was willing to absorb. After a few days on Cline, I decided to finally take the leap into the intimidating, deceptively simple, non-IDE CLI tool from my favorite AI lab: Claude Code.
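The consolidation itself follows a simple pattern: replace dozens of per-endpoint tools with a handful of routed ones, roughly like this. The operation names and signature below are my own invention, not the shipped toolsets.

```python
# Sketch of collapsing many endpoint tools into one routed toolset.
# Operation names and signature are illustrative, not CongressMCP's.
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("congress-consolidated")
BASE_URL = "https://api.congress.gov/v3"

@mcp.tool()
def bills(operation: str, congress: int, bill_type: str = "hr",
          bill_number: int = 0, api_key: str = "YOUR_KEY") -> dict:
    """One tool covering several bill endpoints.

    operation: "list", "details", "actions", or "text".
    """
    paths = {
        "list": f"/bill/{congress}",
        "details": f"/bill/{congress}/{bill_type}/{bill_number}",
        "actions": f"/bill/{congress}/{bill_type}/{bill_number}/actions",
        "text": f"/bill/{congress}/{bill_type}/{bill_number}/text",
    }
    resp = httpx.get(f"{BASE_URL}{paths[operation]}",
                     params={"api_key": api_key, "format": "json"})
    resp.raise_for_status()
    return resp.json()
```

A handful of tools like this cover the same surface area as 91 while staying under clients' registration limits.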
Claude Code is imperfect, but I haven't felt compelled to return to Cline or Windsurf since. I spent the week or so before and after launch working in Claude Code inside regular VSCode and found it suitable, reasonably priced, and, though maybe this is my imagination, it seems to work better with Anthropic models than the others do. I hate that it adds its signature to my GitHub commits, and it does bug out from time to time, but it gets the job done. I was able to get my server deployed, working, and reasoning across four repos simultaneously without fail. I was so used to Claude Code that by the time release rolled around, I was using it not only for code but for conversation about business strategy, tactics, pricing, and marketing. I'm told nothing gets sent back to Anthropic's servers; I hope that's true.
The Botched Launch
If you've made it this far and didn't get lost in my personal trials or the technical exposition, congrats: you've made it to the juicy part. The night before my scheduled ProductHunt launch, I had everything ready. The server was done and functional, both hosted and locally. The frontend looked great. I had lists of MCP registries to submit to; copy for LinkedIn, X, and Reddit; even a comedic demo video I had spent several hours splicing together. All tests were passing.
Then I did something drastic and impulsive: I open sourced the server. This was not something I had planned or expected. My intention was to deploy it as a paid SaaS product, with a limited free tier and paid tiers for premium features. However, as I began looking into MCP registries, I saw that the overwhelming majority of non-proprietary MCP servers belonged to public repositories. The few that didn't generally had a large SaaS or infrastructure platform behind them. Some registries were explicitly for open source servers. I was aware that a batteries-included paid MCP server like the one I had built was something of an experimental model. Most MCPs are a feature of existing products, serving to increase distribution or improve workflows for existing users, not standalone products. Additionally, mine was built around a very public and very free government API, so how could I justify charging? The more I thought about it, and the more I considered the feedback I'd likely receive for posting something like this as a closed-source project, the more I questioned whether I had made the right decision in not opening up the code. In a conversation with Claude Opus 4, he made it clear in no uncertain terms that the best course of action, given the circumstances, would be to open the source code, use the launch to get distribution and raise awareness about Lawgiver, and publish it under a limited commercial license that allowed anyone to self-host but prohibited direct competition. I concluded that this made sense. So, with my ProductHunt launch scheduled for 12:00 AM PST Tuesday, I began converting the server and its pricing, tiers, and access to an open source model at 9 PM Monday night.
This was hasty surgery, but not impossible under the circumstances. In fact, it would have gone off without much disruption had another major incident not coincided with my overhaul. Shortly after the ProductHunt launch went live, Heroku (my backend host) went down for what would turn out to be nearly 14 hours. This alone wouldn't necessarily have been debilitating, except that my switch to open source had revealed a batch of exposed API keys in an early file that had been committed and later removed. Though it was technically a security vulnerability I should have dealt with, I had assumed the codebase would never see the light of day and hadn't been too concerned; by that point I had forgotten it ever happened. I looked into erasing the commit history, but the repo had already been public for several minutes by the time I became aware of the issue, so there was a small but real risk that important API keys had been leaked. I had no choice but to reset them all. Only after doing so, when I went to replace the environment configuration on my deployment, did I discover Heroku was down. The result was that nothing worked. All the old keys, for email, Stripe registration and payments, database updates, and querying Congress, were now expired, and I was locked out of my Heroku deployment, unable to update them.
I spent the next 12 to 14 hours helplessly watching the ProductHunt launch I had intended to drive social media traffic to wither on the vine. With no way for users to sign up, use the product, or even receive emails, I waited for Heroku to get their systems back up, hoping I'd be able to salvage the launch, or that the ProductHunt gods would smile upon me and it would do well anyway. No such luck. Heroku finally came back online late Tuesday afternoon. I went in and changed the keys, but by then my ranking had fallen. I continued the release on Wednesday, pushing the product out to Reddit, a bunch of MCP registries, X, Hacker News, and LinkedIn, but I didn't drive traffic to ProductHunt, as the chance for a big push was over. It garnered some attention, and did quite well in at least one subreddit, but the reception was anticlimactic overall.
I'm proud that I put it out. A congressional MCP server is a genuine piece of civic infrastructure, and as far as I can tell, I am the first person to publish a complete and comprehensive MCP server for the Congress.gov API. The project taught me a lot: about MCP, how to collapse a large unwieldy API into a small set of tools, what to think about when building, deploying, and monetizing a SaaS product, how to run a launch and what can go wrong, and what I'm willing to work on day and night with uncertain hope of reward. Best of all, it reinvigorated my dormant startup.
LawgiverAI is now officially an AI-native legislative and regulatory compliance infrastructure company. CongressMCP is our first product, and a proof of concept for a suite of AI-first tools and agents that promise to transform how we interface with and analyze public policy. This project re-lit the fire in me for the future of AI and gave me the confidence to reboot after a grueling hiatus.
I'm positioning to raise a round with a more refined and focused thesis, and a better idea of where Lawgiver is headed. I'm not out of the woods yet, and my runway is very short, but I'm willing to do what's necessary to keep it going. If you, dear reader, or anyone you know, are looking to:
Invest in an AI-native infrastructure-for-governance play
Work with Lawgiver in a B2B context for regulatory or legislative tracking
Use, distribute, or extend CongressMCP in some way
Hire, contract, or consult me on MCP, RAG, or AI agents
Please don’t hesitate to reach out to: alex@lawgiver.ai or @distantpathos on X.
In the meantime, I’ll keep exploring the possibility space of technology, humanity, and governance; speaking, writing, and podcasting to you all about it.
Thank you for reading,
Murshak