The 10 laws of developer experience for content management systems

First published · Knut Melvær

These ten laws are a checklist for CMS developer experience in a world where humans and agents both need to build on top of your content system.

It's never been less hard to build a CMS. A weekend with an AI coding assistant and enough persistence will get you a basic content management system. You can create documents, edit fields, maybe even preview them on a frontend.

I keep seeing new ones pop up. Not just the recent WordPress-esque EmDash launch by Cloudflare; if you browse Reddit, you'll see that every other week someone announces a smol CMS that sits on top of some markdown files alongside a web framework, or sticks content into the database-service-in-vogue.

And what I'm seeing in the conversations around these, including WP Matt's… feedback to EmDash, is that most devs are obsessed with where you can host the CMS. While I totally get the concern (worries about lock-in, wanting to deploy to infra you like working with), I think this focus explains why a lot of these CMSes[1] don't scale well.

I'm not talking about the WordPress-on-a-Raspberry-Pi-buckling-under-inbound-traffic type of scale, but scale in terms of the complexity of your content domain and the teams who are supposed to be productive with it.

A lot of CMSes, including well-established ones, start to crackle and crumble when you throw complexity at them. When a second editor joins, when a third team needs the same content in a different format, when an AI agent needs to operate the system on behalf of someone who's never seen the admin UI. A CMS that works for one developer editing content alone is a database with a form. A CMS that works for a team, across channels, with agents in the loop, is a different kind of system entirely.

These are the things I immediately look for whenever I give a CMS a spin. And I thought I should write them down as a framework for how you should approach these systems, both in evaluating and making them. I'm starting with the laws for developer experience. There are different laws for editors and for agents. Those are separate posts. (Yes, I'm committing to writing them. Hold me to it.)

A declaration of bias

Before diving in, I need to address the cloud-sized elephant in the room. I work for Sanity, which has been building a content operating system for almost 10 years now. You can read these rules as self-serving, because Sanity hits most of these (still plenty of room for improvement, though).

But having a bias doesn't mean you're wrong. I'd argue that my work at Sanity has given me plenty of real-life experience to speak with some authority on these things. And hitting a lot of these points was the reason I started using Sanity in the first place, and eventually the reason I came to work here.

I have seen these patterns play out in practice for two decades. I've been shipping WordPress sites since 2005, worked with a wide array of CMSes as a consultant in a lot of different sectors, and over the last eight years at Sanity, I've seen implementations ranging from single-developer side projects to Fortune 50 enterprises.

The laws I keep seeing come into play are the same ones regardless of scale. They just hurt more as the team or content complexity grows.

I. Your content model belongs in code

Content models defined in code can be version controlled, reviewed in pull requests, tested locally, and generated by AI coding assistants. Content models defined in a UI can't.

This matters more now than it did five years ago. Sure, AI agents can technically click through admin interfaces (computer use is a thing). But it's slow, brittle, and burns tokens on pixel-parsing instead of actual modeling work. When your schema lives in code, content modeling becomes a development workflow: describe what you want, the AI writes the schema, you test it locally, you open a PR. When it lives in a database behind a UI, the agent spends its time navigating forms instead of solving your content modeling problem. Same outcome, ten times the cost.

If you can git diff your content model and an AI assistant can generate a working schema you can test locally before it touches production, you're in good shape. If your content model lives somewhere you can't version or diff, you've got a problem that only gets worse with scale.
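To make that concrete, here's a minimal sketch of a content model as plain code. The field shapes below are hand-rolled for illustration (not any particular CMS's schema API), but the point holds: the schema is data in your repo that you can diff, review in a PR, and test locally.

```typescript
// A content model as plain, diffable code. Hand-rolled shapes for
// illustration; the point is that the schema lives in version control.
type Field = {
  name: string;
  type: "string" | "number" | "image" | "reference" | "array";
  required?: boolean;
  to?: string; // target type for references (illustrative)
};

type SchemaType = { name: string; fields: Field[] };

const blogPost: SchemaType = {
  name: "blogPost",
  fields: [
    { name: "title", type: "string", required: true },
    { name: "author", type: "reference", to: "person" },
    { name: "body", type: "array" },
  ],
};

// Because it's code, you can test it before it ever touches production:
function requiredFields(schema: SchemaType): string[] {
  return schema.fields.filter((f) => f.required).map((f) => f.name);
}
```

Renaming a field here shows up as a one-line diff in a pull request, which is exactly the review surface a UI-defined model can't give you.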

II. "Page" is a content type, not an architecture

The page-centric content model is sticky, and understandably so, but it's the main reason folks get stuck in their CMS down the line. A CMS that organizes content around "pages" and "posts" bakes a website assumption into the data layer. That assumption breaks the moment content needs to go somewhere without a URL. Heck, it even starts to break down when you want to reuse content within a website.

Content flows to websites, apps, APIs, email, voice interfaces, and destinations that don't exist yet. And increasingly, it flows to agent contexts: an AI assistant that needs your product specs to answer a customer question, a translation agent that needs the raw content without layout markup, a personalization agent that needs to assemble content from multiple sources into a single response. None of these consumers think in "pages." A hotel description isn't a "page." A product with three pricing tiers and regional variants isn't a "post." If your CMS makes these feel like awkward afterthoughts, the architecture is page-centric. You've coupled your content to one presentation context and now every other context requires workarounds.

Try modeling content that has no URL or route. Products with variants. Configuration objects. If these feel like hacks in your CMS, the architecture is page-centric and you'll be fighting it on every project that isn't a blog.
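As a sketch of what "content without a URL" looks like in practice, here's a product with regional variants modeled as plain typed data. All names and shapes are illustrative, not any CMS's schema.

```typescript
// A "product" modeled as content, not as a page: no URL, no layout,
// just the domain.
type Variant = { sku: string; price: number; region: "EU" | "US" };

type Product = {
  _type: "product";
  name: string;
  description: string; // consumed by web, app, and agent contexts alike
  variants: Variant[];
};

const chair: Product = {
  _type: "product",
  name: "Stacking chair",
  description: "A stackable chair in beech.",
  variants: [
    { sku: "CH-EU-1", price: 89, region: "EU" },
    { sku: "CH-US-1", price: 99, region: "US" },
  ],
};

// Any consumer (a route, an email, an agent) projects what it needs:
const cheapestEU = chair.variants
  .filter((v) => v.region === "EU")
  .sort((a, b) => a.price - b.price)[0];
```

Notice there's no slug, no template, no "parent page" anywhere in the model; those belong to one particular consumer, not to the content.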

And that takes us to the next law.

III. Your content model should be as expressive as your domain

If you're flattening your business domain to fit the CMS, the CMS is the bottleneck.

A CMS that ships with text fields, number fields, image fields, and maybe a "JSON string" field covers the basics. But real domains are messier than that. A product with variants, each with its own pricing tiers and availability rules. An event with sessions, each with multiple speakers who are also authors on your blog. A landing page with a flexible layout where marketing can mix hero blocks, testimonial carousels, and pricing tables in any order. These aren't edge cases. They're "someone's just trying to ship on a Tuesday" jobs-to-be-done.

This goes for rich text too. Rich text stored as HTML couples content to a presentation format. Sure, agents can read HTML. Humans can read assembly language (well… some). That doesn't make it the right abstraction. Can you query which documents contain a specific block type? Can you validate that every "callout" block has both a title and a body? Can a translation service process it field by field without accidentally translating your CSS class names? (Yes, that happens.) Structured rich text (typed JSON blocks, like Portable Text) makes content queryable, "validatable," and transformable at the field level. HTML makes all of these harder than they need to be.
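To make the "validatable" part concrete, here's a minimal sketch of rich text as typed blocks (loosely Portable Text-flavored, with shapes simplified for illustration) and a validation pass over callouts:

```typescript
// Structured rich text as typed JSON blocks. Because blocks are data,
// rules like "every callout has a title and a body" are one function.
type Block =
  | { _type: "block"; text: string }
  | { _type: "code"; language: string; code: string }
  | { _type: "callout"; title?: string; body?: string };

function invalidCallouts(blocks: Block[]): Block[] {
  return blocks.filter(
    (b) => b._type === "callout" && (!b.title || !b.body)
  );
}

const body: Block[] = [
  { _type: "block", text: "Intro paragraph" },
  { _type: "callout", title: "Note" }, // missing body, so it's invalid
];
```

Try writing the same check against a string of HTML and you'll end up parsing markup to reverse-engineer structure the content never should have lost.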

What about Markdown?

Here's the thing though. There's a real tension with how AI agents work today. Agents are great at reading and writing markdown. It's their native I/O format. So the tempting conclusion is: just store everything as markdown. But markdown as an I/O format and markdown as a storage format are different things. Agents can use serialization libraries (like @portabletext/markdown) to convert between structured formats and markdown on the fly. Your storage layer should be optimized for querying, validation, and multi-channel delivery, not for what's convenient for one type of consumer. Let the agent translate at the edges.
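A sketch of what "translate at the edges" means, using a deliberately tiny hand-rolled serializer. A real library (like @portabletext/markdown) handles marks, lists, nesting, and much more; this only shows where the conversion belongs.

```typescript
// Structured blocks in storage, markdown at the agent-facing edge.
type Block =
  | { _type: "block"; style: "normal" | "h2"; text: string }
  | { _type: "code"; language: string; code: string };

function toMarkdown(blocks: Block[]): string {
  const fence = "`".repeat(3);
  return blocks
    .map((b) => {
      if (b._type === "code") {
        return fence + b.language + "\n" + b.code + "\n" + fence;
      }
      return b.style === "h2" ? `## ${b.text}` : b.text;
    })
    .join("\n\n");
}
```

The storage layer never learns about markdown; the agent gets its native I/O format anyway.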

When the content model can't express the domain, developers build workarounds: JSON blobs in text fields, naming conventions to fake relationships, HTML strings where structured blocks should be. Every workaround is technical debt that compounds. And flat models limit what AI can do with your content. A schema-aware AI that understands field types, relationships, and validation rules can do meaningful work. An AI looking at a bag of string fields can only guess.

Try modeling a product with variants, each with pricing tiers, or a page with a flexible block layout. Then try querying "all documents where the body contains a code block." If you're reaching for workarounds at any of these steps, the content model is too shallow for the domain it's supposed to serve.
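The code-block query above, sketched two ways: roughly how it reads in GROQ (in a comment), and the equivalent in-memory filter over simplified block shapes.

```typescript
// "All documents where the body contains a code block."
// In GROQ this is roughly: *[count(body[_type == "code"]) > 0]
// Below is the in-memory equivalent, with shapes simplified.
type Block = { _type: string; [key: string]: unknown };
type Doc = { _id: string; body: Block[] };

function withCodeBlocks(docs: Doc[]): Doc[] {
  return docs.filter((d) => d.body.some((b) => b._type === "code"));
}

const docs: Doc[] = [
  { _id: "a", body: [{ _type: "block" }] },
  { _id: "b", body: [{ _type: "block" }, { _type: "code" }] },
];
```

With rich text stored as HTML, this query degenerates into substring matching against `<pre>` tags and hoping for the best.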

IV. If you can't do it with the API, it's not really a feature

An API is only as good as what you can do with it programmatically. If core operations (create, validate, publish, set references, bulk update) require a browser session, your API is incomplete.

This was tolerable when only humans used the CMS. It's not tolerable when AI agents, MCP servers, CI/CD pipelines, and automation workflows all need programmatic access. Every feature locked behind a UI is a feature that can't be automated, can't be tested in CI, and can't be operated by an agent.

I'm often surprised by the limitations that headless CMS vendors put on their APIs outside of content delivery. Rate limits that make it practically hard and painfully slow to do anything meaningful in terms of bulk operations, or functionality that's simply locked to the UI, with the underlying APIs never made available to customers.

You also see this in the "bring your own AI" pattern: a thin wrapper around OpenAI where you bring your own API token, then figure out how to make the AI service work with your content. If the CMS had a complete API, the integration would be straightforward. The wrapper exists because the API doesn't.

Picture an AI agent with no prior knowledge of your CMS trying to create a document, update its content, validate it, and publish it, all programmatically, without a browser. If that workflow hits a wall anywhere, your API has gaps that will show up in every integration you build from here on out.
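Here's that workflow sketched against an in-memory stand-in. This shows the shape of a complete API, not any vendor's actual client; all names are illustrative.

```typescript
// The full loop an agent needs, with no browser in sight:
// create, patch, validate, publish.
type Doc = { _id: string; published: boolean; fields: Record<string, string> };

class ContentAPI {
  private docs = new Map<string, Doc>();

  create(id: string): Doc {
    const doc: Doc = { _id: id, published: false, fields: {} };
    this.docs.set(id, doc);
    return doc;
  }
  patch(id: string, fields: Record<string, string>): void {
    const doc = this.docs.get(id);
    if (!doc) throw new Error(`no such document: ${id}`);
    Object.assign(doc.fields, fields);
  }
  validate(id: string): string[] {
    const doc = this.docs.get(id);
    return doc?.fields.title ? [] : ["title is required"];
  }
  publish(id: string): void {
    const errors = this.validate(id);
    if (errors.length) throw new Error(errors.join("; "));
    this.docs.get(id)!.published = true;
  }
}

// The whole workflow is scriptable, testable in CI, operable by agents:
const api = new ContentAPI();
api.create("post-1");
api.patch("post-1", { title: "Hello" });
api.publish("post-1");
```

If any one of those four calls has no real-API equivalent and needs a browser session instead, that's the gap the law is about.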

V. You need a query language, not just endpoints

REST endpoints for basic CRUD aren't enough for content retrieval, and they're barely enough for updates. Especially when you move beyond the page-constrained content model. The moment you need to filter documents by type, project specific fields, resolve references, sort by date, or paginate results, you need a query language. And the moment you need to update a nested field three levels deep without replacing the entire document, you need a patch language too.

Without a query language, every query pattern requires a different endpoint or a different piece of custom code. Sure, an agent can write that code cheaply now. But every bespoke endpoint is a separate thing the next agent (or developer, or integration) has to discover and learn. A query language gives every consumer of your content, whether it's a frontend, an agent, or a third-party integration, the same composable interface. "All articles by this author" and "all articles tagged 'architecture'" are two filters on the same query, not two different code paths.

A query language also lets agents explore your content without knowing the schema upfront. "What types exist? What fields do they have? Show me the five most recent documents of this type." REST endpoints only answer questions you've pre-built answers for. A query language answers questions you haven't thought of yet.

GraphQL, GROQ, heck, even SQL, or something else entirely. The specific language matters less than having one.

You should be able to sit down (or hand it to an agent) and do a single query that filters by type, resolves references[2] two levels deep, and projects only the fields needed, without writing backend code or hunting for a custom endpoint. If that's not possible, every new consumer of your content starts from scratch.
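As an illustration, here's roughly what such a query looks like in GROQ (the field names are made up); GraphQL or SQL could express the same shape.

```typescript
// Filter by type, project three fields, resolve a reference two levels
// deep (author, then the author's company), all in one composable query.
const query = `
  *[_type == "article" && "architecture" in tags]{
    title,
    publishedAt,
    "author": author->{name, "company": company->name}
  }
`;
```

The point isn't this syntax; it's that "all articles by this author" is a one-token change to the filter, not a new endpoint.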

VI. Every system output is agentic developer experience

Error messages, CLI output, validation feedback, API responses, log lines. Every piece of text your system produces is a UX interaction. And in the age of agentic development, it's also an instruction to the agent working on behalf of your user.

There's a standard for API errors: RFC 9457 ("Problem Details for HTTP APIs"), which updates RFC 7807 from 2016. A type URI for stable identification, a detail field for this specific occurrence, extension members for context. Most APIs still don't implement it.

But it's not just errors. What does your CLI print when a developer runs --help? What does your dev server log on startup? What does your validation return when a required field is missing? Every one of these surfaces is a place where your system can guide both the developer and the agent. An agent hitting 400 Bad Request with no body will retry, hallucinate a fix, or give up. An agent hitting a structured error with available options and a documentation link will self-correct. Same failure, wildly different outcome. I wrote about this in more detail in my post on agentic developer experience.
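For reference, a structured error in the RFC 9457 shape might look like this. The `type`, `title`, `status`, `detail`, and `instance` members are from the spec; `allowedValues` and `docs` are illustrative extension members, which the spec explicitly permits.

```typescript
// An error an agent can act on, served with
// Content-Type: application/problem+json (per RFC 9457).
const problem = {
  type: "https://example.com/errors/invalid-field",
  title: "Invalid field value",
  status: 400,
  detail: 'Field "status" must be one of the allowed values.',
  instance: "/documents/post-123",
  allowedValues: ["draft", "published"], // extension member (illustrative)
  docs: "https://example.com/docs/document-status", // extension member
};
```

An agent reading this can pick a value from `allowedValues` and retry correctly; an agent reading an empty 400 body can only guess.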

Intentionally break something and see what happens. Does the output tell you how to fix it, and does it give enough context for an AI assistant to help debug? Then check your --help text and your startup logs while you're at it, because those are just as much a part of the developer experience as the error messages.

VII. Your content needs a history

Who changed what, when, and why. Rollback. Diff between versions. Audit trails. These aren't premium features. They're the foundation of trust between your CMS and everyone who uses it.

Content versioning matters for everyone because it's the safety net for every other operation. Schema migration went wrong? Roll back the content. Bulk update hit the wrong documents? Restore from history. Agent made a bad edit that made it to prod? See exactly what it changed and undo it. Without versioning, every destructive operation is permanent, and every automation is a risk.

Git ain’t it

"But I have markdown files in git." Sure, and git gives you file-level history. But content versioning is a different problem. Rolling back one field on one document while everything else stays live. Seeing which agent changed which field at 3pm. Branching content for a scheduled release without affecting what's published now. These are content operations, not file operations. And in practice, agents struggle to keep content consistent across scattered markdown files in ways that structured, schema-aware storage simply doesn't allow.

This is where vibe-coded CMSes show their broken seams the fastest. Versioning is boring infrastructure work. It doesn't show up in the demo. But the first time someone accidentally publishes a draft, or an agent overwrites a field it shouldn't have touched, or a migration corrupts 200 documents, you'll wish you'd built it first.

Make an edit and see if you can tell what changed, who changed it, and whether you can roll back to the previous version without losing anything else. Then try doing that programmatically, not just through the UI. If you can't, your versioning is a feature checkbox, not an actual safety net.
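What field-level rollback means, sketched in memory with illustrative shapes: every change records the previous value and its author, so one field can be reverted without touching the rest of the document.

```typescript
// Field-level history: who changed what, and the value it replaced.
type Change = { field: string; before: unknown; after: unknown; author: string };

type Versioned = { fields: Record<string, unknown>; history: Change[] };

function set(doc: Versioned, field: string, value: unknown, author: string) {
  doc.history.push({ field, before: doc.fields[field], after: value, author });
  doc.fields[field] = value;
}

function revertField(doc: Versioned, field: string) {
  // Find the most recent change to this field and undo only that.
  const last = [...doc.history].reverse().find((c) => c.field === field);
  if (last) doc.fields[field] = last.before;
}

const doc: Versioned = { fields: {}, history: [] };
set(doc, "title", "Draft title", "knut");
set(doc, "title", "Agent title", "agent-7");
set(doc, "body", "Hello", "knut");
revertField(doc, "title"); // title restored; body untouched
```

Git can't express this: a file-level revert would drag the body back in time along with the title.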

VIII. Content operations need to work at scale

Retagging 5,000 articles. Translating 200 documents. Migrating between schema versions. Backfilling a new required field across your entire content library. These are the operations that reveal the truth about your architecture.

If single-document operations work but thousand-document operations hit timeouts, rate limits, or data corruption, the system was designed for manual editing. That was fine when content operations were a human clicking through documents one at a time. It's not fine when an AI agent is processing your entire library.

AI makes bulk operations the default workflow, not the exception. A translation agent processing every document in your CMS. A metadata agent backfilling tags based on content analysis. A migration agent restructuring schemas across thousands of documents. These need to work as reliably as single-document edits, and they need to work while humans keep editing. If your bulk operations lock the system or require downtime, you've built a bottleneck into your infrastructure.

Try updating a single field across 1,000 documents and see what happens. If it works, check whether editors can keep working while the operation runs. If it times out, or if you have to take the system offline to do it, the architecture was designed for one person clicking through documents, not for the workflows that are quickly becoming the norm.
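One boring but load-bearing pattern behind reliable bulk operations is bounded batching: neither one giant request that times out, nor one request per document that trips rate limits. The batch size and the patch function below are illustrative.

```typescript
// Process a large set of documents in bounded, sequential batches.
async function bulkPatch<T>(
  ids: string[],
  patchOne: (id: string) => Promise<T>,
  batchSize = 50
): Promise<T[]> {
  const results: T[] = [];
  for (let i = 0; i < ids.length; i += batchSize) {
    const batch = ids.slice(i, i + batchSize);
    // Each batch runs concurrently; batches run one after another,
    // which keeps pressure on the API bounded and predictable.
    results.push(...(await Promise.all(batch.map(patchOne))));
  }
  return results;
}

// Usage sketch (client.patch is hypothetical):
// await bulkPatch(allIds, (id) => client.patch(id, { tag: "retagged" }), 50);
```

The real test of the CMS is what happens server-side while this runs: editors should keep editing, and a mid-run failure should leave you with a resumable position, not corrupted halves.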

IX. Type safety from content model to component

Your CMS should generate types from the content model. The developer's IDE should know that a "blogPost" has a "title" (string), an "author" (reference to "person"), and a "body" (portable text). Not any. Not Record<string, unknown>. Actual types.

This applies to query results too, not just document shapes. When you write a query that projects three fields and resolves a reference, the return type should reflect exactly that: those three fields, with the reference resolved to its actual type. If your query language and your type system don't talk to each other, you're back to casting everything to any and hoping for the best.

Without codegen, every content access is a runtime gamble. A field gets renamed in the CMS, and the frontend breaks silently. A required field becomes optional, and the component crashes on null. These bugs are invisible until production and tedious to debug. (Ask me how I know.)

Type generation from the content model closes the loop: the schema is the source of truth, the types are derived, and the compiler catches mismatches before they ship. This is table stakes for any typed language ecosystem. Your CMS should participate in it.

Try renaming a field in your content model and see if your IDE lights up with errors in the components that reference it. Then write a query that projects three fields and check whether the result type reflects exactly those three fields, before you deploy. If the answer to either is no, you're flying blind between your content layer and your frontend.
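A sketch of what that looks like in TypeScript. In a real setup the document interface would be generated from the schema rather than hand-written as it is here, and the projection type would come from the query tooling; `Pick` stands in for that derivation.

```typescript
// Types that follow the content model, so renames become compile errors.
interface Person {
  name: string;
}

interface BlogPost {
  title: string;
  author: Person; // a resolved reference, simplified for illustration
  body: string[]; // stands in for portable text blocks
}

// The result type of a two-field projection, derived rather than
// hand-written, so it can never drift from the document type:
type PostCard = Pick<BlogPost, "title" | "author">;

const card: PostCard = { title: "Hello", author: { name: "Knut" } };
// Accessing card.body would be a compile error:
// the projection never fetched it, and the types know that.
```

Rename `title` in `BlogPost` and `card` stops compiling, which is the whole point: the break happens in the IDE, not in production.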

X. Your content should be more portable than your platform

Your content will hopefully outlive your current tech stack. It will outlive your current framework, your current hosting provider, and probably your current CMS. The question is whether it can leave cleanly.

I can't fathom (well, maybe a little bit) how much money has been billed over the years just for moving content from one system to another. Very often, it was way less hard to just scrape the website than to get the content out of the underlying database.

Yes, this is one of the things that is now more approachable with AI, but storing your content in presentation formats like HTML is still a waste of time and tokens, and it makes the implementation in modern web frameworks weird and limited.

Po(r)table content in proprietary pots

This is about content portability, not platform portability. A hosted CMS can be perfectly fine. The question is whether your content is locked into a format, a schema representation, or a query pattern that only works with that one system. Can you export your content as standard JSON? Can you access it through standard protocols? Can you switch frontends without migrating content? Can you add a mobile app without duplicating your content layer?

The flip side matters too: can you bring content in? If migrating to your CMS requires a six-month data transformation project, the lock-in works both ways. Good content portability means standard formats in and out, APIs that don't require proprietary clients, and content that's structured enough to be useful outside the system that created it.
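A minimal sketch of "standard formats in and out": newline-delimited JSON (NDJSON), one plain document per line, which any system can stream, read, and write without a custom parser. Shapes are illustrative.

```typescript
// Export and import as NDJSON: no proprietary container, no parser.
type Doc = { _id: string; _type: string; [key: string]: unknown };

function exportNDJSON(docs: Doc[]): string {
  return docs.map((d) => JSON.stringify(d)).join("\n");
}

function importNDJSON(ndjson: string): Doc[] {
  return ndjson
    .split("\n")
    .filter(Boolean)
    .map((line) => JSON.parse(line));
}
```

If a round trip through this format loses information, that lost information was welded to the platform, and that's the lock-in to worry about.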

Export all your content and see if another system can read it without a custom parser. Try switching your frontend framework without touching the content layer. If a new integration protocol emerges next year, you want adding support to be a weekend project, not a six-month rebuild.

Old laws for new times

These laws aren't new. Most of them have been understood by the CMS community for a decade or more. What's new is that vibe coding makes it easy to build something that violates all ten in a weekend and looks great doing it. The hard problems don't show up in the demo. They show up when a second editor joins, when content needs to flow to a channel you didn't plan for, when an AI agent tries to operate the system programmatically, or when you need to migrate and discover your content is welded to your infrastructure.

Build a CMS if you want to. The tools have never been better for it. But if you want other people to use it, these are the problems you'll need to solve. And if you're evaluating one, these are the questions to ask. Feel free to respond with your own list.

Notes

  1. I can never decide if I like “CMSs” or “CMSes” more.
  2. A “reference” here is some kind of indexed and queryable link between two separate pieces of content.
