I came across a Medium post titled “AI Will Replace 70% of Design System Work.” The premise is that most design system work (documentation, component building, token management, accessibility audits) is “structurally automatable,” and that the real value lies in governance. The author argues that teams need to move “upward from execution to orchestration” or risk becoming obsolete.

Some of that is true. AI can help generate docs. It can lint tokens. It can draft release notes. I use AI tooling in my own design system work, and it’s useful for certain tasks. But the article’s argument falls apart because it misunderstands what design system teams actually do.

The 70% number is made up

Let’s start with the headline claim: that 70% of design system work is replaceable by AI. Where does that number come from? The article never says. There’s no methodology, no survey, no data. It’s a confident assertion dressed up as a finding. That’s not analysis. That’s vibes.

And it matters because numbers like that end up in executive slide decks. They get used to justify headcount decisions. When you throw around “70%” with no backing, you’re not starting a conversation; you’re handing leadership a reason to cut your team.

The whole middle is missing

The article describes two modes of design system work: production (automatable) and governance (not automatable). That’s a clean framework, but it leaves out where most of the actual work happens.

Support. Maintenance. The daily human work of keeping a system alive and useful.

Answering questions in Slack. Pairing with a product engineer who’s trying to use your component in a context you didn’t anticipate. Triaging a bug that only surfaces in one team’s specific tech-stack setup. Writing a migration guide that accounts for six different integration patterns across your org. Helping a designer understand why the system works a certain way so they can make better decisions in their product.

This is humans supporting humans. It’s the thing that actually drives adoption, and it doesn’t show up in the article at all. You can’t automate a relationship. You can’t deploy a governance framework and expect people to trust your system. Trust is built through responsiveness. When someone files an issue and gets a thoughtful reply the same day, that’s what earns buy-in. No contribution policy document does that.

Fast and wrong is expensive

The article mentions that AI-generated output “was not perfect, but it was fast.” That sentence is doing a lot of heavy lifting. Fast and wrong is expensive. Someone still needs to review every AI-generated component API and token structure, and that someone needs deep expertise to know what “right” looks like.

You’re not eliminating the expert. You’re just changing what they do with their hands. The review work that remains requires the same knowledge, maybe more, because now you’re also debugging the AI’s confident mistakes alongside your own.

The feedback loop doesn’t automate

Design systems are living things. You ship a component, teams adopt it, they use it in ways you didn’t expect, they file bugs, they request variants, you learn from that and iterate. That feedback loop is the engine of a healthy system.

AI doesn’t have relationships with your consumers. It doesn’t know that one team keeps misusing your modal because their product has a weird user flow. It doesn’t pick up on the pattern that three different teams have asked for the same thing in three different ways. That’s institutional knowledge built through support work, through being present, through paying attention to how people actually use what you’ve built.

Governance without execution is just meetings

The article elevates governance as the strategic, non-automatable layer. Governance matters; I’m not arguing that it doesn’t. But governance without strong execution underneath it is just Confluence pages and meeting invites. You earn the right to govern through the quality and reliability of what you ship. The components have to work. The documentation has to be accurate. The releases have to not break things. That foundation isn’t maintenance you automate away. It’s the thing that makes governance credible.

The real risk

The article says the true risk is that “some design systems never evolved beyond being structured UI libraries.” I’d argue the bigger risk is articles like this giving leadership permission to gut the teams that make systems work. If a VP reads “AI will replace 70% of this function” and takes it at face value, the people who get cut won’t be the governance strategists. They’ll be the engineers and designers who do the daily work of building, supporting, and maintaining the system.

Design systems aren’t factories, and they aren’t governance frameworks. They’re a service. The value lives in the ongoing, responsive, human relationship between the system team and the people who depend on it. AI is a useful tool in that work. It’s not a replacement for the people doing it.
