Your coding agent already speaks your domain language. Does your code?

Ubiquitous language, meet your new audience

Software engineers have always known, at some level, that code should use the language of the business. Nobody in 1995 was naming their database table DataStorageRectangle instead of Customer. In 2003, Eric Evans published Domain-Driven Design and gave rigor to this informal practice.

Evans called this practice “ubiquitous language”—a shared vocabulary between developers and domain experts, embedded directly in the code—and built a catalog of patterns around it. As Martin Fowler put it, the underlying idea had been around for decades; Evans’ contribution was “developing a vocabulary to talk about this approach.”

Twenty years later, many codebases still don’t apply it consistently. Too often, the cost of the discipline has seemed to outweigh the benefit.

But ubiquitous language has a new stakeholder now, and it changes the calculus: the AI writing your code.

Why coding agents care about your type names

LLMs are trained on the real world. They know what a “reservation” is, what a “patient” is, what an “invoice” is. They have deep, rich representations of these concepts drawn from millions of documents. They are not trained on your ProcessingContextOptions or DataTransformHandler.

When your codebase speaks in domain language, the AI has a massive foundation of real-world knowledge to draw on. When your codebase speaks in implementation jargon, the AI is navigating blind, pattern-matching on structural cues without any semantic grounding. Every FooBarParams is a place where you could have written SearchQuery or ShippingLabel or ClaimAssessment—names the LLM actually understands, names that connect to concepts it can reason about.

And it compounds. If the agent sees ClaimAssessment used consistently across your codebase, it’s going to generate code that treats claims like claims. If it sees AssessmentServiceParams, it’s going to generate more params bags.

So with the advent of coding agents, a funny thing might be happening with the decades-old idea of ubiquitous language: The cost-benefit is shifting in its favor.

The cost of enforcing ubiquitous language has dropped, because AI agents can do the refactoring work that humans used to find prohibitively tedious. And the penalty for not doing it has spiked, because now your most prolific code contributor is the entity most harmed by implementation jargon.

Enforcing a domain language

So how do you steer a coding agent away from junk language and toward domain language?

You could put “always use domain-meaningful type names” in your AGENTS.md, and the agent would try. But prompts are suggestions. They get weighed against everything else in the context window. The agent might follow the guidance, or it might quietly drift as the context fills up.

That’s why I still believe that well-configured linters have a central place in AI coding, for the same reasons they’ve always mattered: They’re deterministic, they’re non-negotiable, and they apply to every change regardless of who or what wrote the code. A lint rule that blocks CI isn’t a suggestion. The code doesn’t ship until it’s satisfied. And unlike a prompt, a linter doesn’t lose effectiveness over the course of a long session.

This isn’t either/or, of course. You can have a linter as a hard check and a prompt as a soft suggestion. But the linter is the backstop. The prompt is not.

A linter for junk types

I’m always on the lookout for deterministic ways to stop Claude Code from turning into a slop cannon. So when I started noticing a specific code smell in TypeScript, I wrote forbid-junk-object-types to catch it. It’s a linter based on the theory that if a TypeScript object type is only used by a single function, it doesn’t describe the domain and should be replaced by code that does.

You may have seen this pattern yourself. A coding agent hits a max-params warning:

const renderProductDetails = (
  product: Product,
  user: User,
  onNavigate: (path: string) => void,
  onAddToCart: AddToCartFn,
  subscription: Subscription | null
) => { ... }

So it “fixes” the max-params warning by bundling parameters into a one-off type:

interface ProductDetailsParams {
  product: Product
  user: User
  onNavigate: (path: string) => void
  onAddToCart: AddToCartFn
  subscription: Subscription | null
}

const renderProductDetails = (params: ProductDetailsParams) => { ... }

ProductDetailsParams doesn’t describe anything in your domain. It’s a bag of arguments with a name on it. It tells you about the function signature, not about the business.

forbid-junk-object-types detects this mechanically: it uses the TypeScript Compiler API to find object types and interfaces that are only referenced by a single function. If a type is only used in one place, it’s probably not modeling a real concept. (It allows some things automatically: exported types, types in inheritance hierarchies, and React component props—since React’s component model is the one place where single-use types are idiomatic by design.)
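To make the rule concrete, here is a simplified model of that decision logic in plain TypeScript. This is a sketch, not the real implementation: the actual linter walks the AST via the Compiler API, while this version reduces each type to a record of the facts the rule cares about, and all names here are hypothetical.

```typescript
// A simplified, hypothetical model of the linter's decision logic.
interface TypeFacts {
  name: string;
  referencingFunctions: number; // distinct functions that use this type
  isExported: boolean;
  isInInheritanceHierarchy: boolean;
  isReactComponentProps: boolean;
}

const isJunkType = (t: TypeFacts): boolean => {
  // Allowlist: exported types, inheritance members, and React props are
  // idiomatic even when used in a single place.
  if (t.isExported || t.isInInheritanceHierarchy || t.isReactComponentProps) {
    return false;
  }
  // A type used by at most one function models no shared concept.
  // (Counting zero-use types as junk is this sketch's choice.)
  return t.referencingFunctions <= 1;
};

const paramsBag: TypeFacts = {
  name: "ProductDetailsParams",
  referencingFunctions: 1,
  isExported: false,
  isInInheritanceHierarchy: false,
  isReactComponentProps: false,
};
console.log(isJunkType(paramsBag)); // true: flagged
```

The interesting part is the order of the checks: the allowlist runs first, so a single-use type never even reaches the junk test if the codebase has a legitimate structural reason for it.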

What a fix looks like

When this linter fires, you can’t just slap a quick fix on it. You have to actually think. Is this function doing too much? Should this type be shared across boundaries? Is there a domain concept hiding in here?

There isn’t one single fix, but here’s one I’ve seen in the wild. Say the linter flags this type as single-use:

interface ProcessingContext {
  orderIds: string[];
  orders: Order[];           // must stay in sync with orderIds
  statuses: Map<string, Status>;
}

Three parallel data structures that have to stay synchronized—a classic source of bugs. The type exists to bundle them for one function, but it’s not modeling anything real. Here’s one way to refactor it:

interface TrackedOrder {
  orderId: string;
  order: Order;
  status: Status;
}
// Use Map<string, TrackedOrder> instead of ProcessingContext

The junk type ProcessingContext is gone, replaced by a stronger emphasis on the domain concept of orders. Now, maybe you should just add status to Order—there are deeper design conversations that could point one way or the other. But whatever ends up in the code, it’s more likely to be a concept that your business actually cares about.
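To show how the refactored shape pays off in practice, here is a hedged sketch of building that map, with Order and Status reduced to minimal stand-ins (every name below is assumed for illustration):

```typescript
// Minimal stand-ins for the real domain types (assumed for this sketch).
interface Order {
  id: string;
  total: number;
}
type Status = "pending" | "shipped";

interface TrackedOrder {
  orderId: string;
  order: Order;
  status: Status;
}

// One construction site replaces three parallel structures that had to
// stay in sync by hand.
const trackOrders = (
  orders: Order[],
  statusOf: (order: Order) => Status
): Map<string, TrackedOrder> =>
  new Map(
    orders.map((order): [string, TrackedOrder] => [
      order.id,
      { orderId: order.id, order, status: statusOf(order) },
    ])
  );

const tracked = trackOrders(
  [
    { id: "o-1", total: 40 },
    { id: "o-2", total: 15 },
  ],
  (order) => (order.total > 20 ? "shipped" : "pending")
);
console.log(tracked.get("o-1")?.status); // "shipped"
```

Because each TrackedOrder is assembled in one place, there is no longer any way for an order's id, data, and status to drift out of agreement.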

A linter can start a conversation

I started using forbid-junk-object-types in a few codebases, and it detected bona fide design issues right away. But fixing them was harder. Often, Claude Code would spin its wheels and eventually stop. Which makes sense: Most linters aren’t designed to kick off large refactors, so it’s reasonable for the agent to think “I can’t satisfy this new linter without touching ten files, so I’m going to just give up.”

So I added a fixing-violations.md file alongside the linter—a refactoring playbook that the agent can follow when a violation fires. It sets boundaries (never just rename the type, never create a meaningless wrapper, never inline a named type to hide it) and provides a prioritized workflow: try deleting unused code first, then inlining functions if they have single callers, then consolidating types across function boundaries, then deeper restructuring. It also clarifies the intention of the linter: Anybody who adds it to a codebase wants regular refactoring to keep the language focused on the domain.
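For a feel of the shape of such a file, here is an illustrative sketch reconstructed from the description above—not the actual contents of fixing-violations.md:

```markdown
# Fixing forbid-junk-object-types violations

Boundaries: never just rename the type, never create a meaningless
wrapper, never inline a named type to hide it.

Workflow, in priority order:
1. Delete the code if it is unused.
2. Inline the function if it has a single caller.
3. Consolidate the type with types used across function boundaries.
4. Restructure toward a real domain concept.

Run the tests after every step. The goal is a vocabulary focused on
the domain, not merely a green lint run.
```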

(Funny how I ended up with a tiny, specific version of Martin Fowler’s Refactoring: small mechanical steps, behavior verified by tests. This is exactly the kind of work agents are good at.)

This did the trick. Without the playbook, Claude Code tries a grab-bag of superficial fixes and stops in frustration. With the playbook, it follows a proper refactoring sequence and finds a solution. The linter identifies the smell, and the playbook gives the agent a path to a real fix.

Getting to quality, eventually

Just to be clear, the goal of this linter is not code that is magically designed as well as if you hired Martin Fowler to write it himself. Instead, it provides guardrails that define the outer bounds of acceptable design for AI code being written at speed.

Sometimes, using this with Claude Code feels like shooting a pinball through a series of bumpers: the ball bounces off max-params and forbid-junk-object-types, inlines functions, finds a different way to extract, pursues a dead end or two, and eventually hits a target labeled “Domain-Driven Design” for a thousand points. Don’t watch it too closely if you’re impatient.

But in my experience across several codebases, this linter makes AI-generated TypeScript easier for me as a human to read, review, and understand. It means I can stay in the loop as a reviewer even as the volume of AI-generated code increases. It reduces the odds that the codebase will drift into a thicket of meaningless types.

That’s not artisanal code craftsmanship. It’s a practical floor on quality, which is a problem lots of engineering teams are trying to solve right now.

Tightening the ratchet with real-world language

Over time, linters create a positive feedback loop with AI coding. Replace any with string and the LLM has better type information next time. Fix a floating promise and the LLM is less likely to generate dangling async calls. The codebase gets better, the AI’s context gets better, the AI’s output gets better.
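The first of those two fixes, as a concrete before-and-after fragment (illustrative only, not from any particular codebase):

```typescript
// Before: `any` gives the type checker, and the LLM reading this code
// in a later session, nothing to reason about.
// const parsePrice = (raw: any) => Number(raw);

// After: the kind of fix a no-explicit-any rule ratchets in.
const parsePrice = (raw: string): number => Number(raw);

console.log(parsePrice("19.99")); // 19.99
```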

But most linters deposit structural information into the codebase—this is a string, this is async, this returns a number. The LLM doesn’t have a deeper understanding of string than your type checker does.

When a coding agent works with forbid-junk-object-types, it deposits something different: semantic information. And semantic information is where LLMs have the most to draw on, because that’s what their training data is rich in. ESLint knows nothing about orders. The LLM knows an enormous amount about orders.

So I’m optimistic that this ratchet will help my AI-generated codebases over time. It doesn’t just improve code quality in general. It improves it in the exact dimension where my AI collaborator benefits most.

This remains an experiment. I don’t know of anyone else pursuing a path like this—using a linter to push coding agents toward domain-driven design, with a refactoring guide to show them the way. I’ve been using it across several codebases, including some in production, and I believe it’s leading to better outcomes. I’m writing it up now because I’d love to hear if it’s useful for other people too.

A linter that says “format this correctly” is table stakes. A linter that says “think about your domain” can be a conversation with every agent that touches your codebase.

forbid-junk-object-types on GitHub · forbid-junk-object-types on npm