Seven musings on AI bureaucracy
My working AI productivity theory (seeking similar takes): AI’s bottleneck will be bureaucracy. As tasks automate, throughput gets paced by review queues. “Fewer people, faster” becomes more waiting & loophole finding. Jobs concentrated in compliance, risk, review, audit, legal
— Matt Grossmann (@mattgrossmann.bsky.social) September 8, 2025 at 11:53 AM
In modern orgs, the binding constraint is the administrative stack. Multi-layer queues with sign-offs + documentation. They expand when uncertainty/reputation risk rises—the AI environment. AI shifts the marginal human hour into unusually-labor-using governance.
— Matt Grossmann (@mattgrossmann.bsky.social) September 8, 2025 at 11:53 AM
It’s a novel idea! Some random thoughts, probably wrong but hopefully amusing:
-
Some companies will compare their administrative bill to the cost of a company-destroying mistake and consider it a cheap expense. Smarter companies will realize that speed, not cost, is the killer here, as the bureaucracy slows down the entire company no matter how cheap it is.
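To see why speed is the killer, here's a toy single-review-queue sketch (my made-up numbers and standard M/M/1 queueing math, nothing from the thread): as the queue approaches saturation, the wait per item blows up, and cheap reviewers don't make it blow up any slower.

```python
# Toy M/M/1 queue: average time a task spends in the review pipeline.
# All rates are invented for illustration; the shape of the curve is the point.

def avg_time_in_review(arrival_rate: float, service_rate: float) -> float:
    """Mean time in system for an M/M/1 queue: 1 / (mu - lambda)."""
    if arrival_rate >= service_rate:
        return float("inf")  # backlog grows without bound
    return 1.0 / (service_rate - arrival_rate)

service_rate = 10.0  # items the review team can clear per day
for arrival_rate in (5.0, 8.0, 9.0, 9.5, 9.9):
    utilization = arrival_rate / service_rate
    days = avg_time_in_review(arrival_rate, service_rate)
    print(f"utilization {utilization:.0%}: {days:.1f} days per item")

# utilization 50%: 0.2 days per item
# utilization 99%: 10.0 days per item
```

AI pushing more finished work into the same approval queue is just a higher arrival rate against fixed review capacity, and the delay, not the reviewers' salaries, is what the rest of the company pays for.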
-
Obviously there are AI companies trying to automate approval activities today: legal review, code review, H.R., public relations, etc. But as with all automation, we’d expect them to leave the hard stuff for last.
-
It’s darkly hilarious to think about these problems in the midst of *waves hands at everything*, because training a survival algorithm requires the social environment to stay roughly constant, and like, have you read the news? What’s the best way for a retailer to plan tariff compliance next year? If you’re training a reputational model, can you predict the financially optimal amount of wokeness or anti-wokeness?
-
It’s hard to train a model to look for high-impact tail risks. The data set for “regulatory or marketing fail that destroys a company” is fairly small. And you can try using other signals and correlating them to your tail risk, but that might never work. Yeah, maybe historical data around corporate reputation helps you predict PR disasters. But maybe your CEO is going to say the N-word on an internal conference call tomorrow.
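To put rough numbers on how little a handful of disasters tells you (my toy arithmetic, not anyone's actual risk model), the "rule of three" bound for rare events says that even a spotless record leaves the estimate wide open:

```python
# Rough illustration: after watching n "company-years" with zero
# company-destroying disasters, how tight is your estimate of the
# underlying disaster rate?  Rule of three: the 95% upper bound on the
# per-trial rate after 0 events in n independent trials is roughly 3/n.

def rule_of_three_upper_bound(n_trials: int) -> float:
    """Approximate 95% upper confidence bound on a rate after 0 events in n trials."""
    return 3.0 / n_trials

for n in (20, 100, 1_000):
    bound = rule_of_three_upper_bound(n)
    print(f"{n} company-years, zero disasters: rate could still be ~{bound:.1%} per year")

# 20 company-years, zero disasters: rate could still be ~15.0% per year
# 1000 company-years, zero disasters: rate could still be ~0.3% per year
```

And that's the optimistic case where the world holds still; the previous musing is about why it won't.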
-
Regarding regulation: A truly pro-AI government would force agencies to issue way more substantive preliminary guidance as inputs to building compliance models. Of course, this sort of transparency 1) requires spending more tax dollars on unpopular government officials and 2) likely undercuts the private compliance professionals who build careers and buy second homes off of the opaqueness of government action. So don’t hold your breath.
-
You wouldn’t expect anybody to optimize their compliance spend down to the last dollar. Sometimes you hit a tail risk precisely because you overfit your model, so maybe you give back some of your savings in the form of redundancy. If you can make internal auditors 10x more productive with quirky AI tools, maybe it pays to hire 3 isolated teams and tell them “use whatever tools you want, but they can’t be the same as the others”.
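The arithmetic behind the three-teams idea, with invented miss rates: independent reviewers multiply their blind spots away, while teams sharing the same tool share the same blind spot.

```python
# Toy model of redundant audit teams.  Probabilities are made up;
# the point is the gap between independent and correlated reviewers.

def miss_prob_independent(per_team_miss: float, n_teams: int) -> float:
    """Chance every team misses the same issue, assuming independent errors."""
    return per_team_miss ** n_teams

per_team_miss = 0.10  # each team misses a given tail risk 10% of the time
for n_teams in (1, 2, 3):
    p = miss_prob_independent(per_team_miss, n_teams)
    print(f"{n_teams} independent team(s): {p:.1%} chance the issue slips through")

# 1 independent team(s): 10.0% chance the issue slips through
# 3 independent team(s): 0.1% chance the issue slips through
# If all three teams run the same AI tool with the same blind spot, their
# errors are perfectly correlated and the combined miss rate stays at 10%.
```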
-
Another way to reduce the pressure on administration: Much sharper accountability. Maybe in the future, AI-pilled employees are treated like CEOs or fund managers no matter what the underlying domain is. We’ll have 10% of the software engineers, they’ll each have a base pay of $2 million, and they’ll be fired after their first big mistake. Yeah, you wanted more leverage, buddy. And you got it good and hard.