The Quiet Work
The structural work that holds organizations together. And what happens when AI removes it.
Every organization has people doing work the system can’t see.
Not their job title. Not what gets measured. The other work. The rerouting, the translating, the remembering why a decision was made three years ago when the documentation doesn’t say and the people who made it have moved on. The judgment calls that keep a handoff from failing. The quiet compensation for a system that was never designed for the speed it’s operating at.
I’ve been calling this the quiet work. The people who do it rarely name it that way. They just know that if they stop, something breaks. And that when it works, no one notices.
This is structural work. It doesn’t appear on dashboards. It doesn’t show up in capacity models. It lives in the people who carry it, and it disappears when they’re no longer in the loop. Not because they left, but because the system stopped asking them.
Organizations have always run this way. Not because they chose to, but because this is the physics of systems at scale. Complexity generates ambiguity faster than any organization can resolve it. The gap between how the process is documented and how the work actually moves is absorbed by humans. Every day. In every function. Without acknowledgment, because acknowledging it would mean acknowledging that the system doesn’t work the way the org chart says it does.
This was sustainable when the speed of the organization was governed by the speed of the people inside it. The structural work set the pace. Judgment took time. Memory required asking the person who was there. Translation happened in hallways and one-on-ones. The system moved at human speed because humans were load-bearing.
Now AI enters. And it doesn’t know any of that.
AI operates at the speed the documented process claims to move. Not the speed it actually moves. It reads the workflow as designed and executes it as written. It doesn’t know that step four only works because someone calls procurement directly instead of submitting through the portal. It doesn’t know that the escalation path on paper hasn’t been used in two years because the real path runs through a Slack channel and a specific VP who answers after hours. It doesn’t know that the retention logic depends on a judgment call that three people in the org can make and none of them were consulted when the model was trained.
The AI isn’t wrong. The process was never right. It just had people in it who made it work anyway.
These are the same people who train new hires by saying “don’t follow the doc, here’s how it actually works.” Who carry the institutional memory that never made it into a system of record. Who see the failure before the dashboard does.
They’ve been doing structural work. Holding truth when the system distorts it. Exercising authority the org chart never granted them. Maintaining continuity across decisions, handoffs, and leadership changes that would otherwise lose their thread. They do this in the spaces where the organization’s design falls short. Every organization has these spaces. Most have more than they realize.
When you automate a process that depends on this work, you don’t get efficiency. You get exposure. The dysfunction that was always there surfaces: the authority gaps, the broken handoffs, the decisions nobody remembers making. Not because AI created it, but because the person who was absorbing it is no longer in the loop.
This is not a technology problem. It’s not a change management problem. It’s a structural visibility problem.
Most organizations making AI deployment decisions are evaluating processes. What can be automated, what can be augmented, where there’s throughput to gain. Those are reasonable questions. But they assume that the process, as documented, is the system. It isn’t. The system is the process plus every human judgment and adaptation that makes it functional. And if you can’t see that work, you can’t account for what happens when it’s removed.
The diagnostic question isn’t “is this process automatable?” It’s “what structural work is hidden inside this process that automation will expose?”
Answering that requires a different instrument. Not performance metrics. Not process maps. Something that surfaces where authority doesn’t match responsibility, where decisions have lost their rationale, where the distance between what the organization says and how it actually operates has grown so wide that only people are holding it together: specific people, doing quiet work.
That’s what Decision & Responsibility Infrastructure™ is built to see. Not the process. The structure underneath it. The work that was always there, carried by people who were never given language for what they were doing, and rarely given credit.
The organizations that navigate AI adoption well won’t be the ones with the best models or the fastest deployment timelines. They’ll be the ones that understood what their people were actually doing before they automated it away.
The quiet work was never a bonus. It was the infrastructure.
-JG