The Reassuring Mammal
The illusion of the human at the centre and the invisible work AI is quietly absorbing
There is a quiet fact circulating inside organisations that rarely gets named directly: generative AI (especially large language models) tends to work best when it is not asked to decide anything. Not when it has to close an issue, choose between trade-offs, or produce an irreversible outcome. It works when it can remain open. When it can respond. When it can absorb. This is not a dramatic claim about autonomy or replacement. It is simply visible in how these systems are actually used, once you look past how we prefer to describe them.
The usage data, including data published by the companies building the models, points in the same direction. A considerable portion of interaction is not sharply task-oriented. There is no tightly framed question, no clearly verifiable output, often no real end point. Anthropic has described many exchanges as exploratory: conversations that seek continuity rather than resolution. Google DeepMind has observed similar patterns in knowledge-work environments, referring explicitly to emotional reliance patterns: moments when the model is kept open not to complete a task but to help regulate cognitive and emotional load during uncertainty. The system is not necessarily being consulted for an answer. It is being maintained as a steady presence while someone thinks.
What recurs, then, is not the use of AI as an operational accelerator, but as a conversational stabiliser. It does not necessarily speed up a process, optimise a pipeline, or unlock a stuck decision. More often it accompanies the decision while it is not being made. It fills the interval between hypotheses. It gives provisional linguistic shape to thoughts that are not yet ready to be stated elsewhere. In environments where uncertainty is not episodic but continuous, that function has practical value, sometimes more than efficiency.
To make sense of this, it helps to question whether “productivity” is still the right word for a great deal of senior work. The term implies defined tasks, clear objectives, measurable outputs. Yet much managerial and cognitive labour now consists of working in conditions that are structurally unclear: waiting for external variables to settle, holding together people who disagree but cannot afford open conflict, naming emotional states that have not stabilised sufficiently to be confronted directly. In such situations the primary requirement is not speed. It is continuity. It is preventing the room from fracturing.
Language models are not inherently superior at analytical reasoning. They still require well-posed problems and explicit constraints. Where a task is deterministic, their advantage is limited and often overstated. But the relevant comparison is rarely taking place there. It is taking place in the informal and largely unaccounted work that does not appear in performance metrics but quietly sustains organisations: the need to feel heard, to feel less exposed, to test half-formed ideas without consequence.
This is where the model finds its natural position. Not as an authority, but as an interlocutor that does not insist on closure. It responds without imposing urgency. It produces form without demanding commitment. It does not create visible asymmetries or reputational risk. It offers continuity without cost. That absence of consequence is not incidental; it is precisely what makes it usable.
The uncomfortable part emerges when this is placed alongside what we commonly label leadership. A considerable portion of everyday leadership (not the dramatic, crisis-facing variety but the diffuse, daily kind) consists less in decisive acts and more in managing atmosphere. Maintaining tone. Framing uncertainty. Reassuring without overpromising. Holding a steady narrative while variables remain unresolved. This work is not trivial; it holds organisations together. But it is not the same as making hard calls under exposure. It does not require signing one’s name under risk or explicitly naming trade-offs that will produce losers.
Much of it, in other words, is linguistic continuity under pressure.
For years this emotional and atmospheric labour has been embedded in coordination roles without being formally recognised as such. Meetings that do not exist primarily to decide, but to stabilise. Feedback that does not redirect strategy, but maintains the sense of coherence. Repeated phrases that regulate temperature. It is work, and it is tiring work, because it exposes the individual performing it to scrutiny and social consequence.
When a conversational system absorbs part of this function, the distinction is not that it “understands” better. The distinction is that it carries no social memory. Speaking to a model does not create obligation. It does not generate reputational exposure. It does not alter status relations. It is a relationship without consequence, and therefore one that can be sustained indefinitely.
At this point, a reassuring story tends to surface: the human remains at the centre, the AI assists. The human decides, the AI suggests. It is an intuitively satisfying framing, preserving hierarchy while allowing adoption. Yet it risks misdescribing what is happening. The shift is not primarily about assistance at the point of decision. It is about the redistribution of the invisible labour that surrounds decision: the thinking aloud, the clarifying, the emotional buffering, the containment of ambiguity.
To say that the human remains at the centre presumes that there is still a coherent centre. Contemporary knowledge work is already fragmented, distributed and continuously renegotiated. The model does not displace a stable core. It enters a system in which the core has long been diffused, and it settles into the interstices. Formal responsibility remains human and visible. But the everyday terrain on which decisions mature is increasingly mediated elsewhere.
There is no need for melodrama here. Nor for optimism. What is taking place resembles a functional reallocation more than a coup. The most stable relationship inside many organisations is no longer strictly interpersonal; it increasingly runs between individuals and language systems that help them manage uncertainty without consequence.
That arrangement works. Which may be precisely why it is unlikely to be examined too closely.
Human & Machine studies how judgement fails under complexity. This piece is part of that work.