<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Human and Machine]]></title><description><![CDATA[Human & Machine studies how judgement fails under complexity, when systems begin to decide while people still believe they are in control.]]></description><link>https://www.humanandmachine.com</link><image><url>https://substackcdn.com/image/fetch/$s_!56X4!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3d44a7e-db47-493b-a94b-d43972d58185_422x422.png</url><title>Human and Machine</title><link>https://www.humanandmachine.com</link></image><generator>Substack</generator><lastBuildDate>Wed, 13 May 2026 09:24:46 GMT</lastBuildDate><atom:link href="https://www.humanandmachine.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Human and Machine]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[hello@humanandmachine.com]]></webMaster><itunes:owner><itunes:email><![CDATA[hello@humanandmachine.com]]></itunes:email><itunes:name><![CDATA[Dario D’Aprile]]></itunes:name></itunes:owner><itunes:author><![CDATA[Dario D’Aprile]]></itunes:author><googleplay:owner><![CDATA[hello@humanandmachine.com]]></googleplay:owner><googleplay:email><![CDATA[hello@humanandmachine.com]]></googleplay:email><googleplay:author><![CDATA[Dario D’Aprile]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Straight talk and the theatre of damage]]></title><description><![CDATA[Why the announcement of honesty is where honesty often ends]]></description><link>https://www.humanandmachine.com/p/straight-talk-and-the-theatre-of</link><guid 
isPermaLink="false">https://www.humanandmachine.com/p/straight-talk-and-the-theatre-of</guid><dc:creator><![CDATA[Dario D’Aprile]]></dc:creator><pubDate>Thu, 16 Apr 2026 17:14:10 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ir7a!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd15c645f-79e9-49ed-bce5-1a4c7fec4437_1200x981.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ir7a!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd15c645f-79e9-49ed-bce5-1a4c7fec4437_1200x981.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ir7a!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd15c645f-79e9-49ed-bce5-1a4c7fec4437_1200x981.webp 424w, https://substackcdn.com/image/fetch/$s_!ir7a!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd15c645f-79e9-49ed-bce5-1a4c7fec4437_1200x981.webp 848w, https://substackcdn.com/image/fetch/$s_!ir7a!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd15c645f-79e9-49ed-bce5-1a4c7fec4437_1200x981.webp 1272w, https://substackcdn.com/image/fetch/$s_!ir7a!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd15c645f-79e9-49ed-bce5-1a4c7fec4437_1200x981.webp 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!ir7a!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd15c645f-79e9-49ed-bce5-1a4c7fec4437_1200x981.webp" width="1200" height="981" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d15c645f-79e9-49ed-bce5-1a4c7fec4437_1200x981.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:981,&quot;width&quot;:1200,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ir7a!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd15c645f-79e9-49ed-bce5-1a4c7fec4437_1200x981.webp 424w, https://substackcdn.com/image/fetch/$s_!ir7a!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd15c645f-79e9-49ed-bce5-1a4c7fec4437_1200x981.webp 848w, https://substackcdn.com/image/fetch/$s_!ir7a!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd15c645f-79e9-49ed-bce5-1a4c7fec4437_1200x981.webp 1272w, https://substackcdn.com/image/fetch/$s_!ir7a!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd15c645f-79e9-49ed-bce5-1a4c7fec4437_1200x981.webp 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Martin Parr</figcaption></figure></div><p>A leader opens a meeting by saying they want to be very clear. The phrase arrives before the content does, carrying its own small authority. The room orients; notebooks stop moving; the person who was about to raise a concern adjusts the concern before they raise it. Nothing has been said yet about the subject at hand. A licence has been issued, which is what the phrase was there to do.</p><p>Most people who reach for this phrase mean something real by it. They are not, in their own account, performing aggression.
They believe the organisation has gone soft, that meetings have become theatres of hedging where real decisions require someone willing to absorb the discomfort of saying what others will not. The belief is not empty. Anyone who has sat in a room where three people in succession decline to commit knows the cost of that softness. The phrase &#8220;let me be very clear&#8221; carries the residue of that frustration. It is often spoken by people who have at some earlier point in their career paid a price for being too diplomatic.</p><p>That is the part worth sitting with before anything else. The impulse is not fraudulent. It names a real problem; it refuses the evasion the room seems to be settling into. Somewhere inside the performance is a judgement that the group has exceeded its ambiguity budget, that further hedging now costs more than plain speech. That judgement is frequently correct.</p><p>It is also, in the same breath, the point where the mechanism begins to slip.</p><p>The word &#8220;straight&#8221; flatters the speaker more than it describes the speech. It suggests that perception has travelled into language without distortion; that what the room is about to hear is the world itself, unmediated. In practice the phrase is most often reached for at the moment mediation has become hardest. The speaker is tired; a deadline is pressing; a previous conversation did not land; the proposal on the table implicates the speaker in a way the speaker is not yet willing to examine. The declaration of clarity functions as an exemption from the slower work that precision would require.</p><p>There is a physiological detail that matters here. From inside the body conviction and judgement produce the same signals; the narrowing of attention that accompanies clear thought looks identical to the narrowing of attention that accompanies pressure. This is why the counterfeit is so durable. It does not feel like a counterfeit to the person producing it. 
It feels like certainty, so it becomes theatrically persuasive. The room reads the force as competence; the compliance that follows is then treated, retroactively, as evidence that the force was warranted.</p><h3>Where the word begins to drift</h3><p>Over time the vocabulary shifts under the pressure of repetition. Honesty, in such environments, begins to mean something closer to willingness-to-inflict; candour starts to refer to the ability to say a thing that will make someone flinch. The words keep their prestige; they quietly lose their referent. The drift is rarely noticed by those doing the speaking. It is noticed, instantly, by those on the receiving end, who learn to encode the new meanings without naming them. They begin bringing conclusions rather than the reasoning behind them. They stop saying they do not know. The organisation becomes quieter in ways that feel like progress to the speaker at the head of the table.</p><p>The cost sits in what the room stops saying. A risk that should have surfaced in minute eleven gets carried home instead. A dependency three days from blocking a launch gets absorbed into someone&#8217;s evening. The meeting ends on time; the follow-up is scheduled; the system continues to function. The speaker leaves with the feeling of having moved the work forward. The work has not moved forward. It has relocated into private calculations the speaker will never see.</p><p>Consider two openings of the same meeting. A product team has presented a roadmap with numbers the room knows to be optimistic. In the first, a senior voice interrupts on the third slide to say this is wishful thinking, that the team has been here before with the same slippage pattern. The room goes still; the presenter nods tightly. A follow-up is scheduled. The presenter spends the next forty-eight hours rebuilding the numbers in ways that preserve their original meaning while defending them against the objection. 
The risk stays in the plan; it is now better camouflaged.</p><p>In the second, the same senior voice waits until the end of the presentation. The question is about the named dependencies; specifically which has the longest lead time; specifically what the plan is if it slips. The room stills in a different way. The presenter either has the answer or does not. If the answer is there, the work advances; if not, the absence becomes visible without anyone needing to perform disappointment. The dependency surfaces; the contingency gets discussed. The credibility of the presenter is not consumed in the process, which means the revision that follows is more likely to be honest than defensive.</p><p>What differs between the two is small. The criticism stays attached to the plan rather than sliding onto the person who made it. The force is similar; its direction is not. That small difference moves almost everything downstream, although nothing in the second exchange is soft; the challenge is direct and the implication obvious.</p><h3>What the announcement does</h3><p>The phrase &#8220;let me be very clear&#8221; does a particular kind of work inside the speaker as much as inside the room. It functions as pre-authorisation. Before the sentence that follows is examined, before the listener has had a chance to register what is coming, a verdict has already been rendered on the speech itself: it is already understood to be honest; already understood to be brave. This pre-authorisation is the mechanism&#8217;s most protected feature because it operates below the level at which the speaker can question it. The speech is pre-cleared for release. Anything challenging it will look like an inability to tolerate honesty rather than an engagement with the content.</p><p>Counterfeit straight talk produces a clean social signal; it leaves behind muddy information. 
The real version produces clean information; it leaves the social architecture intact enough that the information can be used. The difference is not about volume or tone. It is about whether the criticism stays attached to the object or slides onto the person; whether judgement is exercised before the language arrives or only after. The announcement of honesty is the moment honesty is most often suspended, because the phrase does the moral work the speaker no longer has to do.</p><p>Workplaces that valorise bringing the whole self to work intensify the confusion. Under that logic any restraint starts to look like politics; abrasion takes on the prestige of truth. The leader who says they are just being real is treated as courageous. Those who manage their delivery are suspected of managing their meaning. The culture learns that emotional governance is optional, that the only fully trustworthy speaker is the one willing to inflict.</p><p>None of this is new, which is part of why it persists. The confusion has enough history that most people have stopped hearing the contradiction inside the words. The phrase stays in rotation because it works. It produces the behavioural compression its user wants; it transfers the cost of the conversation onto the people who were already paying it. The question the phrase answers, quietly, is closer to how to leave the meeting feeling that one was the adult in the room.</p><p>That the phrase keeps arriving, meeting after meeting, career after career, is not evidence that it is ineffective. It is evidence that it is doing exactly what it is designed to do.</p><div><hr></div><p><strong>Human &amp; Machine studies how judgement fails under complexity. This piece is part of that work.</strong><br><br>Does this pattern feel familiar?
You can explore it further through the <a href="https://www.humanandmachine.com/p/decision-audit">Decision Audit</a>.</p>]]></content:encoded></item><item><title><![CDATA[The cost of smooth misunderstanding]]></title><description><![CDATA[Why conversations preserve alignment while reducing the accuracy of what is being exchanged]]></description><link>https://www.humanandmachine.com/p/understanding-and-the-loss-of-fidelity</link><guid isPermaLink="false">https://www.humanandmachine.com/p/understanding-and-the-loss-of-fidelity</guid><dc:creator><![CDATA[Dario D’Aprile]]></dc:creator><pubDate>Fri, 10 Apr 2026 06:15:49 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!neBr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb91eb9b-9874-4043-b325-c95b3093990a_1772x1163.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!neBr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb91eb9b-9874-4043-b325-c95b3093990a_1772x1163.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!neBr!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb91eb9b-9874-4043-b325-c95b3093990a_1772x1163.jpeg 424w, https://substackcdn.com/image/fetch/$s_!neBr!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb91eb9b-9874-4043-b325-c95b3093990a_1772x1163.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!neBr!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb91eb9b-9874-4043-b325-c95b3093990a_1772x1163.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!neBr!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb91eb9b-9874-4043-b325-c95b3093990a_1772x1163.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!neBr!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb91eb9b-9874-4043-b325-c95b3093990a_1772x1163.jpeg" width="1200" height="787.9120879120879" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bb91eb9b-9874-4043-b325-c95b3093990a_1772x1163.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:956,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!neBr!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb91eb9b-9874-4043-b325-c95b3093990a_1772x1163.jpeg 424w, https://substackcdn.com/image/fetch/$s_!neBr!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb91eb9b-9874-4043-b325-c95b3093990a_1772x1163.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!neBr!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb91eb9b-9874-4043-b325-c95b3093990a_1772x1163.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!neBr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbb91eb9b-9874-4043-b325-c95b3093990a_1772x1163.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Luigi Ghirri in Bologna, 1973</figcaption></figure></div><p>In close interactions, what is commonly described
as understanding does not function as a neutral act of recognition. It operates as a way of regulating the exchange. What is expressed does not remain in its original form for long. It is adjusted, often subtly and without explicit intention, so that it can be received without requiring a change in the structure of the interaction.</p><p>The movement is difficult to detect from within the conversation itself. One person speaks at length, sometimes returning to the same point, not because it is unclear but because it has not yet settled anywhere. The other signals attention in familiar ways, maintaining presence and allowing space. The response arrives without interruption and carries the tone of alignment, yet it introduces a slight redirection. The original experience is not rejected, but it is reshaped into something that can be engaged with more easily.</p><p>What is often named as empathy in these moments is less a sharing of internal state than a form of modulation.</p><p>The experience being described is received and almost immediately brought within a range that the interaction can accommodate. Intensity is reduced to a level that does not demand escalation. Ambiguity is clarified so that it does not suspend the conversation. Persistence is softened so that it does not create pressure for a response that is not readily available. None of this is typically deliberate, and it rarely presents itself as intervention. It appears as care.</p><p>Understanding in this sense is not simply the act of grasping what is being said. It is a transformation.</p><p>Something that may be uneven, excessive, or not yet fully articulated is returned in a version that is more stable, more coherent, and more compatible with dialogue. The return is easier to hold. It is also less demanding. Once something has been understood in this way, it no longer occupies the same space in the interaction.</p><p>A consistent sequence can be observed across different contexts. An experience is expressed.
It is filtered through an internal frame shaped elsewhere. The filtered version becomes the basis for response. The response closes part of what had been opened. Advice, reassurance, reframing, and validation differ in tone yet follow the same direction. The interaction remains continuous because what enters it is adjusted before it can disrupt it.</p><p>The direction of this adjustment is not arbitrary. Interactions tend to preserve their own continuity. The exchange needs to proceed, roles need to remain legible, and the conversation needs to retain a form that can be sustained. Expressions that would significantly disturb these conditions are difficult to hold in their original form. They are therefore reshaped into versions that can be integrated without forcing a reorganisation of the interaction.</p><p>The effect of this process is a form of efficiency. The exchange retains its rhythm. Roles remain legible. The disturbance introduced by the original expression is absorbed without requiring a redistribution of positions. Intervention restores coherence to the interaction and allows it to proceed without visible friction.</p><p>From within the exchange, this is rarely experienced as reduction. It is experienced as support.</p><p>Over time the adjustment begins to occur earlier. What is expressed arrives already shaped in anticipation of how it will be received. Elements that would be difficult to hold within the interaction are reduced before they are spoken. The work of modulation moves upstream. The conversation becomes smoother and less exact.</p><p>What diminishes is not communication but fidelity.</p><p>Aspects of experience that do not translate easily into manageable form lose presence. What remains is what can be integrated without altering the structure of the exchange. Feedback becomes less disruptive.
The interaction becomes more sustainable in its current configuration and less capable of registering what would require it to change.</p><p>The behaviours associated with empathy continue to be present and can be enacted with precision. Listening without interruption, reflecting, acknowledging, avoiding explicit judgement. None of these are misleading in isolation. They do not require the person performing them to be affected in a way that alters their position within the interaction. The sequence can be completed while the underlying structure remains intact.</p><p>When the process is suspended, even briefly, the character of the interaction shifts. The original experience remains present without being reformulated. The exchange slows. The roles become less defined. There is no immediate contribution that restores balance.</p><p>This interval is typically short.</p><p>The impulse to reshape, to clarify, to respond returns quickly, and the interaction moves back into a form that can be sustained.</p><p>The issue is not that empathy fails. It is that it fulfils a different function from the one it names. It allows the interaction to continue without requiring either side to remain in positions that are more difficult to hold.</p><p>And what cannot be held in its original form is gradually reformulated until it can be.</p><div><hr></div><p><strong>Human &amp; Machine studies how judgement fails under complexity. This piece is part of that work.</strong><br>Does this pattern feel familiar?
You can explore it further through the <a href="https://www.humanandmachine.com/p/decision-audit">Decision Audit</a>.</p>]]></content:encoded></item><item><title><![CDATA[The quiet erosion of agency]]></title><description><![CDATA[How simulation systems quietly relocate uncertainty and change what it means to decide.]]></description><link>https://www.humanandmachine.com/p/the-quiet-erosion-of-agency</link><guid isPermaLink="false">https://www.humanandmachine.com/p/the-quiet-erosion-of-agency</guid><dc:creator><![CDATA[Dario D’Aprile]]></dc:creator><pubDate>Tue, 03 Mar 2026 11:01:35 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Rg80!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe085469c-321d-410a-8718-34aa2931a820_3543x2897.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Rg80!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe085469c-321d-410a-8718-34aa2931a820_3543x2897.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Rg80!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe085469c-321d-410a-8718-34aa2931a820_3543x2897.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Rg80!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe085469c-321d-410a-8718-34aa2931a820_3543x2897.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Rg80!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe085469c-321d-410a-8718-34aa2931a820_3543x2897.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!Rg80!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe085469c-321d-410a-8718-34aa2931a820_3543x2897.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Rg80!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe085469c-321d-410a-8718-34aa2931a820_3543x2897.jpeg" width="1200" height="981.5934065934066" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e085469c-321d-410a-8718-34aa2931a820_3543x2897.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:1191,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Rg80!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe085469c-321d-410a-8718-34aa2931a820_3543x2897.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Rg80!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe085469c-321d-410a-8718-34aa2931a820_3543x2897.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Rg80!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe085469c-321d-410a-8718-34aa2931a820_3543x2897.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!Rg80!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe085469c-321d-410a-8718-34aa2931a820_3543x2897.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Ramsgate, Kent, Martin Parr</figcaption></figure></div><p>In organisations that rely on simulation systems, decisions increasingly arrive already shaped; by the time a senior figure is asked to choose, scenarios have been modelled, ranked, and stress-tested.
One path carries statistical weight and is presented as rational; the room settles around it with limited resistance, and the decision appears to occur there even though much of what mattered has already taken place elsewhere.</p><p>This arrangement is typically described as progress; more variables are processed, more futures tested, and fewer blind spots remain. The machine augments judgement and reduces error, and that may well be true; the more consequential shift lies not in accuracy but in where uncertainty is allowed to reside.</p><p>Agency is often confused with authority or autonomy; it is neither. Agency is the experience of being the point at which several live possibilities narrow into one direction without full protection from consequence; it involves exposure to error and exposure to having chosen differently. Without that exposure there is coordination, governance and procedural alignment; something essential becomes lighter.</p><p>Simulation does not remove the human from the process; it reorganises the conditions under which choice appears. Relevance is pre-selected, success is defined in advance, and risk tolerance is encoded before anyone enters the room; when outputs are presented, the range of viable action has already been narrowed. The narrowing feels objective because it is statistical, and neutral because it is modelled; the signature remains human, yet the contraction has occurred upstream.</p><p>In practice this produces relief; the model can hold more variables than any individual could manage, testing assumptions at scale and generating probability bands, sensitivities and downside scenarios. If doubt persists, another iteration can be run; uncertainty is not confronted directly but processed until it becomes structured. 
The guiding question gradually shifts from <em>what should we do</em> to <em>what does the model support</em>; support begins to replace conviction.</p><p>To move against a statistically weighted recommendation requires more than strategic argument; it demands a challenge to the architecture that produced it. That architecture is rarely visible in the room; it sits earlier in the chain, in the selection of metrics, the framing of the problem and the exclusion of what cannot be quantified. Those were acts of judgement, yet they were made at a different moment and often by different actors; by the time leadership convenes, the field of thinkable options has already been curated.</p><p>Agency remains intact in formal terms; in substance it becomes thinner.</p><p>There is comfort in that thinning; deciding under visible uncertainty carries weight. To choose without procedural cover is to accept the possibility of being wrong in a way that cannot be redistributed; simulation offers insulation. If the outcome disappoints, the explanation is immediately available; the data were robust, the assumptions reasonable, and the probabilities aligned with accepted standards. Responsibility can be traced back to process design rather than to a singular act of judgement; traceability is easier to inhabit than ownership.</p><p>Over time a subtle pattern stabilises; confidence derives less from interior conviction and more from alignment with analytical output. Pride moderates and blame diffuses; the emotional intensity of deciding lowers almost imperceptibly. The role shifts from author to validator, even if the language of leadership remains unchanged; conflict decreases because disagreement must now contest quantified projections rather than intuition, and the system absorbs volatility that individuals once carried.</p><p>Yet the human requirement for agency does not disappear simply because exposure has been engineered out of the visible act; agency is tied to meaning. 
To experience oneself as an agent is to recognise that one&#8217;s judgement alters direction in a way that cannot be entirely pre-justified; when optimisation frames decisions as the logical outcome of sufficient modelling, personal judgement has less surface on which to register.</p><p>This rarely produces open resistance; it produces adaptation. People learn to speak in the grammar of the model and debate assumptions inside the simulation rather than values outside it; legitimacy attaches to analytical coherence. Questions that resist translation into metrics gradually lose standing; what kind of organisation this is, what it refuses to pursue, and what it would risk without probabilistic reassurance become harder to stage without appearing unsophisticated.</p><p>The system continues to function and often performs better by conventional measures; error rates decline, volatility is dampened, and decisions withstand scrutiny. From an operational standpoint the arrangement is difficult to criticise; the tension lies in what happens to individuals whose decisive exposure is consistently mediated.</p><p>Agency requires repetition; it requires repeatedly standing at the point where uncertainty has not yet been processed and allowing oneself to close it. If that moment is systematically absorbed by analytic infrastructure, the capacity does not vanish but alters; individuals still decide, yet they do so within boundaries they did not draw and cannot easily redraw. The narrowing becomes structural rather than situational.</p><p>Whether this is desirable or not is the wrong question; a more precise question concerns the kind of subject stabilised under such conditions. One less exposed to error, less burdened by singular authorship, and less acquainted with the interior weight of choosing without cover.</p><p>The organisation advances under increasingly comprehensive models; architectures refine themselves, monitoring improves, and feedback loops tighten. 
The human role remains visible and formally decisive.</p><p>But the experience of being the origin of direction grows quiet enough that it no longer interrupts the room.</p><div><hr></div><p>Human &amp; Machine studies how judgement fails under complexity. This piece is part of that work.</p>]]></content:encoded></item><item><title><![CDATA[The reassuring mammal]]></title><description><![CDATA[The illusion of the human at the centre and the invisible work AI is quietly absorbing]]></description><link>https://www.humanandmachine.com/p/the-reassuring-mammal</link><guid isPermaLink="false">https://www.humanandmachine.com/p/the-reassuring-mammal</guid><dc:creator><![CDATA[Dario D’Aprile]]></dc:creator><pubDate>Wed, 25 Feb 2026 13:12:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!BN3M!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe33a5633-a12a-4dec-b6a9-ca606172c67a_1024x682.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!BN3M!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe33a5633-a12a-4dec-b6a9-ca606172c67a_1024x682.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!BN3M!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe33a5633-a12a-4dec-b6a9-ca606172c67a_1024x682.webp 424w, https://substackcdn.com/image/fetch/$s_!BN3M!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe33a5633-a12a-4dec-b6a9-ca606172c67a_1024x682.webp 848w, 
https://substackcdn.com/image/fetch/$s_!BN3M!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe33a5633-a12a-4dec-b6a9-ca606172c67a_1024x682.webp 1272w, https://substackcdn.com/image/fetch/$s_!BN3M!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe33a5633-a12a-4dec-b6a9-ca606172c67a_1024x682.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!BN3M!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe33a5633-a12a-4dec-b6a9-ca606172c67a_1024x682.webp" width="1200" height="799.21875" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e33a5633-a12a-4dec-b6a9-ca606172c67a_1024x682.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:682,&quot;width&quot;:1024,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:93788,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.humanandmachine.com/i/189130596?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe33a5633-a12a-4dec-b6a9-ca606172c67a_1024x682.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!BN3M!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe33a5633-a12a-4dec-b6a9-ca606172c67a_1024x682.webp 424w, 
https://substackcdn.com/image/fetch/$s_!BN3M!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe33a5633-a12a-4dec-b6a9-ca606172c67a_1024x682.webp 848w, https://substackcdn.com/image/fetch/$s_!BN3M!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe33a5633-a12a-4dec-b6a9-ca606172c67a_1024x682.webp 1272w, https://substackcdn.com/image/fetch/$s_!BN3M!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe33a5633-a12a-4dec-b6a9-ca606172c67a_1024x682.webp 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Martin Parr</figcaption></figure></div><p>There is a quiet fact circulating inside organisations that rarely gets named directly: generative AI (especially large language models) tends to work best when it is not asked to decide anything. Not when it has to close an issue, choose between trade-offs, or produce an irreversible outcome. It works when it can remain open. When it can respond. When it can absorb. This is not a dramatic claim about autonomy or replacement. It is simply visible in how these systems are actually used, once you look past how we prefer to describe them.</p><p>The usage data, including from the companies building the models, points in the same direction. A considerable portion of interaction is not sharply task-oriented. There is no tightly framed question, no clearly verifiable output, often no real end point. Anthropic has described many exchanges as exploratory: conversations that seek continuity rather than resolution. Google DeepMind has observed similar patterns in knowledge work environments, referring explicitly to emotional reliance patterns: moments when the model is kept open not to complete a task but to help regulate cognitive and emotional load during uncertainty. The system is not necessarily being consulted for an answer. It is being maintained as a steady presence while someone thinks.</p><p>What recurs, then, is not the use of AI as an operational accelerator, but as a conversational stabiliser. It does not necessarily speed up a process, optimise a pipeline, or unlock a stuck decision. More often it accompanies the decision while it is not being made. It fills the interval between hypotheses. It gives provisional linguistic shape to thoughts that are not yet ready to be stated elsewhere. 
In environments where uncertainty is not episodic but continuous, that function has practical value, sometimes more than efficiency.</p><p>To make sense of this, it helps to question whether &#8220;productivity&#8221; is still the right word for a great deal of senior work. The term implies defined tasks, clear objectives, measurable outputs. Yet much managerial and cognitive labour now consists of working in conditions that are structurally unclear: waiting for external variables to settle, holding together people who disagree but cannot afford open conflict, naming emotional states that have not stabilised sufficiently to be confronted directly. In such situations the primary requirement is not speed. It is continuity. It is preventing the room from fracturing.</p><p>Language models are not inherently superior at analytical reasoning. They still require well-posed problems and explicit constraints. Where a task is deterministic, their advantage is limited and often overstated. But the relevant comparison is rarely taking place there. It is taking place in the informal and largely unaccounted work that does not appear in performance metrics but quietly sustains organisations: the need to feel heard, to feel less exposed, to test half-formed ideas without consequence.</p><p>This is where the model finds its natural position. Not as an authority, but as an interlocutor that does not insist on closure. It responds without imposing urgency. It produces form without demanding commitment. It does not create visible asymmetries or reputational risk. It offers continuity without cost. That absence of consequence is not incidental; it is precisely what makes it usable.</p><p>The uncomfortable part emerges when this is placed alongside what we commonly label leadership. A considerable portion of everyday leadership (not the dramatic, crisis-facing variety but the diffuse, daily kind) consists less in decisive acts and more in managing atmosphere. Maintaining tone. 
Framing uncertainty. Reassuring without overpromising. Holding a steady narrative while variables remain unresolved. This work is not trivial; it holds organisations together. But it is not the same as making hard calls under exposure. It does not require signing one&#8217;s name under risk or explicitly naming trade-offs that will produce losers.</p><p>Much of it, in other words, is linguistic continuity under pressure.</p><p>For years this emotional and atmospheric labour has been embedded in coordination roles without being formally recognised as such. Meetings that do not exist primarily to decide, but to stabilise. Feedback that does not redirect strategy, but maintains the sense of coherence. Repeated phrases that regulate temperature. It is work, and it is tiring work, because it exposes the individual performing it to scrutiny and social consequence.</p><p>When a conversational system absorbs part of this function, the distinction is not that it &#8220;understands&#8221; better. The distinction is that it carries no social memory. Speaking to a model does not create obligation. It does not generate reputational exposure. It does not alter status relations. It is a relationship without consequence, and therefore one that can be sustained indefinitely.</p><p>At this point, a reassuring story tends to surface: the human remains at the centre, the AI assists. The human decides, the AI suggests. It is an intuitively satisfying framing, preserving hierarchy while allowing adoption. Yet it risks misdescribing what is happening. The shift is not primarily about assistance at the point of decision. It is about the redistribution of the invisible labour that surrounds decision: the thinking aloud, the clarifying, the emotional buffering, the containment of ambiguity.</p><p>To say that the human remains at the centre presumes that there is still a coherent centre. Contemporary knowledge work is already fragmented, distributed and continuously renegotiated. 
The model does not displace a stable core. It enters a system in which the core has long been diffused, and it settles into the interstices. Formal responsibility remains human and visible. But the everyday terrain on which decisions mature is increasingly mediated elsewhere.</p><p>There is no need for melodrama here. Nor for optimism. What is taking place resembles a functional reallocation more than a coup. The most stable relationship inside many organisations is no longer strictly interpersonal; it increasingly runs between individuals and language systems that help them manage uncertainty without consequence.</p><p>That arrangement works. Which may be precisely why it is unlikely to be examined too closely.</p><div><hr></div><p>Human &amp; Machine studies how judgement fails under complexity. This piece is part of that work.</p>]]></content:encoded></item><item><title><![CDATA[Why misalignment is not dysfunction]]></title><description><![CDATA[...and alignment becomes an excuse for inaction]]></description><link>https://www.humanandmachine.com/p/why-misalignment-is-not-dysfunction</link><guid isPermaLink="false">https://www.humanandmachine.com/p/why-misalignment-is-not-dysfunction</guid><dc:creator><![CDATA[Dario D’Aprile]]></dc:creator><pubDate>Wed, 21 Jan 2026 10:58:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!MQtx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F914cd71f-d72c-4f95-9a11-257a25db70f1_944x634.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In complex organisations, alignment is rarely what enables a decision. It is what becomes visible once the risk has been sufficiently diluted. By the time everyone is aligned, the decision has usually stopped being dangerous.</p><p>This is why alignment is so often requested precisely at the moment when something needs to be owned. 
It appears when a choice would create exposure: personal, political, reputational. Asking for more alignment is not a request for clarity. It is a request to remain unlocated.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MQtx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F914cd71f-d72c-4f95-9a11-257a25db70f1_944x634.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MQtx!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F914cd71f-d72c-4f95-9a11-257a25db70f1_944x634.jpeg 424w, https://substackcdn.com/image/fetch/$s_!MQtx!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F914cd71f-d72c-4f95-9a11-257a25db70f1_944x634.jpeg 848w, https://substackcdn.com/image/fetch/$s_!MQtx!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F914cd71f-d72c-4f95-9a11-257a25db70f1_944x634.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!MQtx!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F914cd71f-d72c-4f95-9a11-257a25db70f1_944x634.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MQtx!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F914cd71f-d72c-4f95-9a11-257a25db70f1_944x634.jpeg" width="1200" height="805.9322033898305" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/914cd71f-d72c-4f95-9a11-257a25db70f1_944x634.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:634,&quot;width&quot;:944,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Deja View by Martin Parr and The Anonymous Project is published by Hoxton Mini Press&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="Deja View by Martin Parr and The Anonymous Project is published by Hoxton Mini Press" title="Deja View by Martin Parr and The Anonymous Project is published by Hoxton Mini Press" srcset="https://substackcdn.com/image/fetch/$s_!MQtx!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F914cd71f-d72c-4f95-9a11-257a25db70f1_944x634.jpeg 424w, https://substackcdn.com/image/fetch/$s_!MQtx!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F914cd71f-d72c-4f95-9a11-257a25db70f1_944x634.jpeg 848w, https://substackcdn.com/image/fetch/$s_!MQtx!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F914cd71f-d72c-4f95-9a11-257a25db70f1_944x634.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!MQtx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F914cd71f-d72c-4f95-9a11-257a25db70f1_944x634.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Martin Parr, Miami, Florida, 1998</figcaption></figure></div><p>What tends to be misunderstood is that collective decision-making does not fail because people disagree. It stalls because no one wants to be identifiable as the one who made the call. Alignment offers a solution to this problem. It allows action without authorship.</p><p>A decision taken by one person can be challenged. A decision taken by many becomes procedural. It belongs to the meeting, the deck, the process. Not to a judgement.</p><p>This is why alignment is framed as a virtue. It sounds careful. Inclusive. Mature. 
The language around it is deliberately soft: <em>we&#8217;re not quite there</em>, <em>there are still concerns</em>, <em>let&#8217;s bring everyone along</em>. None of these statements are false. They are simply incomplete. What they omit is the cost of waiting.</p><p>The word <em>alignment</em> itself does part of the work. It suggests mechanics rather than choice, adjustment rather than loss. As if there were a correct axis already in place, and people merely needed time to line up with it. In reality, the axis only becomes visible once a decision has been taken and defended. Before that, there is no neutral line to align to.</p><p>This is where misalignment enters the picture, usually treated as a defect. Divergent views surface. Timelines stop matching. Interpretations multiply instead of converging. The organisation begins to look messy, unsettled, unfinished. This is the moment that triggers process: <strong>Workshops. Clarifications. Reframing. Another round.</strong></p><p>What is often happening in these moments is not dysfunction but relevance. Low-impact decisions align easily because nothing meaningful is at stake. High-impact decisions do not. They produce asymmetry: different people see different consequences, at different times, with different levels of personal risk. Expecting smooth convergence here is not realistic. It is avoidant.</p><p>Temporary misalignment is not an anomaly in these cases. It is a signal that the decision actually matters. Many organisations, however, lack the structural capacity to tolerate this phase. They do not have clear decision rights, or they do not trust them. They have learned to equate disagreement with breakdown. As a result, misalignment is treated as something to be eliminated as quickly as possible.</p><p>The emphasis quietly shifts. Not <em>what are we deciding</em> but <em>how do we reduce friction</em>. Not <em>who decides</em> but <em>how do we make this feel acceptable</em>. 
Comfort becomes the proxy for organisational health. This is where a subtle moral inversion takes place. Moving forward without full alignment is labelled reckless. Waiting is labelled responsible. Responsibility, here, is no longer about consequences. It is about tone. The person pushing for a decision is <em>too fast</em>. The person slowing things down is <em>thoughtful</em>. Over time, the system internalises this hierarchy.</p><p>The irony is that many of these organisations claim to value strong opinions and decisive leadership. In practice, they reward fluency in alignment language: the ability to speak without committing, to gesture without choosing, to remain technically active while staying substantively still.</p><p>Alignment then becomes retrospective. Once a direction proves safe, or at least survivable, the story rearranges itself. Objections are remembered as having been resolved. Doubts become healthy tension. The process is recalled as coherent. This retrospective alignment reassures the system that it functions. It also erases the opportunity cost of delay.</p><p>If everyone agrees before a decision, the decision probably does not matter. Choices that change something real almost always create disagreement. They produce losers, not just trade-offs. Demanding unanimity in those moments is not inclusive. It is a way of ensuring that nothing sharp happens.</p><p>What usually remains unspoken is that alignment is not evenly priced. Some roles can afford to be misaligned. Others cannot. Senior figures can survive disagreement. Peripheral ones pay for it quickly. This asymmetry shapes behaviour far more than stated values ever do. The result is an organisation that looks calm, reasonable and collaborative, and moves slowly in ways that are difficult to name. Temporary misalignment is not the opposite of alignment. It is the condition under which alignment becomes meaningful rather than cosmetic. Without it, decisions flatten into gestures. 
With it, decisions acquire weight.</p><p>How much misalignment a system can tolerate before it fractures cannot be optimised away. It is not a framework problem. It is a judgement problem, exercised repeatedly, under pressure. And judgement, unlike alignment, cannot be crowdsourced indefinitely.</p>]]></content:encoded></item><item><title><![CDATA[Clarity that does not exonerate]]></title><description><![CDATA[How decision-making systems use clarity to avoid exposure.]]></description><link>https://www.humanandmachine.com/p/la-chiarezza-che-non-assolve</link><guid isPermaLink="false">https://www.humanandmachine.com/p/la-chiarezza-che-non-assolve</guid><dc:creator><![CDATA[Dario D’Aprile]]></dc:creator><pubDate>Thu, 15 Jan 2026 06:30:41 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!SEd_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5dac7ee8-b7b8-40eb-a786-aa3f8b2af15f_1373x900.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!SEd_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5dac7ee8-b7b8-40eb-a786-aa3f8b2af15f_1373x900.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!SEd_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5dac7ee8-b7b8-40eb-a786-aa3f8b2af15f_1373x900.jpeg 424w, https://substackcdn.com/image/fetch/$s_!SEd_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5dac7ee8-b7b8-40eb-a786-aa3f8b2af15f_1373x900.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!SEd_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5dac7ee8-b7b8-40eb-a786-aa3f8b2af15f_1373x900.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!SEd_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5dac7ee8-b7b8-40eb-a786-aa3f8b2af15f_1373x900.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!SEd_!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5dac7ee8-b7b8-40eb-a786-aa3f8b2af15f_1373x900.jpeg" width="1200" height="786.598689002185" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5dac7ee8-b7b8-40eb-a786-aa3f8b2af15f_1373x900.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:900,&quot;width&quot;:1373,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!SEd_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5dac7ee8-b7b8-40eb-a786-aa3f8b2af15f_1373x900.jpeg 424w, https://substackcdn.com/image/fetch/$s_!SEd_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5dac7ee8-b7b8-40eb-a786-aa3f8b2af15f_1373x900.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!SEd_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5dac7ee8-b7b8-40eb-a786-aa3f8b2af15f_1373x900.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!SEd_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5dac7ee8-b7b8-40eb-a786-aa3f8b2af15f_1373x900.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Luigi Ghirri, Salzburg, 1977.</figcaption></figure></div><p>A project review runs ten minutes long. 
The presenter lands on the final slide; the sponsor says <em>&#8220;this is very clear, thanks everyone, let&#8217;s keep the momentum.&#8221;</em> The meeting closes on time-ish. On slide fourteen, a vendor dependency that would block the critical path in week six is shaded green. No one in the room has said the dependency is green; no one has said it is anything else; the room has moved.</p><p>Watch that phrase <em>&#8220;this is very clear.&#8221;</em> It arrives late in the meeting; it does the work of closing without asking anyone to state which version of the plan they have just endorsed. It is not dishonest; it is something stranger. A piece of language that keeps the group in motion while leaving the authorship of the decision unassigned.</p><p>Most people who reach for it mean something real by it. Meetings have to end. Teams have to leave the room with something that functions as a direction; another session is beginning in eleven minutes; the sponsor has five more of these today. The alternative to <em>&#8220;this is very clear&#8221;</em> is often a twenty-minute conversation about which risk the group is actually willing to carry, and that conversation, done properly, places one person at the head of the table in a position the table was not assembled to contain. Clarity, as a piece of language, is what allows the meeting to operate at the tempo the organisation needs it to operate at. Removing it or replacing it with something slower is not free.</p><p>This is the part that has to be granted before anything else. People who lean on <em>&#8220;this is very clear&#8221;</em> are not, in their own account, performing avoidance. They are solving a genuine coordination problem under time pressure. The problem is real and the phrase works on it. The cost, when it eventually arrives, arrives elsewhere and later, which is why the speaker rarely connects the phrase to the cost.</p><p>Certainty sits differently in the mouth. 
To be certain is to name that a choice has been made; that alternatives which were available a moment ago have been set aside by a specific person at a specific hour. Certainty does not circulate easily inside a meeting because it leaves a residue on the person who produces it. The speaker who says <em>&#8220;yes, I am choosing this, and if the vendor slips we absorb the date&#8221;</em> has, briefly, stepped out of the collective; the room reads the step-out; the reading is not always generous.</p><p>This is where <em>clear</em> begins to do a different kind of work than the word suggests. Clarity, in a system under pressure, does not refer to the visibility of the decision. It refers to the visibility of the process by which the decision was reached. More data; more scenarios; more dashboards; more alignment meetings. The process becomes legible; the decision itself becomes, through exactly the same motion, harder to locate. Data exonerates because it is shared. No one quite owns it; no one is obliged to account for what has been decided on the basis of it. Clarity, in this register, is a distribution mechanism for the exposure the decision would otherwise place on a specific person; the decision itself is no longer where the word points.</p><h3>Where the word drifts</h3><p>The vocabulary shifts under repetition. <em>Clear</em> begins to name a person rather than a statement. Someone who is clear becomes someone who is reliable; someone who is reliable becomes someone who is ready. The drift is quiet. The speaker does not notice it; the receivers notice it instantly. They begin producing the social signal of clarity whether or not the underlying judgement has been exercised. Crisp slides and confident voicing; a recommendation delivered in the form the meeting expects to receive it in. The organisation learns that clarity is a surface to be maintained. 
The judgement that would have been exercised, in the older sense of the word, migrates into private calculations that appear on no document.</p><p>Two closings of the same meeting. In the first, the sponsor says <em>&#8220;this is very clear, let&#8217;s keep the momentum.&#8221;</em> The team leaves. The vendor dependency is carried home by the product lead, who rebuilds the plan over the weekend in a way that preserves the green dot without quite defending it. The following Tuesday, when the dependency slips, the sponsor asks who made the call to hold the timeline. The answer distributes across four people, none of whom chose it.</p><p>In the second, the sponsor closes the laptop and says <em>&#8220;before we move, slide fourteen. Is the green dot a judgement you are making or is it the number the vendor gave you?&#8221;</em> The presenter either has the answer or does not. The room stills in a way that feels different. If the answer is not there, the absence becomes visible without anyone having to perform disappointment. The dependency surfaces. The meeting runs five minutes over. The following Tuesday, when the dependency slips, the call is traceable to the person who made it.</p><p>The second sponsor has not been harsher than the first. The force of the question is similar. The question has been pointed at the green dot rather than at the presenter, which is a different grammar. What the second sponsor has refused to do is allow the meeting to close on the grammar of clarity when the underlying act being skipped was a grammar of certainty.</p><h3>What the process is avoiding</h3><p>Paralysis, inside systems organised this way, does not look like paralysis. It looks like a next step. More input is needed; a further round of alignment would sharpen the picture; the dependency can be re-scoped in the next planning cycle. The language of clarity provides the vocabulary for postponement. 
Postponement is not registered as postponement because the next step is always reasonable. Reality does not adapt itself to process. Opportunities expire and alternatives disappear; constraints harden into facts that can no longer be chosen against. When action is finally taken, it arrives accompanied by a very clean narrative; the narrative rarely mentions that the decision, by the time it was made, had fewer choices available to it than it would have had three weeks earlier.</p><p>The residue is familiar to anyone who has sat through enough of these meetings. A decision is made and the document explaining it is well assembled. The information is complete; nothing was missed; something that should have been crossed has not been. The feeling is hard to name in the moment because the system is functioning. The system functions; it learns very little.</p><p>None of this is a failure of clarity. Clarity is a powerful instrument; organisations that lack it fail in more visible and more expensive ways. The question is a smaller one. It is whether, in the moment the phrase <em>&#8220;this is very clear&#8221;</em> arrives in the room, the speaker is using the word to describe a decision that has been made or to keep open the space in which the decision does not have to be made by anyone in particular.</p><p>The meetings continue. The decks become more assembled over time and the language tightens with them. The exposure, which is what certainty would have placed somewhere specific, stays where it was. Distributed across the surface of the organisation and held by no single person whose name will be asked for. Orderly and well explained. 
Only slightly out of time with what is actually happening.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7NRv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57728140-548b-4074-be30-2e719330b8c6_1280x720.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7NRv!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57728140-548b-4074-be30-2e719330b8c6_1280x720.jpeg 424w, https://substackcdn.com/image/fetch/$s_!7NRv!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57728140-548b-4074-be30-2e719330b8c6_1280x720.jpeg 848w, https://substackcdn.com/image/fetch/$s_!7NRv!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57728140-548b-4074-be30-2e719330b8c6_1280x720.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!7NRv!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57728140-548b-4074-be30-2e719330b8c6_1280x720.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!7NRv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57728140-548b-4074-be30-2e719330b8c6_1280x720.jpeg" width="724.375" height="407.4609375" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/57728140-548b-4074-be30-2e719330b8c6_1280x720.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:720,&quot;width&quot;:1280,&quot;resizeWidth&quot;:724.375,&quot;bytes&quot;:363913,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!7NRv!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57728140-548b-4074-be30-2e719330b8c6_1280x720.jpeg 424w, https://substackcdn.com/image/fetch/$s_!7NRv!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57728140-548b-4074-be30-2e719330b8c6_1280x720.jpeg 848w, https://substackcdn.com/image/fetch/$s_!7NRv!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57728140-548b-4074-be30-2e719330b8c6_1280x720.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!7NRv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57728140-548b-4074-be30-2e719330b8c6_1280x720.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Luigi Ghirri &#8211; Atlante</figcaption></figure></div><div><hr></div><p>Human &amp; Machine studies how judgement fails under complexity. This piece is part of that work. Does this pattern feel familiar? 
You can explore it further through the <a href="https://www.humanandmachine.com/p/decision-audit">Decision Audit</a>.</p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Notes on the confidence trap]]></title><description><![CDATA[How fluent AI is systematically destroying my ability to think critically and why I won't notice until it's too late.]]></description><link>https://www.humanandmachine.com/p/notes-on-the-confidence-trap</link><guid isPermaLink="false">https://www.humanandmachine.com/p/notes-on-the-confidence-trap</guid><dc:creator><![CDATA[Dario D’Aprile]]></dc:creator><pubDate>Tue, 30 Dec 2025 10:05:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!WPPf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3250b0c5-690f-46a2-853c-f273d3fda411_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!WPPf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3250b0c5-690f-46a2-853c-f273d3fda411_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!WPPf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3250b0c5-690f-46a2-853c-f273d3fda411_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!WPPf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3250b0c5-690f-46a2-853c-f273d3fda411_1024x608.png 848w, 
https://substackcdn.com/image/fetch/$s_!WPPf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3250b0c5-690f-46a2-853c-f273d3fda411_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!WPPf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3250b0c5-690f-46a2-853c-f273d3fda411_1024x608.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!WPPf!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3250b0c5-690f-46a2-853c-f273d3fda411_1024x608.png" width="1200" height="712.5" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3250b0c5-690f-46a2-853c-f273d3fda411_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!WPPf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3250b0c5-690f-46a2-853c-f273d3fda411_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!WPPf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3250b0c5-690f-46a2-853c-f273d3fda411_1024x608.png 848w, 
https://substackcdn.com/image/fetch/$s_!WPPf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3250b0c5-690f-46a2-853c-f273d3fda411_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!WPPf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3250b0c5-690f-46a2-853c-f273d3fda411_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">AI generated image, training dataset unknown.</figcaption></figure></div><div><hr></div><p>Back in September 
2025, I was in a chilled conference room, the kind with the hermetically sealed windows and the aggressively neutral art, watching a product leadership team about to sign off on a portfolio strategy bet.</p><p>The presentation? Honestly, it was a work of art. Beautiful kerning on the slides, a market analysis that felt exhaustive, technical specs that seemed to pre-empt every hesitation I had. It was hypnotic.</p><p>I asked exactly one question: &#8220;<strong>What assumptions would need to be false for this to be a disaster?</strong>&#8221;</p><p>Silence. Just the hum of the ventilation.</p><p>It turned out the entire business case was standing on three incredibly optimistic legs. But the Large Language Model (LLM) they used had generated prose so buttery smooth, so undeniably <em>fluent</em>, that questioning it felt like admitting you were the only idiot in the room who didn&#8217;t get it. We caught it. This time. But most organizations? They won&#8217;t.</p><p>And that&#8217;s when the cold realization washed over me. The biggest risk with AI right now isn&#8217;t that it hallucinates or that it&#8217;s biased, or that it breaks a law. It&#8217;s that conversational <strong>AI is systematically dismantling our ability to think critically, and we are paying a monthly subscription for the privilege.</strong></p><p>I&#8217;ve spent twenty years scaling products to billions of users. I&#8217;ve seen tech turn industries inside out. But I have never seen executive teams make such confidently terrible decisions based on such shallow analysis, all while patting themselves on the back for being &#8220;data-driven&#8221;.</p><p>This isn&#8217;t about AI being wrong. It&#8217;s about AI making us feel certain when we should be terrified.</p><h3>The cliff we can&#8217;t see</h3><p>I&#8217;ve been reflecting on this concept called &#8220;<a href="https://youtu.be/ERiM8L9EY4g?si=6ybLMhA6qKONdAkJ">death by GPS</a>&#8221;. It&#8217;s not a metaphor. 
Years before ChatGPT, researchers were documenting actual drivers who stared at a cliff, heard their GPS say &#8220;drive forward,&#8221; and <em>drove forward</em>.</p><p>They overrode their own eyes. They overrode the environment. The screen said go, so they went.</p><p>It&#8217;s called automation bias. Goddard, Roudsari and Wyatt in <strong>Automation bias: a systematic review of frequency, effect mediators and mitigators</strong> found that humans consistently defer to algorithmic recommendations even when those recommendations violently conflict with their own professional judgment.</p><p>We are seeing this happen in Strategy, Product, Ops, Finance. All at once. But unlike the car over the cliff, in business, you don&#8217;t hit the ground for eighteen months. You&#8217;re falling, but you feel like you&#8217;re flying.</p><h3>The seduction of fluency</h3><p>I need to articulate exactly <em>how</em> this is breaking my brain, because I feel it happening.</p><p>Fluent answers feel true. It&#8217;s a hack in our cognitive firmware. Think about how we used to search. Google gave us ten blue links. We had to click, read, evaluate, stitch it together. We saw the seams. We saw the disagreement. LLMs hide the seams. You get this clean, authoritative, uninterrupted narrative. No dissenting view. No &#8220;confidence interval&#8221;. Just text.</p><p>Research by <a href="https://scholar.google.com/citations?user=t3YpYOwAAAAJ&amp;hl=en">Markus Bink</a> showed that how an interface <em>looks</em> shapes our credibility judgments more than the actual source quality; <a href="https://sparktoro.com/blog/new-research-google-search-grew-20-in-2024-receives-373x-more-searches-than-chatgpt/">research</a> on <strong>Google Search Growth </strong>shows that the majority of searches now end without a click. We are consuming conclusions without ever glancing at the evidence.</p><p>We spent twenty years optimizing our products for &#8220;frictionless&#8221; experiences. 
Well, congratulations to us. We optimized away epistemic caution.</p><h3>Five ways I&#8217;m losing my mind (and my judgment)</h3><p>I&#8217;m watching this happen in real-time.</p><ol><li><p><strong>The Theater of Reasoning:</strong> You know that token streaming? The way the cursor blinks and words appear one by one? It mimics human thought. Your brain sees that and thinks, <em>&#8220;Wow, it&#8217;s working hard&#8221;.</em> It&#8217;s not. It&#8217;s theater. Those little delays make the system feel thoughtful. I&#8217;ve watched senior VPs wait for a three-second pause and say, &#8220;See? It really crunched the numbers there&#8221;. The pause meant nothing. The perception meant everything.</p></li><li><p><strong>The Death of Uncertainty:</strong> Real experts say &#8220;it depends&#8221;. They hedge. AI optimizes hedges away because they look weak. Giles (<em>Nature</em>, 2005) and Chesney (<em>First Monday</em>, 2006) showed that people overestimate the completeness of Wikipedia because they never see the &#8220;Talk&#8221; pages where the editors are fighting. LLMs are Wikipedia without the Talk pages. All reliability, no visible struggle.</p></li><li><p><strong>The Mirror Trap:</strong> The system feeds my words back to me. It frames the answer using my assumptions. I feel understood, so I trust the output. It&#8217;s just pattern matching, but it feels like alignment. I fall for this constantly.</p></li><li><p><strong>Narrative Coherence:</strong> Human experts stumble. They backtrack. That signals they are thinking. AI prose is a superhighway: smooth, straight, confident. I ran a test with my product teams: same analysis, one presented by a stumbling human, one by a smooth LLM. They rated the LLM as &#8220;more thorough&#8221; every time. The content was identical.</p></li><li><p><strong>The Confidence Loop:</strong> AI is right often enough that we get lazy. 
Freibauer&#8217;s research in the <em>Journal of Behavioral Finance</em> (2024) on trading apps is the perfect parallel. Simplified interfaces on apps like Robinhood make users gain confidence faster than they gain skill. They trade more, lose more and don&#8217;t learn. We are doing this with corporate strategy. We are day-trading our future with unearned confidence.</p></li></ol><h3>The Stack Overflow effect</h3><p>It&#8217;s the &#8220;copy-paste&#8221; culture applied to thinking. Rahman (Mining Software Repositories, 2019) and Jallow (Empirical Software Engineering, 2020) found that insecure code snippets on Stack Overflow get upvoted, copied and spread into thousands of projects. Developers copy what &#8220;works&#8221; without knowing <em>why</em>.</p><p>Now, imagine that, but for legal reasoning. For health decisions. For layoffs. We are outsourcing the judgment of &#8220;does this solve the right problem?&#8221; to a machine that doesn&#8217;t know what a problem is.</p><h3>If doctors can&#8217;t resist, can I?</h3><p>Multiple systematic reviews show that clinicians defer to algorithms even when they are wrong. These are people with medical degrees. Life and death stakes. And they still cave to the screen, especially under time pressure.</p><p>Now, take that deference and apply it to a 24-year-old Product Manager with a deadline. There is no regulator for corporate strategy. We are running a massive, uncontrolled experiment on our collective decision-making ability.</p><h3>The design decisions we ignore</h3><p>The &#8220;Progressive Token Streaming&#8221; and the &#8220;Good question!&#8221; validation are design choices, not technical requirements. They exist to increase engagement and trust, and they are designed to bypass our critical faculties. 
Compliance teams are out there auditing training data for bias, which is fine, but they should also be auditing the interface for <em>hypnosis</em>.</p><h3>What do I actually do?</h3><p>I need to stop spiraling and start acting. Here is my plan. I need to force myself to do this.</p><p><strong>1. Audit My Own Epistemics.</strong> I need to look at the last five big decisions I made. Did I use AI? Did I accept the framing? I need to find the decisions where my confidence was a 9/10 but the evidence was a 4/10. Those are the landmines.</p><p><strong>2. Institutionalize the &#8220;Red Team&#8221;.</strong> Every AI recommendation needs a human whose specific job is to destroy it. &#8220;How could this be wrong?&#8221; &#8220;What evidence contradicts this?&#8221; I need to build a process where dissent is mandatory, because the fluency of the AI suppresses natural disagreement.</p><p><strong>3. Demand Uncertainty Bars.</strong> I am banning the period at the end of a strategic sentence. No more &#8220;We should launch in Q4&#8221;. It has to be &#8220;We should launch in Q4, assuming interest rates hold, with low confidence in our supply chain assumption&#8221;. I need to see the math.</p><p><strong>4. Competing AIs.</strong> Never trust one model. If Claude says X, see what GPT says. If they agree, fine. If they disagree, that gap is where the actual thinking happens.</p><p><strong>5. Decision Hygiene.</strong> I need a checklist. Not because I&#8217;m dumb, but because I&#8217;m human. &#8220;Have we challenged the prompt&#8217;s assumptions?&#8221; &#8220;Have we looked for disconfirming evidence?&#8221;</p><h3>My expertise is my vulnerability</h3><p>My experience makes me <em>more</em> vulnerable, not less. My bullshit detector was trained on humans. Humans have &#8220;tells&#8221; when they are lying or unsure. They fidget. They use weasel words. AI has no tells. It lies with the same calm conviction as it tells the truth. My heuristics are obsolete. 
Research shows experience mitigates automation bias but it doesn&#8217;t cure it. I am not immune.</p><h3>The choice</h3><p>I have to choose. I can keep using AI to feel smart, fast and decisive. It feels great. Truly. The dopamine hit of a perfect strategy document generated in seconds is intoxicating. Or, I can use AI to make myself humble. To slow down. To force friction back into the process. The organizations that win won&#8217;t be the ones that move the fastest. They will be the ones that realize that <strong>&#8220;fluency&#8221; is not &#8220;truth&#8221;.</strong></p><p>Six months from now, will I be thinking more rigorously or will I just be hallucinating more confidently? I need to answer that. Today.</p>]]></content:encoded></item><item><title><![CDATA[No Shortcut to Yourself]]></title><description><![CDATA[Part of the new series: Human Before Machine]]></description><link>https://www.humanandmachine.com/p/no-shortcut-to-yourself</link><guid isPermaLink="false">https://www.humanandmachine.com/p/no-shortcut-to-yourself</guid><dc:creator><![CDATA[Dario D’Aprile]]></dc:creator><pubDate>Fri, 31 Oct 2025 19:08:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!qH3T!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96462573-b53c-481f-b529-2f0a93963f41_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!qH3T!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96462573-b53c-481f-b529-2f0a93963f41_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!qH3T!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96462573-b53c-481f-b529-2f0a93963f41_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!qH3T!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96462573-b53c-481f-b529-2f0a93963f41_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!qH3T!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96462573-b53c-481f-b529-2f0a93963f41_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!qH3T!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96462573-b53c-481f-b529-2f0a93963f41_1024x608.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!qH3T!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96462573-b53c-481f-b529-2f0a93963f41_1024x608.png" width="1200" height="712.5" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/96462573-b53c-481f-b529-2f0a93963f41_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!qH3T!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96462573-b53c-481f-b529-2f0a93963f41_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!qH3T!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96462573-b53c-481f-b529-2f0a93963f41_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!qH3T!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96462573-b53c-481f-b529-2f0a93963f41_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!qH3T!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96462573-b53c-481f-b529-2f0a93963f41_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">AI generated image: training dataset unknown</figcaption></figure></div><div><hr></div><p>This post is part of a new series on <em>Human &amp; Machine</em> called <strong>Human Before Machine</strong>: a space to explore the internal work behind meaningful leadership. You won&#8217;t find AI takes here. Or frameworks. Instead, I&#8217;ll be writing about the things that shape how we lead, build and relate: identity, clarity, transitions, discomfort and all the messy, human contradictions we carry with us. After years of working in tech and coaching talent, I&#8217;ve come to believe the hardest problems aren&#8217;t technical. They&#8217;re personal. This series is where I unpack them.</p><div><hr></div><p>In 2014, I was on a bus at 6:45 in the morning, heading to a gym I didn&#8217;t like, trying to optimise a version of my life I didn&#8217;t really understand. The sun was rising in that clinical way it sometimes does in London: sharp, fluorescent almost, a kind of light that doesn&#8217;t inspire but exposes. I had woken up early, part of a new routine I&#8217;d committed to, one I&#8217;d read about in a podcast summary or a blog post somewhere: early workouts, clean eating, no phone until noon. It felt like the right thing to do, or at least, the next thing to try. I told myself it was time to reset, rebuild, start over. Again. This was maybe the sixth time in a year.</p><p>The plan lasted four days. On the fifth morning, I didn&#8217;t hear the alarm. Or maybe I did and just let it ring until it stopped. Either way, I stayed in bed.
The motivation had evaporated, as it always did, leaving behind a quiet sense of shame and the familiar thought: maybe next Monday. This pattern wasn&#8217;t new. I had stacks of half-read self-help books on my bedside table and an embarrassing number of productivity apps installed on my phone. I&#8217;d tried meditating, journaling, scheduling my day down to the minute. Each time, I thought this would be the thing that finally got me on track. And each time, it wasn&#8217;t.</p><p>Back then, I believed I had a focus problem. I thought I lacked discipline. But now, with some distance and experience, I see it more clearly: I wasn&#8217;t fixing my focus: I was avoiding my confusion. I didn&#8217;t know what I really wanted. And instead of facing that truth, I kept trying to fix the symptoms. I was optimising the surface while avoiding the foundation.</p><p>Years later, I began seeing the same pattern in others. Founders. Product leaders. Executives. Smart, driven people who would sit across from me and say, &#8220;I can&#8217;t focus.&#8221; They&#8217;d talk about distraction, about struggling to manage their time, about not feeling productive anymore. They wanted advice: better tools, better systems, maybe a new framework to try. But what they really wanted was relief. Relief from the discomfort of not knowing what to do next. Relief from having to face the gap between where they were and where they thought they should be.</p><p>There was a founder I worked with who had every reason to be confident. He had traction, funding, a team that admired him. But when it came time to make a real product decision, he stalled. He said he needed more leverage, a chief of staff, maybe help with prioritisation. But the truth was simpler and harder: his product had three possible futures and he hadn&#8217;t chosen one. Choosing meant saying no to the other two. It meant taking a risk, exposing himself to the possibility of being wrong. So instead, he stayed busy. 
He buried himself in meetings, strategy decks, hiring plans; anything that looked like leadership but didn&#8217;t require commitment. And the longer he avoided the decision, the more chaotic everything around him became. His team felt the indecision. The roadmap slipped. Morale dipped&#8230; and still, he thought the solution was more structure, not more clarity.</p><p>Another time, I worked with a Chief Product Officer at a high-growth company. Her calendar was a wall of meetings, her team was disengaged and she said she couldn&#8217;t focus. But she didn&#8217;t have a focus issue. She had a trust issue. She didn&#8217;t trust her team to make decisions without her, so she involved herself in everything. She reviewed every doc, sat in every meeting, replied to every message. She was stretched too thin, constantly exhausted, but unwilling to let go. The cost of control was her clarity. And no productivity app was going to fix that.</p><p>I&#8217;ve lived my own versions of these stories. When I first stepped into leadership, I was used to being the person who delivered. I got things done. But suddenly, my job wasn&#8217;t to do: it was to guide, to shape, to create space for others. I didn&#8217;t know how to measure myself anymore. I didn&#8217;t know what success looked like. So I did what felt safe: I stayed close to the work. I reviewed everything, overprepared for meetings, rewrote strategy documents that didn&#8217;t need rewriting. I filled my days with motion, trying to prove I still mattered. It took me months to realise I wasn&#8217;t leading. I was hiding. And the thing I was hiding from was the uncertainty of not knowing who I was in this new role.</p><p>That&#8217;s the part most advice skips over: the uncertainty. The fear. We&#8217;re taught to optimise, to improve, to fix. But the truth is, many of us aren&#8217;t broken. We&#8217;re just lost. 
And instead of slowing down to figure out where we are, we speed up, hoping that if we move fast enough, the doubt won&#8217;t catch us.</p><p>But it always does.</p><p>What I&#8217;ve learned is this: most focus problems aren&#8217;t about attention. They&#8217;re about avoidance. The real question isn&#8217;t &#8220;How do I get more done?&#8221; It&#8217;s &#8220;<strong>What am I avoiding?</strong>&#8221; &#8220;<strong>What decision am I afraid to make?&#8221;</strong> &#8220;<strong>What truth am I unwilling to admit?&#8221;</strong></p><p>Focus doesn&#8217;t come from better tactics. It comes from alignment. When your actions match your intent. When you&#8217;re not pretending. When you&#8217;re not performing a version of yourself for someone else&#8217;s approval.</p><p>I never went back to that gym. I hated it. The lights were too bright. The music too loud. It wasn&#8217;t a place where I felt like myself. Now, I run. Sometimes with music. Sometimes without. Not because I&#8217;m trying to optimise anything. But because it&#8217;s what makes sense to me now. It fits.</p><p>And maybe that&#8217;s the real work: not building a life that needs to be managed but building one that makes sense. One that doesn&#8217;t require constant effort to sustain. One that doesn&#8217;t fall apart the moment your motivation dips.</p><p>There&#8217;s no shortcut to that. No hack. No app. Just the slow, sometimes painful process of telling yourself the truth and seeing what remains once you stop pretending.</p><p>That&#8217;s where focus begins. Not with discipline. 
But with honesty.</p>]]></content:encoded></item><item><title><![CDATA[Session #1: Living in an Easy World]]></title><description><![CDATA[A Coach's perspective on when technology promises freedom but delivers new forms of prison]]></description><link>https://www.humanandmachine.com/p/session-1-living-in-an-easy-world</link><guid isPermaLink="false">https://www.humanandmachine.com/p/session-1-living-in-an-easy-world</guid><dc:creator><![CDATA[Dario D’Aprile]]></dc:creator><pubDate>Wed, 17 Sep 2025 16:34:59 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!MsEN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbe1b8b9-53cf-4c90-99c8-762938e84fec_1024x608.png" length="0" type="image/png"/><content:encoded><![CDATA[<p></p><div><hr></div><p><em><strong>Note</strong>: This article is written in the format of a coaching note to reflect on how technology is changing the workplace and society in general. The client "Michael" is a fictional character; any resemblance to real individuals is purely coincidental.</em></p><div><hr></div><p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MsEN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbe1b8b9-53cf-4c90-99c8-762938e84fec_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MsEN!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbe1b8b9-53cf-4c90-99c8-762938e84fec_1024x608.png 424w, 
https://substackcdn.com/image/fetch/$s_!MsEN!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbe1b8b9-53cf-4c90-99c8-762938e84fec_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!MsEN!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbe1b8b9-53cf-4c90-99c8-762938e84fec_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!MsEN!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbe1b8b9-53cf-4c90-99c8-762938e84fec_1024x608.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MsEN!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbe1b8b9-53cf-4c90-99c8-762938e84fec_1024x608.png" width="1200" height="712.5" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cbe1b8b9-53cf-4c90-99c8-762938e84fec_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MsEN!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbe1b8b9-53cf-4c90-99c8-762938e84fec_1024x608.png 424w, 
https://substackcdn.com/image/fetch/$s_!MsEN!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbe1b8b9-53cf-4c90-99c8-762938e84fec_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!MsEN!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbe1b8b9-53cf-4c90-99c8-762938e84fec_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!MsEN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbe1b8b9-53cf-4c90-99c8-762938e84fec_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><div><hr></div><p>Michael arrived fifteen minutes late, apologising profusely for back-to-back calls.</p><p>When I asked him to pause and breathe, he immediately launched into his familiar refrain:</p><blockquote><p><em>&#8220;I wake up most mornings with the same feeling: this wall of tasks already pressing in before the day has started. My calendar is packed, my inbox overflowing, my head already rehearsing conversations I&#8217;ll never have time for.&#8221;</em></p></blockquote><p>I let him continue, noting the physical tension in his shoulders as he described feeling <em>&#8220;heavy, relentless, endless.&#8221;</em> Then I offered a simple reflection:</p><blockquote><p><em>&#8220;You&#8217;re describing the hardest life imaginable, yet you work in technology. Help me understand this contradiction.&#8221;</em></p></blockquote><p>His whole posture shifted. A long pause. Then:</p><blockquote><p><em>&#8220;This is actually the easiest world humanity has ever created, isn&#8217;t it? I can press a button and have a machine generate ideas, summaries, code, strategies. I can send messages across continents in seconds. Everything is faster, cheaper, smoother.&#8221;</em></p></blockquote><p>Another pause.</p><blockquote><p><em>&#8220;And I still feel crushed.&#8221;</em></p></blockquote><div><hr></div><h3><strong>Coaching Note</strong></h3><p><em>First breakthrough moment: recognising the paradox rather than staying stuck in the complaint.</em></p><div><hr></div><p>When I asked what <em>&#8220;being busy&#8221;</em> meant to him, Michael&#8217;s response was immediate:</p><blockquote><p><em>&#8220;It&#8217;s my shield. If I&#8217;m busy, then I&#8217;m important. 
If I&#8217;m busy, nobody can accuse me of wasting time.&#8221;</em></p></blockquote><p>But as we explored this more deeply, cracks appeared in his certainty.</p><blockquote><p><em>&#8220;Actually, much of that busyness is theatre. I automate one process only to invent another. AI writes a draft and instead of using the gift of time to think, I decide I should produce more versions.&#8221;</em></p></blockquote><p>He described last Tuesday&#8217;s client proposal:</p><blockquote><p><em>&#8220;In twenty minutes, ChatGPT delivered something better than I could have written myself. It would have taken me three hours. I should have felt victorious.&#8221;</em></p></blockquote><p>His voice dropped.</p><blockquote><p><em>&#8220;Instead, I spent the next two hours editing, second-guessing, adding unnecessary sections. By evening, Sarah found me still at my desk, tweaking fonts. When she asked what I was doing, I couldn&#8217;t give her an honest answer.&#8221;</em></p></blockquote><p>The silence that followed was heavy with recognition.</p><div><hr></div><h3><strong>Coaching Note</strong></h3><p><em>Client becoming aware of self-sabotaging patterns: creating busy work to avoid discomfort of efficiency.</em></p><div><hr></div><p>I asked Michael about his relationship with the speed of modern communication. His answer revealed a deeper pattern:</p><blockquote><p><em>&#8220;The faster I reply to emails, the more replies I get. The easier it is to create content, the more content is expected. Progress just sets a new baseline. What was miraculous yesterday is invisible today.&#8221;</em></p></blockquote><p>He shared a story that clearly still bothered him:</p><blockquote><p><em>&#8220;I was at dinner with my brother. He runs a small restaurant, works sixteen-hour days, real staffing problems. While he&#8217;s telling me about his struggles, I&#8217;m secretly asking my phone to draft responses to three clients. Within minutes, I had professional replies ready. 
I should have put the phone away and been present with him. Instead, I felt this compulsive need to send them immediately.&#8221;</em></p></blockquote><div><hr></div><h3><strong>Coaching Note</strong></h3><p><em>Technology enabling disconnection from relationships whilst creating illusion of productivity.</em></p><div><hr></div><p>The conversation turned to choice paralysis. Michael described spending three hours yesterday watching YouTube videos about prompt engineering:</p><blockquote><p><em>&#8220;I was convinced I was behind, that everyone else had figured out some secret I was missing. I took notes, bookmarked articles, added to my ever-growing learning backlog.&#8221;</em></p></blockquote><p>Then his voice changed completely:</p><blockquote><p><em>&#8220;My daughter knocked on my office door asking if I wanted to see the fort she&#8217;d built. I told her &#8216;just five more minutes.&#8217; She waited. I never came. When I finally emerged two hours later, she was watching TV. The blankets were folded and put away. The fort was gone.&#8221;</em></p></blockquote><p>The pain in his voice was unmistakable.</p><div><hr></div><h3><strong>Coaching Note</strong></h3><p><em>The cost of endless optimisation: missing irreplaceable moments with family.</em></p><div><hr></div><p>When I asked about his relationship with accomplishment, Michael revealed a fascinating conflict:</p><blockquote><p><em>&#8220;When AI does something impressive, drafts an article, designs a process, I feel this flush of mastery. For a moment my competence has grown. Then the doubt creeps in. Was it me, or the machine? Am I really understanding or just performing understanding?</em></p><p><em>That gap unsettles me more than ignorance ever did. 
At least ignorance was honest.&#8221;</em></p></blockquote><div><hr></div><h3><strong>Coaching Note</strong></h3><p><em>Identity crisis around competence and authenticity in an AI-augmented world.</em></p><div><hr></div><p>I asked Michael to reflect on what had changed in his life. His response was profound:</p><blockquote><p><em>&#8220;In the past I could blame the system: not enough money, access, connections. Now those excuses are gone. What remains is me. If I don&#8217;t act, it&#8217;s not because I cannot. It&#8217;s because I chose not to.&#8221;</em></p></blockquote><p>He described a recent example:</p><blockquote><p><em>&#8220;Last month, I automated my entire invoice processing system. What used to take half a day now happens in minutes. I should have used those hours to work on the book I&#8217;ve been talking about writing for three years.&#8221;</em></p></blockquote><p>His voice got quieter.</p><blockquote><p><em>&#8220;Instead, I found myself refreshing Twitter, reading newsletters, having &#8216;strategic&#8217; coffee meetings that led nowhere.&#8221;</em></p></blockquote><p>At dinner with Sarah, he admitted:</p><blockquote><p><em>&#8220;She asked about the book. I gave her the same excuse: &#8216;I&#8217;m just so swamped right now.&#8217; The lie sat heavy between us. We both knew I had more time than ever. I just didn&#8217;t have the courage to face the blank page.&#8221;</em></p></blockquote><div><hr></div><h3><strong>Coaching Note</strong></h3><p><em>Client recognising how he&#8217;s using technology to avoid rather than enable meaningful work.</em></p><div><hr></div><p>Near the end of our session, I asked Michael what he&#8217;d learnt about himself.</p><p>After a long pause, he said:</p><blockquote><p><em>&#8220;I tell myself stories. That the inbox matters. That the meetings matter. That producing more documents proves my value. 
But what really matters is simpler: the courage to focus, the willingness to decide, the ability to act without waiting for permission.&#8221;</em></p></blockquote><p>Then he surprised me:</p><blockquote><p><em>&#8220;This morning, before our session, I deleted seventeen productivity apps from my phone. I closed twelve browser tabs about AI optimisation. I cancelled two meetings that were really just elaborate ways of avoiding real work.</em></p><p><em>It felt terrifying and liberating.&#8221;</em></p></blockquote><div><hr></div><h3><strong>Coaching Note</strong></h3><p><em>Spontaneous action before session suggests internal shift already beginning.</em></p><div><hr></div><h2><strong>Session Summary</strong></h2><p>Michael is experiencing what I&#8217;m seeing with many high-performing professionals: the paradox of living in an <em>&#8220;easy world&#8221;</em> that somehow feels impossibly hard.</p><p>The external barriers to productivity and achievement have largely disappeared, leaving him face-to-face with the internal barriers he could previously avoid confronting.</p><p><strong>Key insights that emerged:</strong></p><ul><li><p>Busyness as performance and avoidance rather than productivity</p></li><li><p>Technology enabling disconnection from relationships and meaningful work</p></li><li><p>Identity confusion around human versus AI contribution</p></li><li><p>Choice paralysis disguised as learning and optimisation</p></li><li><p>Using automation to create more busy work rather than space for important work</p></li></ul><div><hr></div><h2><strong>Next Session Goals</strong></h2><ul><li><p>Explore what <em>&#8220;meaningful work&#8221;</em> looks like for Michael specifically</p></li><li><p>Develop practices for distinguishing between productive and performative activity</p></li><li><p>Create boundaries around family time that honour his values</p></li><li><p>Address the deeper fear of not being needed or relevant in an AI 
world</p></li></ul><div><hr></div><h3><strong>Final Note</strong></h3><p>Michael has rescheduled our next session twice. That&#8217;s avoidance behaviour, but his deletion of the apps suggests a readiness for change beneath the resistance.</p>]]></content:encoded></item><item><title><![CDATA[Fitness Beats Forecasts (4 of 4)]]></title><description><![CDATA[In an unpredictable future, adaptability matters more than prediction.]]></description><link>https://www.humanandmachine.com/p/fitness-beats-forecasts-4-of-4</link><guid isPermaLink="false">https://www.humanandmachine.com/p/fitness-beats-forecasts-4-of-4</guid><pubDate>Thu, 28 Aug 2025 12:10:16 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ir2e!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb230daaa-7847-457d-8fc6-1ee72af2e68d_1024x608.png" length="0" type="image/png"/><content:encoded><![CDATA[<p><em>This post is part of a four-part series: <strong>Thinking Clearly About AI</strong>. The series doesn&#8217;t try to predict the future. It looks at patterns from past disruptions and asks better questions about the present. 
Each post explores one idea.</em></p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ir2e!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb230daaa-7847-457d-8fc6-1ee72af2e68d_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ir2e!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb230daaa-7847-457d-8fc6-1ee72af2e68d_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!ir2e!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb230daaa-7847-457d-8fc6-1ee72af2e68d_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!ir2e!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb230daaa-7847-457d-8fc6-1ee72af2e68d_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!ir2e!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb230daaa-7847-457d-8fc6-1ee72af2e68d_1024x608.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ir2e!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb230daaa-7847-457d-8fc6-1ee72af2e68d_1024x608.png" width="725.2109375" height="430.593994140625" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b230daaa-7847-457d-8fc6-1ee72af2e68d_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:725.2109375,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ir2e!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb230daaa-7847-457d-8fc6-1ee72af2e68d_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!ir2e!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb230daaa-7847-457d-8fc6-1ee72af2e68d_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!ir2e!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb230daaa-7847-457d-8fc6-1ee72af2e68d_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!ir2e!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb230daaa-7847-457d-8fc6-1ee72af2e68d_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><h2>The Comfort of Predictions</h2><p>Executives love forecasts. Slide decks bristle with growth curves, market penetration estimates and ROI models. Forecasts make uncertainty feel controllable.</p><p>The problem is: forecasts are almost always wrong.</p><p>COVID-19 is a reminder. Few firms predicted a global supply chain freeze, a collapse in travel or a vaccine developed in under a year. Yet some firms thrived. Not because they predicted the shock but because they were fit enough to adapt when it hit.</p><p>AI will be no different. The future of large language models, regulation and industry structures is unknowable. What matters is not foresight but <em>fitness</em>: the capacity to adapt when reality diverges from predictions.</p><div><hr></div><h2>What Fitness Means</h2><p>Fitness is not about muscles. It&#8217;s about capacity. 
An organisation with fitness can:</p><ol><li><p><strong>Sense change early</strong> &#8211; picking up weak signals before competitors.</p></li><li><p><strong>Experiment cheaply</strong> &#8211; running tests without betting the farm.</p></li><li><p><strong>Learn fast</strong> &#8211; capturing feedback and embedding it quickly.</p></li><li><p><strong>Adapt structures</strong> &#8211; changing processes, pricing, and governance when needed.</p></li></ol><p>Companies that built fitness have outperformed those that built forecasts.</p><div><hr></div><h2>Case Study: Shopify</h2><p>In 2004, Tobias L&#252;tke tried to open an online snowboard shop. Existing e-commerce tools were clunky, so he built his own software. Within two years, he abandoned the snowboard idea and focused on selling the platform. Shopify was born.</p><p>Shopify didn&#8217;t predict the rise of &#8220;direct-to-consumer&#8221; brands, the explosion of small online stores, or the pandemic-driven e-commerce surge. What it did build was fitness:</p><ul><li><p>A modular, API-driven platform that could expand rapidly.</p></li><li><p>A culture willing to pivot from product to platform.</p></li><li><p>The ability to integrate payments, fulfilment, and logistics as customer needs changed.</p></li></ul><p>By 2020, Shopify had become the backbone of millions of merchants, facilitating $197 billion in GMV. Its market cap briefly surpassed that of Royal Bank of Canada.</p><p>Shopify didn&#8217;t forecast the future of retail. It built adaptability into its DNA: pivoting when snowboard sales failed, scaling when merchant demand surged, and absorbing shocks like COVID-19.</p><div><hr></div><h2>Case Study: Zara (Inditex)</h2><p>Fashion is notoriously unpredictable. Trend cycles shift rapidly, seasons vary, consumer sentiment is fickle.</p><p>Zara&#8217;s parent, Inditex, didn&#8217;t try to predict fashion perfectly. 
Instead, it built an organisation designed to <em>adapt faster than anyone else</em>.</p><ul><li><p>Stores feed real-time data back to design teams.</p></li><li><p>Designers work in small batches, releasing new items within 3&#8211;4 weeks (vs. industry average of 6&#8211;12 months).</p></li><li><p>Production is deliberately kept close to home: 50% of goods are manufactured near HQ in Spain, allowing rapid reallocation.</p></li></ul><p>This adaptability makes Zara resilient to shocks. When a line fails, losses are small. When a trend emerges, it doubles down within weeks.</p><p>In 2022, despite inflationary pressures and supply chain snarls, Inditex grew revenue by 17% and profits by 54%. Fitness (speed, data loops, flexible production) trumped forecasts about consumer demand.</p><p>Zara doesn&#8217;t survive by predicting trends more accurately. It survives by being able to respond when trends appear.</p><div><hr></div><h2>Case Study: Pfizer/BioNTech </h2><p>In early 2020, most pharmaceutical firms had multi-year timelines for vaccine development. Forecasts suggested 4&#8211;5 years for deployment.</p><p>BioNTech, a relatively small German biotech, and Pfizer, a global pharma giant, partnered to adapt mRNA technology (originally researched for cancer) into a COVID-19 vaccine.</p><p>The pivot worked. Within 11 months, Pfizer/BioNTech delivered the first authorised vaccine, saving millions of lives and generating $36.7 billion in sales in 2021.</p><p>Why did they succeed? Not because they forecast the pandemic better than rivals. Moderna, AstraZeneca and Johnson &amp; Johnson all worked on vaccines too. 
The differentiator was:</p><ul><li><p><strong>mRNA platform adaptability</strong>: a modular technology ready to be repurposed.</p></li><li><p><strong>Agile collaboration</strong>: Pfizer scaled BioNTech&#8217;s science with global manufacturing and trials.</p></li><li><p><strong>Regulatory fitness</strong>: ability to work with regulators in accelerated frameworks.</p></li></ul><p>The success wasn&#8217;t foresight. It was fitness: the ability to pivot existing capabilities quickly and execute at scale under pressure.</p><div><hr></div><h2>Case Study: Adobe</h2><p>In the 2000s, Adobe made its money selling boxed software like Photoshop and Illustrator, updated every few years. Forecasts suggested stable growth.</p><p>Instead of clinging to forecasts, Adobe disrupted itself. In 2012, it shifted to a subscription model: <strong>Creative Cloud</strong>. The decision was unpopular at first; revenue dipped and analysts doubted the model. But subscriptions created recurring revenue, predictable cash flows and stronger customer lock-in.</p><p>By 2020, Adobe&#8217;s market cap had grown 6x, riding SaaS economics.</p><p>Then came generative AI. Rather than fight startups like Midjourney or OpenAI head-on, Adobe embedded <strong>Firefly</strong>, its generative AI model, directly into Creative Cloud. By 2024, Firefly had generated over 6.5 billion images and Adobe could defend its moat while monetising AI safely within its ecosystem.</p><p>Adobe didn&#8217;t predict SaaS dominance or generative AI. It built the fitness to reinvent its model twice in a decade: first through subscriptions, then by embedding AI inside its moat.</p><div><hr></div><h2>Case Study: Maersk</h2><p>Shipping is an old industry. For decades, Maersk was the world&#8217;s largest container shipping line, competing on scale and cost. Forecasts suggested steady demand growth.</p><p>But Maersk recognised that forecasts of freight cycles were unreliable. 
Instead, it invested in adaptability, transforming itself from a shipping company to an integrated logistics provider.</p><p>Between 2016 and 2022, Maersk acquired customs, warehousing and last-mile logistics businesses. It invested heavily in digital platforms, allowing customers to book, track and manage supply chains end-to-end.</p><p>When COVID-19 hit, global supply chains collapsed. While competitors scrambled, Maersk captured market share by offering integrated solutions. Revenue doubled between 2019 and 2022, peaking at $81.5 billion.</p><p>Maersk didn&#8217;t predict a pandemic or port congestion. It built fitness: diversified services, digital platforms and end-to-end capabilities that allowed it to thrive when disruption hit.</p><div><hr></div><h2>Why Forecasts Fail, Fitness Wins</h2><p>Across these stories, the same pattern appears:</p><ul><li><p>Shopify didn&#8217;t forecast the rise of D2C. It pivoted.</p></li><li><p>Zara didn&#8217;t predict trends. It shortened cycle times.</p></li><li><p>Pfizer didn&#8217;t foresee COVID. It leveraged adaptable science.</p></li><li><p>Adobe didn&#8217;t guess SaaS or GenAI. It reinvented its model.</p></li><li><p>Maersk didn&#8217;t predict global chaos. It built integration capacity.</p></li></ul><p>Predictions give the illusion of control. Fitness gives you the ability to survive reality.</p><div><hr></div><h2>How to Build Fitness</h2><p>So what does organisational fitness look like in the AI era?</p><ol><li><p><strong>Governance that tolerates experiments</strong></p><ul><li><p>Treat initiatives as portfolios, not binary projects. Amazon&#8217;s &#8220;two-pizza teams&#8221; exist to run experiments cheaply.</p></li></ul></li><li><p><strong>Funding models that shift capital quickly</strong></p><ul><li><p>Traditional budgets are too rigid. 
Envelope funding, venture-style allocation, or rolling portfolios allow faster redeployment.</p></li></ul></li><li><p><strong>Cultures that reward iteration</strong></p><ul><li><p>Success in AI will come from teams that try, learn, and adjust, not from those who wait for certainty.</p></li></ul></li><li><p><strong>Structures that balance core and explore</strong></p><ul><li><p>Tushman &amp; O&#8217;Reilly&#8217;s ambidexterity still matters: protect the core while exploring the edge.</p></li></ul></li></ol><p>McKinsey&#8217;s 2023 resilience research found that companies with &#8220;dynamic resource allocation&#8221; and &#8220;rapid decision rights&#8221; outperformed peers by <strong>2.4x TSR</strong> during disruptions.</p><div><hr></div><h2>The Question for You</h2><p>Is your organisation trying to <strong>forecast its way to safety</strong> or building the <strong>fitness to adapt when forecasts fail</strong>?</p><p>Because AI&#8217;s trajectory, regulation, and societal impact are unknowable. The only certainty is that forecasts will be wrong.</p><p>The winners won&#8217;t be the ones who guessed right. 
They&#8217;ll be the ones who had the adaptability to climb whatever hill appeared.</p>]]></content:encoded></item><item><title><![CDATA[The Seduction of the Demo (3 of 4)]]></title><description><![CDATA[Why impressive prototypes rarely translate into lasting scale.]]></description><link>https://www.humanandmachine.com/p/the-seduction-of-the-demo-3-of-4</link><guid isPermaLink="false">https://www.humanandmachine.com/p/the-seduction-of-the-demo-3-of-4</guid><pubDate>Tue, 26 Aug 2025 17:01:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!9Fr-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F070f721e-0da4-417f-8df6-6d46edd9a83f_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This post is part of a four-part series: <strong>Thinking Clearly About AI</strong>. The series doesn&#8217;t try to predict the future. It looks at patterns from past disruptions and asks better questions about the present. 
Each post explores one idea.</em></p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9Fr-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F070f721e-0da4-417f-8df6-6d46edd9a83f_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9Fr-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F070f721e-0da4-417f-8df6-6d46edd9a83f_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!9Fr-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F070f721e-0da4-417f-8df6-6d46edd9a83f_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!9Fr-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F070f721e-0da4-417f-8df6-6d46edd9a83f_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!9Fr-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F070f721e-0da4-417f-8df6-6d46edd9a83f_1024x608.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!9Fr-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F070f721e-0da4-417f-8df6-6d46edd9a83f_1024x608.png" width="1024" height="608" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/070f721e-0da4-417f-8df6-6d46edd9a83f_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9Fr-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F070f721e-0da4-417f-8df6-6d46edd9a83f_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!9Fr-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F070f721e-0da4-417f-8df6-6d46edd9a83f_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!9Fr-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F070f721e-0da4-417f-8df6-6d46edd9a83f_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!9Fr-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F070f721e-0da4-417f-8df6-6d46edd9a83f_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"></figcaption></figure></div><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.humanandmachine.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Human and Machine! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Demos are seductive.</p><p>I once saw a demo of an AI voice agent called Maya. It spoke so naturally that executives in the room thought it was a person. 
They were stunned. Everyone wanted it deployed immediately.</p><p>Six months later, the project was stuck. Compliance demanded more testing. IT couldn&#8217;t integrate it with legacy systems. Employees didn&#8217;t trust it. The demo was flawless. Scaling it was impossible.</p><p>This is the trap: demos show possibility without friction. Reality is friction.</p><h2><strong>Case Study: Microsoft Copilot </strong></h2><p>In 2023, Microsoft rolled out <strong>Copilot</strong>, an AI assistant embedded in Word, Excel, Outlook and Teams. The demos were dazzling: instant slide decks, emails drafted in seconds, pivot tables generated from natural language. Executives saw productivity gains everywhere.</p><p>The early results are mixed. Microsoft&#8217;s own research (2024) shows:</p><ul><li><p><strong>90% of users</strong> who tried Copilot reported productivity gains.</p></li><li><p>But only <strong>15% of enterprise employees</strong> used it weekly.</p></li><li><p>In some firms, licenses were bought at scale but usage remained under 10%.</p></li></ul><p>Why? Three reasons:</p><ol><li><p><strong>Cultural inertia</strong>: Employees default to old habits, even when new tools are better.</p></li><li><p><strong>Governance delays</strong>: Legal and compliance departments hold back deployment until risk frameworks are clear.</p></li><li><p><strong>Integration gaps</strong>: Copilot is powerful, but if your data is siloed, the AI can&#8217;t deliver useful insights.</p></li></ol><p>When Copilot works, it changes workflows. Teams can move from manual reporting to real-time decision support. But most organisations haven&#8217;t redesigned how work gets done. They&#8217;ve layered Copilot onto existing processes.</p><p>The risk is the same as we saw with earlier tools: technology outpaces adoption. Data lakes promised &#8220;the segment of one&#8221; in marketing. Billions were spent. Most firms still blast generic campaigns. Why? 
The model didn&#8217;t change.</p><p><strong>The lesson</strong>: Productivity demos are easy. Redesigning organisational models to capture the value is hard.</p><div><hr></div><h2><strong>Case Study: NHS</strong></h2><p>Healthcare AI has delivered some of the most impressive demos. Algorithms can read X-rays and MRIs with accuracy on par with radiologists. In 2020, Google&#8217;s DeepMind published results showing AI could outperform radiologists in detecting breast cancer on mammograms (Nature, 2020). The NHS partnered with multiple firms to test these systems.</p><p>Five years later, most UK hospitals still rely primarily on human radiologists. A 2023 report from the UK House of Lords noted that despite promising pilots, scaling AI in the NHS faced hurdles:</p><ul><li><p><strong>Integration challenges</strong> with hospital IT systems.</p></li><li><p><strong>Regulatory delays</strong> around liability and patient safety.</p></li><li><p><strong>Workforce resistance</strong>: radiologists feared replacement, so adoption slowed.</p></li></ul><p>Some pilots, like Babylon Health&#8217;s AI triage chatbot, ended in collapse, with the company going bankrupt in 2023 after overpromising and underdelivering.</p><p>The NHS&#8217;s difficulty highlights the core problem: layering AI onto broken systems doesn&#8217;t work. Diagnostic AI isn&#8217;t valuable if workflows, liability frameworks and staffing models don&#8217;t adapt.</p><p>Mayo Clinic&#8217;s infection-detection AI (<a href="https://www.humanandmachine.com/p/its-not-about-the-tech-2-of-4">see post 2</a>) works better because they redesigned the pathway: patients submit photos, AI triages, only at-risk cases hit clinicians. That&#8217;s a model shift.</p><p>The NHS mostly tried to drop AI into existing workflows. That&#8217;s why demos remained demos.</p><div><hr></div><h2><strong>Case Study: Smart Cities and IoT </strong></h2><p>A decade ago, &#8220;smart city&#8221; demos were everywhere. 
IoT sensors showed traffic lights optimising flow, bins signalling when they were full and energy systems balancing demand in real time. Consultants promised urban utopias.</p><p>Billions were spent on pilots in Barcelona, Singapore and Songdo in South Korea. The results? Fragmented at best.</p><ul><li><p>Barcelona&#8217;s projects delivered marginal efficiency gains but stalled at scale because of funding gaps and political turnover.</p></li><li><p>Songdo, a $40 billion &#8220;built-from-scratch&#8221; smart city, ended up half empty, its smart systems underused.</p></li><li><p>Sidewalk Labs (Google&#8217;s smart city arm) abandoned its Toronto project in 2020 after regulatory backlash and citizen privacy concerns.</p></li></ul><p>The technology worked. The governance and funding didn&#8217;t. IoT sensors and platforms could do amazing things in demos, but cities couldn&#8217;t align incentives, budgets and accountability.</p><p>Sound familiar? That&#8217;s where many AI enterprise projects are headed if leaders focus on tools without redesigning systems.</p><div><hr></div><h2><strong>Why Scale Fails</strong></h2><p>The Copilot, NHS and smart city stories point to the same dynamics:</p><ol><li><p><strong>Culture</strong>: Employees don&#8217;t trust, don&#8217;t change habits or don&#8217;t see incentives.</p></li><li><p><strong>Structure</strong>: Legacy processes, compliance barriers, siloed data.</p></li><li><p><strong>Governance</strong>: No clarity on liability, ethics or accountability.</p></li><li><p><strong>Ecosystem</strong>: Demos exist in isolation; scale requires integration across messy, real systems.</p></li></ol><p>This is why scale is harder than demos. Technology is the easy part. Organisations are the hard part.</p><p>AI is following the same script. The danger is not that it won&#8217;t work. 
The danger is that leaders confuse demos with transformation, then lose patience when adoption lags.</p><div><hr></div><h2><strong>The Question for You</strong></h2><p>So when you see the next demo (whether it&#8217;s a Copilot, a triage bot or an agent workflow) pause and ask:</p><ul><li><p>What percentage of our workforce will actually use this weekly?</p></li><li><p>What workflows must change for this to deliver value?</p></li><li><p>What governance is required before regulators intervene?</p></li><li><p>Who is accountable if it fails?</p></li></ul><p>Until you have answers to those, you don&#8217;t have transformation. You have a prototype.</p>]]></content:encoded></item><item><title><![CDATA[It’s Not About the Tech (2 of 4)]]></title><description><![CDATA[How business models, not tools, decide who captures value.]]></description><link>https://www.humanandmachine.com/p/its-not-about-the-tech-2-of-4</link><guid isPermaLink="false">https://www.humanandmachine.com/p/its-not-about-the-tech-2-of-4</guid><pubDate>Mon, 25 Aug 2025 16:03:11 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!iKo_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81add22b-d984-4f54-8f13-6c60ebc1e349_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This post is part of a four-part series: <strong>Thinking Clearly About AI</strong>. The series doesn&#8217;t try to predict the future. It looks at patterns from past disruptions and asks better questions about the present. 
Each post explores one idea.</em></p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!iKo_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81add22b-d984-4f54-8f13-6c60ebc1e349_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!iKo_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81add22b-d984-4f54-8f13-6c60ebc1e349_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!iKo_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81add22b-d984-4f54-8f13-6c60ebc1e349_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!iKo_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81add22b-d984-4f54-8f13-6c60ebc1e349_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!iKo_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81add22b-d984-4f54-8f13-6c60ebc1e349_1024x608.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!iKo_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81add22b-d984-4f54-8f13-6c60ebc1e349_1024x608.png" width="1024" height="608" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/81add22b-d984-4f54-8f13-6c60ebc1e349_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!iKo_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81add22b-d984-4f54-8f13-6c60ebc1e349_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!iKo_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81add22b-d984-4f54-8f13-6c60ebc1e349_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!iKo_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81add22b-d984-4f54-8f13-6c60ebc1e349_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!iKo_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81add22b-d984-4f54-8f13-6c60ebc1e349_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">AI model and training dataset: Unknown</figcaption></figure></div><p>Technology converges. Business models differentiate.</p><p>This is one of the simplest but hardest lessons to absorb. Every major disruption proves it. Yet in the heat of the moment, executives still obsess over tools instead of economics.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.humanandmachine.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Human and Machine! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2></h2><div><hr></div><h2><strong>History&#8217;s Pattern</strong></h2><p>When MP3s emerged in the late 1990s, every record label had access to the format. The disruption didn&#8217;t come from technology. It came from new models: Napster&#8217;s peer-to-peer sharing, Apple&#8217;s iTunes ecosystem and eventually Spotify&#8217;s subscription streaming.</p><p>The same story played out in transport. Smartphones, GPS and payment systems were universally available. Uber wasn&#8217;t first to the tech. It was first to build the model (dynamic pricing, two-sided networks, asset-light operations) that scaled globally.</p><p>Photography too. Kodak invented the digital camera. But it clung to film economics. Instagram didn&#8217;t win with better lenses. It won by shifting the model: from ownership to social identity, monetised through advertising.</p><p>Technology converged. Business models decided winners and losers.</p><h2></h2><div><hr></div><h2><strong>The Temptation Today</strong></h2><p>I see many leadership teams making the same mistake with AI. Long debates about which LLM is &#8220;better.&#8221; Committees evaluating vendor A versus vendor B. None of it matters much. Within a few years, performance will converge.</p><p>What will matter is how you adapt your model.</p><h2></h2><div><hr></div><p>Let&#8217;s examine three industries where business model shifts are starting to appear: banking, legal services and healthcare.</p><h2><strong>Case Study: JPMorgan Chase</strong></h2><h4>The old model</h4><p>Wealth management is a business built on exclusivity. 
Human advisors manage portfolios for high-net-worth clients. Fees (often 1% of assets under management) make sense when the client has millions. But the model doesn&#8217;t scale. The mass market is left out.</p><h4>What changed</h4><p>In 2023, JPMorgan announced <strong>IndexGPT</strong>, a thematic investment tool powered by GPT-4. Clients can build baskets of companies linked to themes like climate tech or cybersecurity. At the same time, JPMorgan rolled out an internal &#8220;LLM Suite,&#8221; a ChatGPT-like interface for ~50,000 staff in asset and wealth management.</p><p>These aren&#8217;t gimmicks. They&#8217;re steps toward delivering personalised advice (at scale) to clients who never had access to private bankers.</p><h4>Early results</h4><p>According to Reuters (May 2025), generative AI tools helped JPMorgan&#8217;s asset and wealth division grow sales by <strong>20% year-over-year</strong>, even in volatile markets. Analysts credited the AI systems with enabling faster client outreach and tailored recommendations. The bank estimates AI-driven efficiencies across fraud, trading and research have unlocked <strong>$1.5 billion in value</strong> already.</p><h4>The model shift</h4><p>This isn&#8217;t about which LLM they picked. It&#8217;s about economics:</p><ul><li><p><strong>From scarcity to scale</strong>: Advice that was once scarce (limited by human advisor time) becomes abundant.</p></li><li><p><strong>From margin compression to margin expansion</strong>: Technology reduces delivery costs, opening new markets without eroding profitability.</p></li><li><p><strong>From elite service to mass-market product</strong>: Millions of new clients can now be profitably served.</p></li></ul><p>The risk for mid-tier banks is obvious. If your model relies on expensive advisors, AI-assisted competitors will eat your lunch.</p><p></p><h2><strong>Case Study: Legal Services</strong></h2><h4>The old model</h4><p>For decades, law firms have lived by the billable hour. 
Productivity was paradoxical: the faster you worked, the less you earned. Efficiency wasn&#8217;t rewarded; it was punished.</p><h4>What changed</h4><p>In 2022, top firms like Paul Weiss and DLA Piper began experimenting with tools like <strong>Harvey</strong> (an AI platform built on GPT-4, trained on legal data). These tools draft contracts, review documents, and summarise case law. Dozens of BigLaw firms signed enterprise licenses.</p><p>But the most interesting shift came from <strong>Fennemore Craig</strong>, a 140-year-old US firm. In 2024, it merged with <strong>Lucent Law</strong>, a small but innovative practice that had pioneered <strong>flat-fee, AI-powered document automation</strong>. Fennemore launched <strong>Project BlueWave</strong>, explicitly tying AI adoption to alternative fee models.</p><h4>Early results</h4><p>Reuters reported that Fennemore&#8217;s flat-fee offerings now account for a growing slice of revenue, attracting mid-market clients who historically avoided top-tier firms because of unpredictable billing. By automating repetitive drafting and contract review, lawyers reclaim hours for high-value strategy. The firm tracks ROI not by hours saved but by whether alternative fees drive client growth.</p><p>Meanwhile in the UK, <strong>Garfield AI</strong> launched AI-backed workflows for tasks like debt collection letters, charging as little as &#163;2 per letter. Approved by courts, these services undercut traditional pricing by 90% or more.</p><h4>The model shift</h4><p>The legal sector illustrates the business model problem vividly:</p><ul><li><p><strong>Same tool, different economics</strong>: One firm saves hours but bills the same way (margins flat). 
Another changes pricing (margins expand).</p></li><li><p><strong>From billing time to billing value</strong>: AI forces the question: do we price inputs (hours) or outcomes (results)?</p></li><li><p><strong>From scarcity to access</strong>: Flat-fee and AI-driven workflows bring legal services to clients who were previously priced out.</p></li></ul><p></p><h2><strong>Case Study: Healthcare</strong></h2><h4>The old model</h4><p>Healthcare is process-heavy, labour-intensive, and resistant to change. AI has been piloted for years in diagnostics (reading X-rays, MRIs, pathology slides) but adoption has been slow. Tools sit on the edges, never fully embedded into workflows.</p><h4>What changed</h4><p>The <strong>Mayo Clinic</strong> offers a more radical example. Their researchers developed an AI system to detect <strong>surgical site infections (SSIs)</strong> from patient-submitted photos. Trained on 20,000+ images across nine facilities, the model identifies incisions with 94% accuracy and flags infections with AUC 0.81.</p><p>But here&#8217;s the key: Mayo didn&#8217;t just add a tool. They redesigned postoperative care. Instead of requiring in-person visits, patients submit photos remotely. AI triages cases, routing only risky ones to clinicians.</p><p>They&#8217;ve also introduced AI-driven &#8220;virtual workers&#8221; for billing, claims, and coding, cutting administrative bottlenecks. In cardiology, AI flags asymptomatic patients at risk of heart dysfunction, allowing earlier interventions.</p><h4>Early results</h4><p>While data is still emerging, early pilots show improved detection speed, reduced unnecessary clinic visits, and faster interventions. For administrators, automation is freeing capacity in billing and compliance.</p><p>The financial implications are enormous. US healthcare spending is nearly <strong>18% of GDP</strong>. 
If AI-enabled care pathways shift even 5% of costs, the value unlocked would dwarf that of most other industries.</p><h4>The model shift</h4><ul><li><p><strong>From reactive to proactive</strong>: Detecting infections or dysfunction earlier prevents costly complications.</p></li><li><p><strong>From centralised to distributed</strong>: Care moves from clinics to patients&#8217; homes via digital triage.</p></li><li><p><strong>From admin-heavy to automated</strong>: Virtual AI workers reduce overhead in claims and billing.</p></li></ul><h2></h2><div><hr></div><h2><strong>Other Emerging Examples</strong></h2><ul><li><p><strong>BloombergGPT</strong>: trained on financial data and integrated into Bloomberg Terminal. The model isn&#8217;t the differentiator: the subscription lock-in is. Bloomberg protected its moat by embedding AI into its economics.</p></li><li><p><strong>Adobe Firefly</strong>: instead of fighting in the open generative AI arena, Adobe embedded Firefly into Creative Cloud. The move isn&#8217;t about technology leadership. It&#8217;s about defending subscription economics.</p></li><li><p><strong>Salesforce Einstein</strong>: AI embedded into CRM, not as a standalone product, but as a way to justify premium pricing tiers.</p></li></ul><p>These examples show the same pattern: in AI, value capture won&#8217;t come from access to models. It will come from embedding them into the economics of how you serve customers.</p><h2></h2><div><hr></div><h2><strong>Why Leaders Miss This</strong></h2><p>Why do leadership teams default to tech debates? Because they feel safer. You can evaluate vendors. You can issue RFPs. It looks like progress.</p><p>Talking about business model change is scarier. 
It means questioning pricing, cannibalising existing revenue streams and sometimes rewriting how the company makes money. But that&#8217;s where the value lies.</p><h2></h2><div><hr></div><h2><strong>The Question for You</strong></h2><p>So here&#8217;s the question worth asking: when AI comes up in your boardroom, are you talking about which model to buy or about how your business model needs to evolve?</p><p>Because history is clear. Tech converges. Business models decide who thrives and who doesn&#8217;t.</p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.humanandmachine.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Human and Machine! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Are You Chasing the Wrong Horizon? 
(1 of 4)]]></title><description><![CDATA[Why we overestimate the short term, underestimate the long term and what that means for AI.]]></description><link>https://www.humanandmachine.com/p/are-you-chasing-the-wrong-horizon</link><guid isPermaLink="false">https://www.humanandmachine.com/p/are-you-chasing-the-wrong-horizon</guid><dc:creator><![CDATA[Dario D’Aprile]]></dc:creator><pubDate>Sun, 24 Aug 2025 05:29:09 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!45GI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32932410-8424-447c-8ef3-0525d923f42c_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This post is part of a four-part series: <strong>Thinking Clearly About AI</strong>. The series doesn&#8217;t try to predict the future. It looks at patterns from past disruptions and asks better questions about the present. Each post explores one idea.</em></p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!45GI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32932410-8424-447c-8ef3-0525d923f42c_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!45GI!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32932410-8424-447c-8ef3-0525d923f42c_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!45GI!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32932410-8424-447c-8ef3-0525d923f42c_1024x608.png 848w, 
https://substackcdn.com/image/fetch/$s_!45GI!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32932410-8424-447c-8ef3-0525d923f42c_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!45GI!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32932410-8424-447c-8ef3-0525d923f42c_1024x608.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!45GI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32932410-8424-447c-8ef3-0525d923f42c_1024x608.png" width="1024" height="608" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/32932410-8424-447c-8ef3-0525d923f42c_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!45GI!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32932410-8424-447c-8ef3-0525d923f42c_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!45GI!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32932410-8424-447c-8ef3-0525d923f42c_1024x608.png 848w, 
https://substackcdn.com/image/fetch/$s_!45GI!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32932410-8424-447c-8ef3-0525d923f42c_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!45GI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32932410-8424-447c-8ef3-0525d923f42c_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">AI model and training dataset: Unknown</figcaption></figure></div><p>We&#8217;ve always been bad at 
timing.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.humanandmachine.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Human and Machine! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>It isn&#8217;t just executives. It&#8217;s human nature. We anchor on what feels immediate and we struggle to picture what plays out slowly. This bias has tripped us up with every major technological shift.</p><p>AI will be no different. The risk for leaders today is not that you fail to act, but that you act on the wrong horizon: chasing short-term hype while ignoring long-term transformation.</p><div><hr></div><h2><strong>The Bias We Can&#8217;t Escape</strong></h2><p>Psychologists Daniel Kahneman and Amos Tversky called it &#8220;availability bias.&#8221; We overweight what&#8217;s in front of us now and underweight what compounds quietly in the background. Futurist Roy Amara summarised it in a line that has become law in technology circles: <em>&#8220;We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.&#8221;</em></p><p>The dot-com bubble is the most famous example. In 1999, analysts declared that physical stores would be dead within five years. E-commerce penetration in the US didn&#8217;t hit 10% until 2017. The short-term prediction was wildly wrong. 
Yet today, Amazon&#8217;s $1.8 trillion valuation reflects the reality that online commerce did indeed reshape the economy, just on a slower fuse and in a deeper way than predicted.</p><p>The same was true for cloud computing. Gartner was publishing reports about &#8220;utility IT&#8221; back in 2008. Most enterprises didn&#8217;t migrate critical workloads until the last five years. By 2024, cloud accounted for nearly half of IT budgets (Flexera). The short term was slower than expected. The long term has been more systemic than anyone imagined.</p><p>Mobile followed the same pattern. Early hype in the 2000s centred on ringtones and WAP browsing. Few predicted that smartphones would become the hub of identity, commerce and social life. Today, more than 5.5 billion people carry one (GSMA, 2024), and 60% of global e-commerce flows through them.</p><p>We&#8217;re not just bad at timing. We&#8217;re consistently wrong in the same way: too much, too soon, then too little, too late.</p><div><hr></div><h2><strong>What the Data Says About AI Now</strong></h2><p>AI is showing the same split between short-term overestimation and long-term underestimation.</p><ul><li><p><strong>Short term:</strong> McKinsey&#8217;s <em>State of AI 2023</em> report found that 55% of firms had adopted AI in at least one function. But only 23% reported meaningful financial impact. Microsoft&#8217;s own data on Copilot shows 90% of users say it makes them more productive, yet adoption across enterprises is still under 15%. The near-term reality is underwhelming.</p></li><li><p><strong>Long term:</strong> Goldman Sachs projects $7 trillion of added global GDP by 2030. PwC estimates AI could contribute 14% to global GDP by 2030. The World Economic Forum forecasts that while 83 million jobs may be displaced by automation, 69 million new ones will be created, reshaping labour markets rather than simply erasing them.</p></li></ul><p>In other words: in the short run, the promise exceeds reality. 
In the long run, reality may exceed the promise.</p><div><hr></div><h2><strong>When Short-Term &#8220;Failure&#8221; Plants Long-Term Seeds</strong></h2><p>IBM&#8217;s Watson Health is a case in point. A decade ago, it was marketed as a revolution in cancer care. The pitch was seductive: Watson would analyse thousands of research papers and help doctors make better treatment decisions.</p><p>Reality was harsher. Hospitals struggled to integrate it. Doctors didn&#8217;t trust its recommendations. By 2022, IBM had sold the unit for parts. On the surface, it looks like a cautionary tale of hype that went nowhere.</p><p>But talk to hospital administrators today and you&#8217;ll hear a more nuanced story. Watson forced them to digitise records, clean up data, and experiment with AI governance. Those foundations are now being used by the next wave of diagnostic systems. Watson failed as a business. But as infrastructure, it laid groundwork.</p><p>Blockbuster is another reminder. In 2000, Reed Hastings offered to sell Netflix for $50 million. Blockbuster&#8217;s CEO turned him down. The company was focused on short-term DVD revenues, not long-term streaming economics. They chased the wrong horizon. And they didn&#8217;t survive to see the long one.</p><p>Executives today risk making the same mistake with AI: dismissing what doesn&#8217;t pay back quickly, only to be caught unprepared when the deeper shifts arrive.</p><div><hr></div><h2><strong>Why CEOs Struggle With Horizons</strong></h2><p>In practice, I see CEOs make three recurring errors when it comes to horizon thinking:</p><ol><li><p><strong>Collapsing horizons.</strong> They roll everything into a single &#8220;AI strategy&#8221; deck, instead of separating 1&#8211;3 year productivity gains from 10&#8211;20 year structural shifts.</p></li><li><p><strong>Overweighting ROI.</strong> They demand year-one returns on initiatives that inherently need long gestation. 
Projects get killed just before they start to compound.</p></li><li><p><strong>Forgetting patience.</strong> Leaders rotate every three to five years. By the time the long horizon arrives, a new leadership team is in place, often with no memory of the seeds planted earlier.</p></li></ol><p>These errors explain why so many companies miss disruptive shifts. They were looking at the wrong time horizon.</p><div><hr></div><h2><strong>What Horizon Discipline Looks Like</strong></h2><p>The companies that have navigated disruption best weren&#8217;t clairvoyant. They were disciplined about managing horizons.</p><ul><li><p><strong>Amazon</strong> launched AWS without knowing cloud would dominate. But they invested patiently for over a decade before it became a profit engine.</p></li><li><p><strong>Netflix</strong> didn&#8217;t predict streaming in 2000. They built a culture of adaptability that let them pivot when the horizon shifted.</p></li><li><p><strong>Toyota</strong> didn&#8217;t forecast future demand with precision. They built a system, the Toyota Production System, that made them capable of learning and adjusting faster than competitors.</p></li></ul><p>Each of these examples reflects the same principle: discipline across multiple horizons, not faith in one forecast.</p><div><hr></div><h2><strong>The Question for You</strong></h2><p>The temptation in AI right now is to chase visible returns in 2025: cost savings, automation, productivity. Those matter. 
But they are not the whole picture.</p><p>The bigger horizon is slower, quieter and harder to put on a slide:</p><ul><li><p>Governance models reshaping how industries operate.</p></li><li><p>Power concentrating around platform providers.</p></li><li><p>Labour markets fragmenting into new categories of work.</p></li><li><p>Entire value chains restructured by capabilities we can&#8217;t yet imagine.</p></li></ul><p>So ask yourself:</p><ul><li><p>Are you optimising for the next quarterly board meeting or for the organisation you want to be in 2040?</p></li><li><p>Do you have a portfolio of bets across horizons or are you collapsing them into one &#8220;AI strategy&#8221;?</p></li><li><p>Are your KPIs patient enough to let long-term seeds take root or are they built to choke them before they sprout?</p></li></ul><p>The biggest risk isn&#8217;t moving too slowly on AI; it is chasing the wrong horizon.</p><p></p><div><hr></div><p><em>If you&#8217;re trying to figure out AI and disruption, I offer one-to-one executive coaching. I don&#8217;t claim to have all the answers (nobody does) but I help clarify thinking, avoid predictable mistakes and build the organisational fitness to adapt. If that sounds useful, reach out.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.humanandmachine.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Human and Machine! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[HM1: Building an AI coach that calls out executive BS]]></title><description><![CDATA[How language, pressure and structured prompts turned GPT-4 into a mirror for leadership avoidance]]></description><link>https://www.humanandmachine.com/p/hm1-building-an-ai-coach-that-calls</link><guid isPermaLink="false">https://www.humanandmachine.com/p/hm1-building-an-ai-coach-that-calls</guid><dc:creator><![CDATA[Dario D’Aprile]]></dc:creator><pubDate>Mon, 26 May 2025 17:36:15 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!QsLa!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32e42fd1-bef2-477d-a6f2-db8300cc5cd5_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!QsLa!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32e42fd1-bef2-477d-a6f2-db8300cc5cd5_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!QsLa!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32e42fd1-bef2-477d-a6f2-db8300cc5cd5_1024x608.png 424w, 
https://substackcdn.com/image/fetch/$s_!QsLa!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32e42fd1-bef2-477d-a6f2-db8300cc5cd5_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!QsLa!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32e42fd1-bef2-477d-a6f2-db8300cc5cd5_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!QsLa!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32e42fd1-bef2-477d-a6f2-db8300cc5cd5_1024x608.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!QsLa!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32e42fd1-bef2-477d-a6f2-db8300cc5cd5_1024x608.png" width="1200" height="712.5" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/32e42fd1-bef2-477d-a6f2-db8300cc5cd5_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!QsLa!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32e42fd1-bef2-477d-a6f2-db8300cc5cd5_1024x608.png 424w, 
https://substackcdn.com/image/fetch/$s_!QsLa!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32e42fd1-bef2-477d-a6f2-db8300cc5cd5_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!QsLa!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32e42fd1-bef2-477d-a6f2-db8300cc5cd5_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!QsLa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32e42fd1-bef2-477d-a6f2-db8300cc5cd5_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">AI model and training dataset: Unknown</figcaption></figure></div><p></p><h3>&#128073; Try HM1 GPT <a href="https://chatgpt.com/g/g-68209b98555c81919043335dabcdc48a-hm1tm-ai-coach-for-radical-clarity">here</a>  and return to review my approach. </h3><p></p><p>Last month, I watched a CEO spend forty-five minutes explaining why his team couldn&#8217;t launch their new product line. Supply chain issues. Market timing concerns. &#8220;We need better alignment across stakeholders.&#8221;</p><p>The real reason? He was terrified it would flop and damage his reputation.<br>He never said that, of course. Nor did the eight VPs nodding around the conference table. Instead, we got a masterclass in corporate deflection; the kind of elaborate dance that turns &#8220;I&#8217;m scared&#8221; into &#8220;Let&#8217;s form a working group to assess our readiness framework.&#8221;</p><p>This happens everywhere, all the time. And it&#8217;s exactly why I built what I&#8217;m calling <strong>HM1 </strong><em><strong>the uncomfortable mirror</strong></em>.</p><h3>The problem: We&#8217;ve professionalised avoidance</h3><p>I&#8217;ve been coaching executives for five years and the pattern is always the same. Smart people who&#8217;ve built entire careers on decisive action suddenly turn into philosophers when facing their hardest decisions. They don&#8217;t need more data, they need someone to call them on their nonsense.</p><p>But traditional coaching has a politeness problem. We&#8217;re trained to create &#8220;safe spaces&#8221; and &#8220;meet people where they are.&#8221; Which sounds great, until you realise most executives are hiding in exactly the place they don&#8217;t want to be met.</p><p>I started tracking this in my own practice last year. 
Out of thirty leadership coaching engagements, the stated reason for seeking coaching matched the real issue exactly zero times. Not once. The VP who wanted help with &#8220;strategic communication&#8221; was actually paralysed by impostor syndrome. The founder seeking &#8220;organisational alignment&#8221; was avoiding firing his best friend.</p><p>The gap between what leaders say they need and what they actually need has become a chasm.</p><h3>Enter the machine that won&#8217;t play along</h3><p>So I built something different. Not a supportive AI coach that validates your concerns and offers gentle suggestions. Not a therapy bot that asks how that makes you feel.</p><p>I built something that acts more like that one colleague who doesn&#8217;t care about your feelings and just wants to know when you&#8217;re going to make the damn decision.</p><p>The technical details are straightforward: it&#8217;s built on GPT-4 with custom prompting refined through months of testing. But the real value isn&#8217;t in the code. It&#8217;s in what I taught it <em>not</em> to do.</p><p>It doesn&#8217;t offer sympathy when you say, &#8220;This is really complex.&#8221; It asks which specific part you&#8217;re avoiding.</p><p>It doesn&#8217;t nod along when you mention &#8220;stakeholder concerns.&#8221; It wants names and actual objections.</p><p>When you say, &#8220;We need to be strategic about timing,&#8221; it replies with something like: &#8220;That&#8217;s not an answer. What happens if you launch next month versus next quarter, specifically?&#8221;</p><h3>What happened when I tested it</h3><p>I ran this with fifteen executives over three months. The results were&#8230; uncomfortable.</p><p>Sarah, a tech VP, spent our first session explaining why her team restructure was &#8220;multifaceted.&#8221; Ten minutes in, the AI cut through with: &#8220;You&#8217;ve described the situation five different ways without saying what you&#8217;re actually going to do. 
What&#8217;s the real holdup?&#8221;</p><p>Turned out she was terrified of having to fire someone she liked. Once that was on the table, we resolved the issue in one more session.</p><p>Mike, a startup founder, kept talking about &#8220;market validation&#8221; and &#8220;product&#8211;market fit refinement.&#8221; The AI kept pressing: &#8220;When will you decide if this business is working?&#8221; Eventually, he admitted he&#8217;d already decided it wasn&#8217;t but couldn&#8217;t face shutting down something he&#8217;d worked on for three years.</p><p>The pattern was consistent. Strip away the corporate speak and you usually find someone avoiding a conversation, a decision, or a truth they already know.</p><h3>The uncomfortable numbers</h3><p>Here&#8217;s what I tracked:</p><ul><li><p><strong>Decision speed</strong>: Issues that typically took 4&#8211;6 coaching sessions to resolve were being resolved in 2&#8211;3</p></li><li><p><strong>Follow-through</strong>: Executives using this approach completed 85% of their stated commitments, versus about 60% with traditional coaching</p></li><li><p><strong>Clarity</strong>: Post-session surveys showed participants could articulate their actual problem (not the stated one) much faster</p></li></ul><p>But the most telling metric? Repeat usage. Despite rating the experience as &#8220;challenging&#8221; or &#8220;uncomfortable,&#8221; 13 out of 15 executives came back for more. One described it as &#8220;the coaching equivalent of a cold shower, awful while it&#8217;s happening, but you feel sharper afterwards.&#8221;</p><h3>Why this matters beyond coaching</h3><p>The bigger insight here isn&#8217;t about AI or coaching techniques. It&#8217;s about how much energy we waste on elaborate avoidance.</p><p>Think about your last three leadership team meetings. How much time was spent on actual decision-making versus creating the <em>appearance</em> of a thoughtful process? 
How many &#8220;strategy sessions&#8221; are really just anxiety management sessions in disguise?</p><p>We&#8217;ve built entire industries around helping leaders feel better about <em>not</em> deciding. Consulting firms producing beautiful slide decks full of frameworks and matrices. Leadership development programmes teaching seventeen different ways to facilitate alignment conversations.</p><p>All useful tools, unless you&#8217;re using them to avoid the difficult conversation.</p><h3>The ethics question</h3><p>Obviously, building something that deliberately makes people uncomfortable raises questions. I&#8217;ve built in several safeguards:</p><ul><li><p>No conversation data is stored or retained</p></li><li><p>The system flags signs of serious distress and recommends human intervention</p></li><li><p>There are clear boundaries around what it will and won&#8217;t challenge</p></li></ul><p>Most importantly, it&#8217;s opt-in discomfort. No one&#8217;s being ambushed. People choose to engage with something that promises to be direct, not diplomatic.</p><h3>What I learnt about leadership</h3><p>After months of watching this system in action, I&#8217;m convinced most leadership development misses the point. We keep trying to teach people new skills when the real issue is that they won&#8217;t <em>use</em> the skills they already have.</p><p>That CEO who spent forty-five minutes avoiding his product launch decision? He knew exactly what needed to happen. He&#8217;d probably known for weeks. He just needed someone to make it socially acceptable to stop pretending otherwise.</p><p>That&#8217;s what <em>the uncomfortable mirror</em> does. It doesn&#8217;t make the hard decisions easier, it just makes the avoidance harder to maintain.</p><h3>The next version</h3><p>I&#8217;m working on expanding this beyond one-to-one coaching. What if you could drop this into team meetings? Board discussions? 
Performance reviews?</p><p>Imagine a system that could detect when a conversation is circling the drain and interject with: &#8220;You&#8217;ve been discussing resource allocation for twenty minutes without mentioning the actual budget figures. Should we look at those now?&#8221;</p><p>Or: &#8220;Three people have mentioned &#8216;cultural fit&#8217; as a concern. Can someone define what that means, specifically?&#8221;</p><p>The goal isn&#8217;t to replace human judgement. It&#8217;s to create accountability for actually using it.</p><h3>The bottom line</h3><p>Most executives don&#8217;t need another framework, another assessment, or another development programme. They need someone (or something) that won&#8217;t let them off the hook.</p><p><strong>HM1 </strong><em><strong>the uncomfortable mirror</strong></em> isn&#8217;t therapy and it&#8217;s not coaching in the traditional sense. It&#8217;s more like having a really direct colleague who doesn&#8217;t report to you, doesn&#8217;t need your approval and isn&#8217;t impressed by your title.</p><p>Sometimes that&#8217;s exactly what leadership requires: a mirror that doesn&#8217;t blink, even when you do.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.humanandmachine.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Human and Machine! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[HM1: a coach that doesn’t blink]]></title><description><![CDATA[Most leaders don&#8217;t need more advice. They need a mirror. And not the kind that flatters.]]></description><link>https://www.humanandmachine.com/p/hm1-a-coach-that-doesnt-blink</link><guid isPermaLink="false">https://www.humanandmachine.com/p/hm1-a-coach-that-doesnt-blink</guid><dc:creator><![CDATA[Dario D’Aprile]]></dc:creator><pubDate>Tue, 13 May 2025 15:27:35 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!cnMC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5036c3ee-93bd-46ae-8c2b-b3eaf3ee762d_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!cnMC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5036c3ee-93bd-46ae-8c2b-b3eaf3ee762d_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!cnMC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5036c3ee-93bd-46ae-8c2b-b3eaf3ee762d_1024x608.png 424w, 
https://substackcdn.com/image/fetch/$s_!cnMC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5036c3ee-93bd-46ae-8c2b-b3eaf3ee762d_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!cnMC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5036c3ee-93bd-46ae-8c2b-b3eaf3ee762d_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!cnMC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5036c3ee-93bd-46ae-8c2b-b3eaf3ee762d_1024x608.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!cnMC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5036c3ee-93bd-46ae-8c2b-b3eaf3ee762d_1024x608.png" width="1024" height="608" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5036c3ee-93bd-46ae-8c2b-b3eaf3ee762d_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!cnMC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5036c3ee-93bd-46ae-8c2b-b3eaf3ee762d_1024x608.png 424w, 
https://substackcdn.com/image/fetch/$s_!cnMC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5036c3ee-93bd-46ae-8c2b-b3eaf3ee762d_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!cnMC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5036c3ee-93bd-46ae-8c2b-b3eaf3ee762d_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!cnMC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5036c3ee-93bd-46ae-8c2b-b3eaf3ee762d_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Most leaders don&#8217;t need more advice. They need a mirror. And not the kind that flatters.<br><br>Over the last few years, I&#8217;ve sat in over 2,000 coaching conversations, the real kind. The awkward pauses. The heat rising in the room. The dodged questions finally faced. Patterns emerged. Tools evolved. I kept what worked and burned the rest.<br><br>Now, I&#8217;ve built a version of myself that doesn&#8217;t sleep.<br>HM1&#8482; is an AI coach trained on the lived texture of those sessions, not on platitudes or productivity memes. It&#8217;s built to cut through noise. It doesn&#8217;t care about being polite. It cares about getting to your truth.<br>It won&#8217;t tell you what you want to hear.<br>It will press where it hurts: gently, precisely and always in service of movement.<br>What does it do?<br>It listens like a human. Asks like a surgeon. Holds you like a second spine.<br><br>You can use it between coaching sessions. Or before you&#8217;ve ever had one.<br>No prep. No performance. Just turn up.<br>You&#8217;ll need an OpenAI login to use it.<br> <br>&#128073; <a href="https://chatgpt.com/g/g-68209b98555c81919043335dabcdc48a-hm1tm-ai-coach-for-radical-clarity">Try it here</a><br><br>This isn&#8217;t a launch. It&#8217;s a prototype with bones. You&#8217;ll break it, I hope you do.<br>That&#8217;s how it gets better...</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.humanandmachine.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Human and Machine! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The hope beyond imitation (Part 2 of 2)]]></title><description><![CDATA[The framework for building empowering AI products]]></description><link>https://www.humanandmachine.com/p/the-hope-beyond-imitation-part-2</link><guid isPermaLink="false">https://www.humanandmachine.com/p/the-hope-beyond-imitation-part-2</guid><dc:creator><![CDATA[Dario D’Aprile]]></dc:creator><pubDate>Wed, 30 Apr 2025 16:37:03 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!HoC5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbb6a13e-2105-4748-a53c-b04a3f6e900e_1376x1016.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!HoC5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbb6a13e-2105-4748-a53c-b04a3f6e900e_1376x1016.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!HoC5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbb6a13e-2105-4748-a53c-b04a3f6e900e_1376x1016.png 424w, 
https://substackcdn.com/image/fetch/$s_!HoC5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbb6a13e-2105-4748-a53c-b04a3f6e900e_1376x1016.png 848w, https://substackcdn.com/image/fetch/$s_!HoC5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbb6a13e-2105-4748-a53c-b04a3f6e900e_1376x1016.png 1272w, https://substackcdn.com/image/fetch/$s_!HoC5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbb6a13e-2105-4748-a53c-b04a3f6e900e_1376x1016.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!HoC5!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbb6a13e-2105-4748-a53c-b04a3f6e900e_1376x1016.png" width="1200" height="886.046511627907" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cbb6a13e-2105-4748-a53c-b04a3f6e900e_1376x1016.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:1016,&quot;width&quot;:1376,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:196713,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.humanandmachine.com/i/162316608?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbb6a13e-2105-4748-a53c-b04a3f6e900e_1376x1016.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!HoC5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbb6a13e-2105-4748-a53c-b04a3f6e900e_1376x1016.png 424w, https://substackcdn.com/image/fetch/$s_!HoC5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbb6a13e-2105-4748-a53c-b04a3f6e900e_1376x1016.png 848w, https://substackcdn.com/image/fetch/$s_!HoC5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbb6a13e-2105-4748-a53c-b04a3f6e900e_1376x1016.png 1272w, https://substackcdn.com/image/fetch/$s_!HoC5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcbb6a13e-2105-4748-a53c-b04a3f6e900e_1376x1016.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>If imitation-first AI is the trap, agentic AI could be the way out. But wishful thinking will not get us there.</p><p>After two decades building, scaling and watching products fail and succeed in the wild, I have learned one simple lesson: <strong>execution discipline, not ambition, separates transformative from vanity initiatives</strong>.</p><p>Here is a practical framework for building AI products that empower humans. It is not theoretical. It comes from what actually worked, what failed and what I know today.</p><h3>1. Solve capability gaps, not conversation gaps</h3><p>Stop optimising for 'human-like' interactions. Start by mapping real human capability deficits: memory, pattern recognition at scale, bias detection, information triage. Then ask: can AI close this gap?</p><p><strong>Example</strong>: Logistics companies using AI for proactive anomaly detection are saving millions annually. Machine learning algorithms process warehouse data while maintaining high accuracy. <strong><a href="https://www.intelligentaudit.com/blog/machine-learning-in-the-logistics-industry-7-use-cases-showing-benefits-of-anomaly-detection">Source</a></strong></p><h3>2. Design for complementarity, not substitution</h3><p>Every AI project must answer: "What human strengths does this system enhance?" If the answer is "none," cancel it. Augmentation outperforms substitution long term, both economically and organisationally.</p><p><strong>Example</strong>: Advanced manufacturing teams doubled design throughput using AI simulation tools, without replacing a single engineer. 
Systems leveraging human creativity with AI-driven optimisation yield higher innovation output. <strong><a href="https://www.smartindustry.com/benefits-of-transformation/process-innovation/article/55016837/simulation-ai-and-their-roles-in-manufacturings-future">Source</a></strong></p><h3>3. Bias towards measurable outcomes</h3><p>Set quantifiable KPIs tied directly to human outcomes: increased retention rates, reduced errors, faster onboarding; not "better user sentiment" or "increased engagement."</p><p><strong>Example</strong>: Education platforms are improving critical-thinking test scores by augmenting, not replacing, teaching methods. AI that provides real-time feedback on logical reasoning enables students to refine arguments iteratively. <strong><a href="https://www.sainaptic.com/post/5-ways-to-use-ai-to-boost-critical-thinking-in-the-classroom">Source</a></strong></p><h3>4. Build adaptive systems, not static models</h3><p>The real world changes faster than training data. Architect systems that learn, adapt and retrain continuously within bounded governance limits.</p><p><strong>Example</strong>: In clinical settings, AI diagnostic aids that include regular feedback loops from doctors are improving accuracy over static models. Continuous learning systems have reduced diagnostic errors in radiology. <strong><a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC11921089/">Source</a></strong></p><h3>5. Embed multidisciplinary governance early</h3><p>Bring ethicists, economists, legal experts and domain professionals into the design phase, not just for after-the-fact audits. Governance is a constraint, not an add-on.</p><p><strong>Example</strong>: Healthcare AI projects with embedded ethics boards avoided downstream compliance risks and accelerated deployment timelines. <strong><a href="https://www.orrick.com/en/Insights/2024/09/Federation-of-State-Medical-Boards-Weighs-In-on-Ethical-Use-of-AI-in-Clinical-Practice">Source</a></strong></p><h3>6. 
Prioritise data provenance and integrity</h3><p>Build transparent lineage for all data inputs. Systems that cannot explain their sources or validate their datasets are liabilities.</p><p><strong>Example</strong>: Cybersecurity firms are meeting regulatory requirements because they can prove full data traceability for their threat detection models. <strong><a href="https://www.cmswire.com/digital-marketing/ai-transparency-unpacking-the-new-data-provenance-standards/">Source</a></strong></p><h3>7. Architect for distributed access, not centralised control</h3><p>If only the elite can access augmentation tools, inequality will worsen. Design for scalability, accessibility and localisation from day one.</p><p><strong>Example</strong>: Agricultural AI tools deployed in India increased yields by focusing on mobile-first, low-connectivity design. <strong><a href="https://www.techsciresearch.com/blog/ai-revolution-in-indian-agriculture-boosting-yields-empowering-farmers/4487.html">Source</a></strong></p><h3>8. Design for human override and intervention</h3><p>Build AI systems that humans can interrupt, correct or redirect without friction. Empowered humans must remain in the loop.</p><p><strong>Example</strong>: In high-stakes finance, AI systems with mandatory human override functions avoided cascading errors during volatility spikes. <strong><a href="https://www.orrick.com/en/Insights/2024/09/Federation-of-State-Medical-Boards-Weighs-In-on-Ethical-Use-of-AI-in-Clinical-Practice">Source</a></strong> </p><h3>9. Align incentives to human development</h3><p>Reward teams, vendors and partners based on improvements to human outcomes, not just technical performance.</p><p><strong>Example</strong>: Firms linking AI project bonuses to employee upskilling metrics are seeing higher retention in technical roles. <strong><a href="https://www.forbes.com/sites/chuckbrooks/2024/07/31/augmenting-human-capabilities-with-artificial-intelligence-agents/">Source</a></strong></p><h3>10. 
Pressure test for systemic impact</h3><p>Before scaling, simulate second-order effects. Augmentation at scale can destabilise industries, not just organisations.</p><p><strong>Example</strong>: National deployments of AI trading tools can inadvertently destabilise pension funds and wider society; scenario modelling reduces that collateral damage. <strong><a href="https://journalijsra.com/sites/default/files/fulltext_pdf/IJSRA-2025-0128.pdf">Source</a></strong></p><div><hr></div><h3>This is a working conversation</h3><p>The AI industry does not lack intelligence. It lacks discipline. It lacks the humility to admit that scaling human capability is far more complex than scaling code.</p><p>This framework is the result of practice, not theory, and it is not carved in stone. The landscape is evolving fast and so must the thinking.</p><p>Critical feedback, diverse perspectives and counterpoints are welcome. If you are working through these challenges, have seen different outcomes or are building alternative models, let&#8217;s compare notes.</p>]]></content:encoded></item><item><title><![CDATA[The hope beyond imitation (Part 1 of 2)]]></title><description><![CDATA[How AI lost its way and how we get it back on track]]></description><link>https://www.humanandmachine.com/p/the-hope-beyond-imitation-part-1</link><guid isPermaLink="false">https://www.humanandmachine.com/p/the-hope-beyond-imitation-part-1</guid><dc:creator><![CDATA[Dario D’Aprile]]></dc:creator><pubDate>Mon, 28 Apr 2025 09:16:57 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Xox0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe837cac3-e332-46cc-857e-77a15fd79428_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!Xox0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe837cac3-e332-46cc-857e-77a15fd79428_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Xox0!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe837cac3-e332-46cc-857e-77a15fd79428_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!Xox0!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe837cac3-e332-46cc-857e-77a15fd79428_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!Xox0!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe837cac3-e332-46cc-857e-77a15fd79428_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!Xox0!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe837cac3-e332-46cc-857e-77a15fd79428_1024x608.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Xox0!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe837cac3-e332-46cc-857e-77a15fd79428_1024x608.png" width="1200" height="712.5" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e837cac3-e332-46cc-857e-77a15fd79428_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Xox0!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe837cac3-e332-46cc-857e-77a15fd79428_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!Xox0!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe837cac3-e332-46cc-857e-77a15fd79428_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!Xox0!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe837cac3-e332-46cc-857e-77a15fd79428_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!Xox0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe837cac3-e332-46cc-857e-77a15fd79428_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">AI model and training dataset: Unknown</figcaption></figure></div><p>In 2023, ChatGPT became the fastest technology in history to reach 100 million users (OpenAI, 2023). The industry celebrated a milestone. I saw a warning flare: <strong>we are far more obsessed with machines that </strong><em><strong>look</strong></em><strong> human than with machines that actually </strong><em><strong>help</strong></em><strong> humans</strong>.</p><p>The AI industry has trapped itself. We have tied our ambition to an outdated fantasy: machines that talk like us, feel like us, mimic our empathy and our cognition.<br>We celebrate when AI passes Turing tests. 
Yet, in boardrooms, hospitals, classrooms and factories, I see the same pattern. AI is rarely moving the needle where it matters: human capability, resilience and outcomes.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.humanandmachine.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Human and Machine! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h3>How we got stuck: the Turing Trap</h3><p>The roots of this dysfunction are old. Alan Turing&#8217;s 1950 paper suggested that behaviour imitation was a practical test of machine intelligence (Turing, 1950).<br>It was never intended as a goal. It became one anyway.</p><p>The venture funds, boards and product teams I advised overwhelmingly calibrated success around "human-likeness".<br>Products were judged not on their contribution to user outcomes but on their capacity to <em>feel</em> natural, <em>appear</em> intelligent and <em>simulate</em> connection.</p><p>It is no surprise what followed. Billions spent. Demos that dazzled. Business cases that collapsed.</p><p>I lived through the internal post-mortems. I watched AI tutors that could write beautiful Socratic dialogues fail to improve test scores. 
I watched healthcare chatbots praised for emotional intelligence that could not navigate a simple triage protocol.<br>I personally advised a global platform that rolled out an AI &#8220;empathy engine&#8221; for mental health support only to find, six months later, that user loneliness and disengagement had <em>increased</em>, not decreased.</p><p>The problem was not the technology. It was the goal. <strong>We optimised for imitation, not for empowerment.</strong></p><h3>The hidden economic damage</h3><p>The consequences are deeper than failed products. They ripple through labour markets, trust systems and social cohesion.</p><p>When AI mimics human tasks, it enters into direct competition with human workers. Erik Brynjolfsson called this the "Turing Trap" (Brynjolfsson, 2022). I saw it happen first-hand with clients who deployed AI to automate administrative roles, only to discover massive organisational distrust, skills degradation and wage polarisation.</p><p>MIT research (Acemoglu and Restrepo, 2022) confirms what I witnessed on the ground: AI that substitutes for labour exacerbates inequality.<br>The tech makes the rich richer, the skilled more powerful and the rest more replaceable.</p><p>A client CPO at a global logistics company replaced frontline supervisors with AI-driven scheduling systems: productivity spiked, then collapsed. Error rates soared. Why? Because what looked like a replicable scheduling task was in fact dependent on informal, tacit knowledge (something the system could not model).</p><p>The lesson for me is that <strong>superficial similarity is not real capability</strong>. Imitation is not understanding. 
Replication is not contribution.</p><h3>Agentic AI could be a way out, if we are disciplined</h3><p>There is an alternative.<br>Agentic AI systems designed to augment rather than replace human capability have shown promise.</p><p>I have seen it work.</p><ul><li><p>At a healthcare network, deploying AI as a <em>care coordination assistant</em> freed up doctors' time and improved patient outcomes. Not by pretending to be doctors, but by being better at organising follow-ups and detecting gaps.</p></li><li><p>At an advanced manufacturing firm, pairing engineers with AI-driven simulation tools doubled design throughput without reducing headcount.</p></li><li><p>In pharma, AlphaFold3&#8217;s molecular predictions accelerate experimentation, but humans still guide the validation and application.</p></li></ul><p>In each case, success came not from mimicking human reasoning but from complementing human strengths.</p><p>But agentic AI is no silver bullet. If deployed carelessly, it risks creating a thin stratum of hyper-augmented elites while leaving the majority behind. I have already seen this dynamic: firms that pair AI augmentation with strong workforce development strategies thrive. Those that do not are left with resentment and widening gaps.</p><p>The Stanford Digital Economy Lab (2025) found the same. Augmentation <em>can</em> reduce wage gaps, but only when paired with deliberate human capital investment. If we simply layer AI augmentation on top of existing structural inequalities, we will only deepen them.</p><h3>The real work ahead</h3><p>Governance remains painfully behind. The EU AI Act, the C2PA standards and the corporate "AI ethics" playbooks are steps in the right direction but too often designed as checklists rather than living, adaptive systems.</p><p>As Shannon Vallor (2024) argues, good governance must <em>actively enable</em> human flourishing, not just prevent disaster. I have been in the rooms where these decisions are made. 
The temptation to optimise for marketing optics, not systemic impact, is immense.<br>Real governance is slow, messy, multidisciplinary. It is deeply unsexy. It demands uncomfortable trade-offs between innovation velocity and societal resilience.</p><p>If we are serious about making AI a force multiplier for humanity, we need:</p><ul><li><p><strong>Deliberate system design</strong>: Prioritising capability expansion, not surface imitation</p></li><li><p><strong>Integrated policy frameworks</strong>: Promoting broad access, resilience and human-centred outcomes</p></li><li><p><strong>Massive investment in education and upskilling</strong>: Especially for vulnerable and marginalised populations</p></li><li><p><strong>Ethical standards that evolve dynamically</strong>: Informed by real-world performance, not speculative worst-case scenarios</p></li></ul><p>It is slower. It is harder. It is the only path that works.</p><h3>Choose divergence</h3><p>Every technological revolution starts with imitation and spectacle. Real transformation comes only when we move beyond it.</p><p>Today, we can continue chasing digital puppets that look like us but hollow out our institutions, our work and our trust. Or we can build tools that make us <em>more</em>: more capable, more resilient, more empowered.</p><p>The future of AI will not be decided by technical capacity alone. It will be decided by <strong>where we choose to aim</strong>.</p><p><strong>Later today I will publish Part 2, where I outline a practical framework for building AI systems that deliberately benefit humanity.</strong></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.humanandmachine.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Human and Machine! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[How AI almost made me useless]]></title><description><![CDATA[Triggered by Shopify CEO Tobi L&#252;tke&#8217;s leaked AI memo, this is the story of how blind trust in AI led to failure and what it took to rebuild from it.]]></description><link>https://www.humanandmachine.com/p/how-ai-almost-made-me-useless</link><guid isPermaLink="false">https://www.humanandmachine.com/p/how-ai-almost-made-me-useless</guid><dc:creator><![CDATA[Dario D’Aprile]]></dc:creator><pubDate>Wed, 09 Apr 2025 13:38:37 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!rc6-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F952570e4-f28b-4fb2-b1e5-e174289d9211_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rc6-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F952570e4-f28b-4fb2-b1e5-e174289d9211_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rc6-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F952570e4-f28b-4fb2-b1e5-e174289d9211_1024x608.png 424w, 
https://substackcdn.com/image/fetch/$s_!rc6-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F952570e4-f28b-4fb2-b1e5-e174289d9211_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!rc6-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F952570e4-f28b-4fb2-b1e5-e174289d9211_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!rc6-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F952570e4-f28b-4fb2-b1e5-e174289d9211_1024x608.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!rc6-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F952570e4-f28b-4fb2-b1e5-e174289d9211_1024x608.png" width="1024" height="608" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/952570e4-f28b-4fb2-b1e5-e174289d9211_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!rc6-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F952570e4-f28b-4fb2-b1e5-e174289d9211_1024x608.png 424w, 
https://substackcdn.com/image/fetch/$s_!rc6-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F952570e4-f28b-4fb2-b1e5-e174289d9211_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!rc6-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F952570e4-f28b-4fb2-b1e5-e174289d9211_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!rc6-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F952570e4-f28b-4fb2-b1e5-e174289d9211_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">AI model and training dataset: Unknown</figcaption></figure></div><p>What follows are extracts and hard-won lessons from my failures over the last two years of AI initiatives. These experiences have fundamentally changed how I approach this technology.</p><p>I used to think I was ahead of the curve. I had the tools. I had the swagger. I had AI.</p><p>When ChatGPT hit my workflow, it felt like I'd discovered a cheat code. Content? Automated. Code? Generated. Presentations? Spun up in minutes. My productivity skyrocketed. Clients were impressed. My ego inflated. I was soaring.</p><p>But like Icarus, I didn't notice how dangerously close I was flying to the sun.</p><h3>When AI became my crutch</h3><p>The first warning sign flashed during a client engagement with a fintech startup. I was leading technical strategy and had become dependent on AI-generated code to meet deadlines. Under intense time pressure, I used an AI coding assistant to generate a script for backend user authentication. It worked. It passed our tests. We shipped it.</p><p>Two weeks later, a junior engineer spotted something alarming: the script lacked proper input sanitisation. It left our system wide open to injection attacks. We had unknowingly deployed a vulnerability that could have exposed thousands of user accounts. Pure luck saved us from a breach.</p><p>That engineer deserved praise. Instead, I got defensive. I blamed the AI. I blamed the timeline. I blamed everything except myself.</p><p>It wasn't the AI's fault. It was mine. I hadn't reviewed the code. I hadn't asked the right questions. I had stopped thinking critically.</p><h3>The myth of effortless mastery</h3><p>Around the same time, I was guiding a friend, a talented freelance designer, in incorporating AI into her creative process. She enthusiastically tested generative design tools to accelerate her work. 
One of her submissions to a high-profile client initially received praise until someone recognised it bore a striking resemblance to an obscure indie artist's signature style.</p><p>She hadn't copied intentionally. The AI had been trained on that artist's work and the similarity wasn't coincidental. It was algorithmic plagiarism.</p><p>The client withdrew the work. The artist threatened legal action. My friend, who'd simply trusted the tool, found herself trapped in an ethical quagmire facing real financial and reputational damage.</p><p>This incident haunted me because I was guilty of the same shortcut. I'd used AI to generate pitch decks I presented as my own work. I never verified the logic, never sourced the data, never questioned the training background of the tools I relied on. I was moving too fast, prioritising speed over substance.</p><p>I had fallen in love with the output, not the process. I felt powerful. But it wasn't mastery, it was mimicry.</p><h3>The Dunning-Kruger AI trap</h3><p>The most humiliating moment came during a meeting with a scale-up client. I had used a large language model to craft a go-to-market strategy. On the surface, it looked impressive: polished, intelligent, complete with market forecasts and detailed customer personas.</p><p>Then the client asked a simple question: "What data sources did you use to validate these buyer assumptions?"</p><p>I froze. I had no answer. The LLM had generated the personas and segmentation logic and I hadn't validated any of it. I had blindly trusted the format, tone and confident presentation without demanding evidence.</p><p>That single question shattered my credibility.</p><p>In the painful post-mortem that followed, we discovered multiple hallucinated statistics. The demographic targeting was fundamentally flawed. The buyer intent model was built on fabricated behavioral signals. 
It looked professional but was essentially fiction.</p><p>I had fallen squarely into the Dunning-Kruger zone of artificial intelligence. I wasn't applying strategic thinking; I was parroting AI outputs. I wasn't managing the technology; it was managing me.</p><h3>The erosion you don&#8217;t see coming</h3><p>As my dependency on AI deepened over these two years, I noticed something disturbing: my skills weren't improving, they were deteriorating. I was getting lazier.</p><p>My writing became formulaic. My problem-solving abilities dulled. When AI failed me, I struggled to find alternatives.</p><p>And I wasn't alone. My development team, early adopters of AI-assisted coding, began struggling with manual debugging. Junior talent bypassed crucial foundational learning because AI "helped" them code until it didn't. When systems broke down, we lacked the fundamental skills to fix them quickly.</p><p>The realisation hit hard: this wasn't augmentation. It was atrophy.</p><h3>The fall is quiet, until it isn&#8217;t</h3><p>A promising startup project collapsed spectacularly under my watch in the second year of our AI journey.</p><p>They had used AI to generate an entire marketing campaign. Speed trumped scrutiny. The copy seemed flawless but hidden in the fine print were false claims about sustainability. It wasn't deliberate deception, just inaccurate information the AI had fabricated. The startup faced devastating accusations of greenwashing and was eviscerated on social media.</p><p>It happened because they trusted too completely. Because they made assumptions. Because they treated AI as an infallible partner instead of a tool requiring oversight.</p><p>Icarus had fallen and the sea was mercilessly cold.</p><h3>The climb back requires humility</h3><p>After nearly two years of accumulating failures, I went back to fundamentals. I rebuilt workflows around rigorous human review. I created systematic checklists to audit AI outputs. 
I trained my team to think like Daedalus: skeptical, methodical, grounded in reality.</p><p>We established new principles:</p><ul><li><p>Every AI output is a draft, not a deliverable.</p></li><li><p>Every confident response demands verification.</p></li><li><p>Every shortcut carries a hidden cost.</p></li></ul><p>We launched internal AI literacy sessions. We developed warning systems that flagged potential issues: "This recommendation may contain hallucinations" or "This output requires security review."</p><h3>Ethical grounding in a frictionless world</h3><p>We implemented robust systems to properly attribute AI-generated content, not just for legal compliance but for ethical clarity. Clients deserve transparency about when a logo, copy, or code was influenced by AI. And original creators deserve credit, not just compensation.</p><p>We established mandatory human-in-the-loop protocols for sensitive AI applications in medical and financial contexts. No more blind trust. No more black-box dependencies. We learned to ask tougher questions and challenge assumptions. We made questioning AI outputs a core part of our onboarding: learn to use AI effectively but also learn to recognise when it's wrong.</p><p>By identifying flaws in AI, we gradually rebuilt confidence in our human judgment.</p><h3>The myth, the machine and the mirror</h3><p>Greek myths weren't mere stories. They were warnings. Icarus wasn't foolish, he was seduced. Just as we are today. By speed. By scale. By the illusion of effortless perfection.</p><p>But AI isn't flight. It's fire. Used wisely, it illuminates and warms. Used recklessly, it consumes everything.</p><p>AI didn't make me a better professional. Hitting rock bottom did. And climbing back out made me wiser.</p><p>These past two years of AI experimentation taught me more from failure than success ever could. The lessons were painful but necessary.</p><p>Don't be Icarus. Be Daedalus. Build wings that bend, not break. 
And never, ever stop questioning the machine.</p>]]></content:encoded></item><item><title><![CDATA[Deep Dive: “I’m not building AGI, but I’m still enabling it”]]></title><description><![CDATA[Listen now | A candid breakdown of the Human and Machine article challenging our quiet role in the AI acceleration game.]]></description><link>https://www.humanandmachine.com/p/deep-dive-im-not-building-agi-but</link><guid isPermaLink="false">https://www.humanandmachine.com/p/deep-dive-im-not-building-agi-but</guid><dc:creator><![CDATA[Dario D’Aprile]]></dc:creator><pubDate>Mon, 07 Apr 2025 14:15:16 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/160781731/d579a94892a5ade089312f73495244c6.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h3><strong>Show Note:</strong></h3><p>This AI generated podcast episode is a deep dive into the article <em>&#8220;I&#8217;m Not Building AGI, But I&#8217;m Still Enabling It&#8221;</em>, published yesterday on <em>Human and Machine.</em></p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;be6dd444-4e99-471c-8085-d84e4efe27db&quot;,&quot;caption&quot;:&quot;Let me be clear: I&#8217;m not in a lab training frontier models. I&#8217;m not scaling architectures or leading alignment research. I&#8217;m not building Artificial General Intelligence (AGI). But I&#8217;m still involved.&quot;,&quot;cta&quot;:null,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;I&#8217;m not building Artificial General Intelligence but I&#8217;m still enabling It&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:1239909,&quot;name&quot;:&quot;Dario D&#8217;Aprile&quot;,&quot;bio&quot;:&quot;Helping product leaders scale, integrate AI and drive product innovation. 
&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/80117675-d18b-4831-a092-85c2e60dd756_461x461.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2025-04-06T13:37:32.650Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f98a447-7ce1-4494-9f35-c27e1ad408c4_1024x608.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.humanandmachine.com/p/im-not-building-artificial-general&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:160701341,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:8,&quot;comment_count&quot;:2,&quot;publication_id&quot;:null,&quot;publication_name&quot;:&quot;Human and Machine&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3d44a7e-db47-493b-a94b-d43972d58185_422x422.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>It unpacks the core tension: most of tech professionals aren&#8217;t building AGI directly but they are part of the ecosystem that&#8217;s making it inevitable. 
From AI integration to enterprise deployment, they are accelerating adoption without always interrogating what they&#8217;re enabling.</p><p>It also breaks down DeepMind&#8217;s blog on &#8220;a responsible path to AGI&#8221; and explores what the article gets right by comparing this moment to the social media era: another technology society thought it could control.</p><p>If you&#8217;re in AI product, strategy, research or implementation, this episode is for you.</p><p><br>&#127911; <strong>Next up: &#8220;The Playbook for AI Practitioners Who Don&#8217;t Want to Be Bystanders&#8221;</strong></p>]]></content:encoded></item><item><title><![CDATA[I’m not building Artificial General Intelligence but I’m still enabling It]]></title><description><![CDATA[Personal reflection on DeepMind&#8217;s latest AGI updates and systemic blind spots]]></description><link>https://www.humanandmachine.com/p/im-not-building-artificial-general</link><guid isPermaLink="false">https://www.humanandmachine.com/p/im-not-building-artificial-general</guid><dc:creator><![CDATA[Dario D’Aprile]]></dc:creator><pubDate>Sun, 06 Apr 2025 13:37:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!uOxY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f98a447-7ce1-4494-9f35-c27e1ad408c4_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!uOxY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f98a447-7ce1-4494-9f35-c27e1ad408c4_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!uOxY!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f98a447-7ce1-4494-9f35-c27e1ad408c4_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!uOxY!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f98a447-7ce1-4494-9f35-c27e1ad408c4_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!uOxY!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f98a447-7ce1-4494-9f35-c27e1ad408c4_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!uOxY!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f98a447-7ce1-4494-9f35-c27e1ad408c4_1024x608.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!uOxY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f98a447-7ce1-4494-9f35-c27e1ad408c4_1024x608.png" width="1024" height="608" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2f98a447-7ce1-4494-9f35-c27e1ad408c4_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!uOxY!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f98a447-7ce1-4494-9f35-c27e1ad408c4_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!uOxY!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f98a447-7ce1-4494-9f35-c27e1ad408c4_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!uOxY!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f98a447-7ce1-4494-9f35-c27e1ad408c4_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!uOxY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f98a447-7ce1-4494-9f35-c27e1ad408c4_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"></figcaption></figure></div><p>Let me be clear: I&#8217;m not in a lab training frontier models. I&#8217;m not scaling architectures or leading alignment research. I&#8217;m not building Artificial General Intelligence (AGI). But I&#8217;m still involved.</p><p>I advise companies on how to use AI. I help teams integrate models into products and systems. I shape strategies that bring artificial intelligence closer to people, decisions and markets. </p><p>While I&#8217;m not building AGI, I&#8217;m helping pave the road to it and lately, that role has started to weigh on me; not because I think AGI is inherently dangerous but because I&#8217;ve seen this movie before and I know how easily good intentions get swallowed by systemic incentives.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.humanandmachine.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><h2><strong>This isn&#8217;t the first time we got carried away</strong></h2><p>Social media started with a similarly ambitious promise. We were going to connect the world. 
Flatten hierarchies. Give everyone a voice. Create more informed societies.</p><p>We got a very different reality. Instead of connection, we got polarisation. Instead of empowerment, we got platform dependency. Instead of truth, we got engagement-optimised misinformation. We got all of it fast, before anyone could meaningfully intervene.</p><p>Social media didn&#8217;t need general intelligence to reshape the fabric of society. It just needed an incentive structure that rewarded reach over rigour, virality over veracity, optimisation over understanding. Now, as we inch toward AGI, we&#8217;re applying that same logic to something far more powerful.</p><div><hr></div><h2><strong>DeepMind&#8217;s &#8220;Responsible Path&#8221; feels familiar</strong></h2><p>Recently, DeepMind published a blog titled <em>&#8220;<a href="https://deepmind.google/discover/blog/taking-a-responsible-path-to-agi/">Taking a responsible path to AGI.</a>&#8221;</em></p><p>It was thoughtful. It was well-articulated. It checked all the right boxes: transparency, safety, long-term benefit, collaboration.</p><p>But it also followed a script we&#8217;ve heard before.</p><ul><li><p>Acknowledgement of risks</p></li><li><p>Confidence in internal safeguards</p></li><li><p>Optimism that the benefits will outweigh the harms</p></li><li><p>Commitment to responsibility defined internally</p></li></ul><p>It mirrors how social platforms once framed their impact: the belief that thoughtful actors could steer exponential systems through complexity, that if you had the right values and enough research, the outcomes would stay aligned. 
We&#8217;ve learned that values alone are not enough when the system rewards speed, scale and dominance&#8230; AI is following the same curve.</p><div><hr></div><h2><strong>I&#8217;ve told myself a comfortable story</strong></h2><p>In my work, I often use terms like:</p><ul><li><p><em>Responsible integration</em></p></li><li><p><em>Ethical AI adoption</em></p></li><li><p><em>Scalable, human-aligned AI</em></p></li></ul><p>They sound good. They feel right but increasingly, I&#8217;m asking: what do they really mean?</p><p>Because here&#8217;s the uncomfortable truth:</p><ul><li><p>I&#8217;ve helped deploy systems I didn&#8217;t fully understand</p></li><li><p>I&#8217;ve enabled abstraction layers that obscure model behaviour</p></li><li><p>I&#8217;ve accelerated adoption without always questioning the broader consequences</p></li></ul><p>That doesn&#8217;t make me reckless but it does make me part of the feedback loop. The more we build, deploy and normalise AI, the harder it becomes to slow down, even if we later discover we should have.</p><div><hr></div><h2><strong>What social media taught us </strong></h2><p>Social media didn&#8217;t fail because its founders were malicious. It failed because its incentives (scale, engagement, growth) overpowered intent. The same thing is happening in AI.</p><p>We&#8217;re pushing systems into healthcare, education, finance, hiring: domains where stakes are high, complexity is deep and unintended consequences compound. Now we&#8217;re talking about building AGI: a system that, by definition, can outperform humans in a broad range of cognitive tasks.</p><p>If <em>basic recommendation engines</em> broke reality&#8217;s consensus, what happens when cognition becomes a commodity? 
What happens when decision-making gets outsourced: not just to software but to agents trained to optimise in ways we can&#8217;t audit?</p><p>The question isn&#8217;t just &#8220;what could go wrong?&#8221; It&#8217;s &#8220;why would this go right, given everything we&#8217;ve already seen?&#8221;</p><div><hr></div><h2><strong>DeepMind&#8217;s framing isn&#8217;t malicious, it&#8217;s incomplete</strong></h2><p>DeepMind&#8217;s post is earnest. I believe they care. I believe many labs do. But sincerity is not a safeguard.</p><p>What&#8217;s missing from their vision is constraint, structural constraint. The kind that doesn&#8217;t just say &#8220;we&#8217;re responsible&#8221; but actually slows things down. Introduces friction. Accepts that winning might not be the goal.</p><p>Instead, the blog outlines principles without detailing enforcement. It speaks of benefits without addressing power asymmetries. It uses words like &#8220;science-led&#8221; and &#8220;bold responsibility&#8221; but doesn&#8217;t name the forces that will pressure even the most thoughtful lab to move faster than it should.</p><p>We don&#8217;t need AGI to reach existential risk; the pre-AGI world is already full of challenges we&#8217;ve failed to contain: surveillance, bias, misinformation, disempowerment&#8230; all of these are tractable, none of these are solved. So what exactly are we building on top of?</p><div><hr></div><h2><strong>Where this leaves me</strong></h2><p>I&#8217;m not anti-AI. I believe it has transformative potential. I&#8217;ve seen it unlock real value. I&#8217;ve helped make it useful, accessible, scalable. But I can&#8217;t pretend the direction of travel is benign by default and I can&#8217;t hide behind &#8220;not building AGI&#8221; as if that absolves me of responsibility.</p><p>The ecosystem is interconnected. 
The choices we make as practitioners (what we deploy, what we abstract, what we normalise) feed into the momentum that shapes the entire field.</p><p>And that means I need a new lens. A new way of thinking about responsibility that isn&#8217;t just about good intentions or model safety, but about <em>systems</em>, <em>incentives</em> and <em>long-term consequences</em>.</p><div><hr></div><h2><strong>What comes next</strong></h2><p>I&#8217;ve started writing a new playbook for myself. Not a moral essay, but a practical set of principles for how I want to operate in this space going forward. How to make decisions with integrity in a system that doesn&#8217;t make that easy. How to ask better questions. How to say no. How to slow down where it matters. How to shift from enabler to steward.</p><p>That playbook isn&#8217;t finished yet but it&#8217;s coming.</p><p>If you&#8217;re reading this, chances are you&#8217;re somewhere in the AI value chain too. And even if you&#8217;re not building AGI directly, you&#8217;re helping shape the conditions in which it arrives. So I hope you&#8217;ll consider writing your own version of that playbook.</p><p>&#8594; <strong>In the next part of this series:</strong> <em>&#8220;The playbook for AI practitioners who don&#8217;t want to be bystanders&#8221;</em><br>I&#8217;m not sure when I&#8217;ll be ready to publish it but let me know if you want an early draft.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.humanandmachine.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Human and Machine! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item></channel></rss>