HumanWORKS

Tag: writing

  • You aren’t reading enough, and you definitely aren’t thinking enough – so read and think about this

    I’m reading Henry Fairlie this weekend. Bite the Hand That Feeds You – collected essays from one of the sharpest provocateurs the English language produced, and a man whose photograph on the cover alone – cigarette in hand, glasses slightly askew, typewriter lurking in the foreground like an accomplice – communicates something about the relationship between a writer and their craft that no amount of productivity guru content has ever come close to replicating.

    (The typewriter is doing real work in that image. It isn’t decorative. It is the instrument through which the provocations were forged, and there is something quietly honest about having it visible – no pretence that the words simply materialised from some frictionless creative ether. They were hammered out. Key by key. Which is, when you think about it, rather the point of what follows.)

    Those of you who know my influences will know that Christopher Hitchens occupies a significant position in how I approach both writing and argument. Not because Hitch was provocative – though he demonstrably was – but because his provocation was deployed with genuine intellectual scaffolding beneath it, which is a distinction that matters enormously and that most people confuse with volume. You don’t awaken someone from the torpor of collective slumber with a gentle suggestion. You use a bucket of cold water. The trick – and it is a trick, albeit requiring genuine craft – is ensuring the bucket contains substance rather than merely noise.

    Fairlie understood this. Hitch understood this. And in an age where we have outsourced the generation of text to systems that are, by any honest assessment, genuinely impressive at producing words whilst being fundamentally incapable of the thing that makes words matter, understanding this distinction has become rather more urgent than it was when Fairlie was bashing away at his typewriter.

    the agent provocateur’s actual job, or why being uncomfortable is the point

    Fairlie’s polemics were, I suspect, constructed partly for effect – closer in spirit to the work of an edgy comedian than to some earnest manifesto designed to reshape civilisation overnight. There is nothing wrong with that assessment, as far as it goes. But it undersells what he was doing, because it misunderstands what the effect actually is.

    Here’s the thing.

    Understanding how to construct an argument – not merely to have an opinion, which is approximately as difficult as breathing and roughly as intellectually demanding – but to deploy that opinion with knowledge, precision, and persuasive architecture that forces the reader to genuinely engage rather than simply scroll past – is one of the foundational skills of anyone who wants to make a real impact on anything beyond their immediate surroundings.

    I learned this in amateur debating societies, where the single most valuable lesson was not how to win an argument but how to understand the opposing position well enough to articulate its strongest case – and then use that knowledge to dismantle it.

    Those who cannot do this aren’t debating. They’re performing. The distinction matters because performance can be detected, dismissed, and scrolled past in approximately 0.3 seconds. Genuine argument – the kind that actually lands – requires the reader to do cognitive work. It requires friction. By comparison, spouting rhetoric – that pervasive performance which many mistake for an actual substitute for argument, rather than the piss-poor presentation of idiocy it is – is not debating at all.

    Fairlie understood this instinctively. His essays don’t simply assert positions – they construct them with enough rigour and enough provocation that the reader finds themselves genuinely wrestling with the ideas rather than passively absorbing them. The discomfort is not a bug. It is, in the most literal sense, the mechanism by which thinking actually occurs.

    (And yes, I recognise the recursion here – I am arguing, via essay, about why essays matter, whilst simultaneously doing the thing I’m describing. My therapist, Becky, would note this with a raised eyebrow and the observation that “Matt is doing the recursive analysis thing again.” She would be correct. The recursion never stops. Welcome.)

    the cognitive friction complex™ (or a lack thereof)

    We live in a moment of extraordinary and largely unexamined paradox regarding information and capability. We have, quite literally, more collective knowledge accessible through our fingertips than at any previous point in human history. Simultaneously – and this is the part that deserves rather more attention than it currently receives – the tools now available to generate text on our behalf have created an environment where the process of engaging with ideas is increasingly being outsourced to systems that, whilst impressive in throughput, cannot replicate the cognitive friction that actually changes how you think.

    This matters more than most people appreciate. Considerably more.

    The ability to cultivate not merely awareness of information but the capacity to use it effectively – to construct arguments, to identify the weak points in positions that appeal to us, to hold genuinely opposing views in tension without immediately dismissing them as wrong because they’re uncomfortable – is a skill that degrades with disuse. It is, in this sense, rather like physical fitness. Nobody loses the capacity to run by deciding not to run once. The degradation is gradual, imperceptible, and by the time you notice it, you’ve lost ground you didn’t know you were standing on.

    Erudition – whether formally acquired or built through the kind of autodidactic discipline that involves actually sitting with difficult texts until they yield rather than asking an LLM to summarise them – isn’t a luxury. It’s infrastructure. Cognitive infrastructure, specifically, and infrastructure that societies require to function at anything beyond the level of collective reflex.

    Now, here’s where it gets interesting. And by “interesting” I mean “slightly existentially destabilising if you follow the thread far enough, which I obviously intend to do.”

    (Stay with me.)

    what an LLM actually does, and what it doesn’t

    An LLM – a large language model, for those who have somehow avoided the last three years of breathless discourse on the subject – is, at its core, an extraordinarily sophisticated pattern-matching system. It has consumed vast quantities of human-generated text and learned to predict, with remarkable accuracy, what sequence of tokens is most likely to follow any given input.

    This is genuinely impressive. I say this without irony or false modesty on behalf of the technology. The statistical inference involved is staggering, and the outputs are frequently useful, occasionally insightful, and – in the right hands – genuinely productive.
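
    (For the technically curious, here is a toy illustration of that prediction mechanism in miniature – a bigram counter in Python, which is to a modern LLM roughly what a paper aeroplane is to a 747, but the core task of predicting the next token from what came before is the same. The corpus and function names are mine, invented purely for illustration.)

    ```python
    from collections import Counter, defaultdict

    # A deliberately tiny illustration of next-token prediction: count which word
    # follows which in a corpus, then always emit the most frequent follower.
    # A real LLM uses billions of learned weights rather than a lookup table, but
    # the underlying task – predict the next token from the preceding context – is the same.

    corpus = "the cat sat on the mat and the cat slept".split()

    followers = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        followers[current][nxt] += 1

    def predict_next(word: str) -> str:
        # Pick the statistically most probable next word. No understanding involved.
        return followers[word].most_common(1)[0][0]

    print(predict_next("the"))  # -> "cat", because that is what most often follows "the" here
    ```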

    Here is what an LLM does not do.

    It does not think. Not in the sense that Fairlie thought when constructing his provocations, or that Hitchens thought when dismantling an opponent’s position with surgical precision. It does not experience the cognitive friction of encountering an idea that genuinely challenges its existing framework – because it has no existing framework in the sense that you or I possess one. It has statistical weights. These are categorically different things, in much the same way that a photograph of a fire is categorically different from an actual fire, despite being visually recognisable as one.

    (The photograph will not warm your hands. The LLM will not change your mind. Both will give you the impression of the thing whilst being, in some fundamental sense, the total absence of it.)

    What an LLM produces when asked to write an essay is therefore not an essay in the sense that Fairlie wrote essays, nor in the way that Hitch did, nor in the way that I do.

    Instead, it is a statistically probable approximation of what an essay looks like – the textual equivalent of a very convincing forgery. Smooth, competent, occasionally even elegant. Entirely devoid of the thing that made the original worth reading in the first place. It’s technologically driven sophistry with the depth of a puddle.

    The thing being: a consciousness grappling with something it found genuinely difficult, and producing language as a byproduct of that grappling.

    which brings us back to the question of what reading actually does

    Here is an uncomfortable observation that I have been turning over for some time, and which Fairlie’s essays have crystallised rather neatly.

    When you read a genuinely provocative essay – one constructed by a mind that was actually wrestling with the ideas it presents – something happens in your own consciousness that is categorically different from what happens when you read competent but friction-free text. Your assumptions get disturbed. Your pattern-matching gets interrupted. You are forced, briefly but genuinely, to consider a perspective you hadn’t previously entertained, and the cognitive effort of doing so leaves a trace.

    This is not metaphor. This is, in the most literal neurological sense, how minds change. Not through passive absorption of information – which is what scrolling, summarising, and LLM-assisted reading largely provides – but through active engagement with ideas that resist easy consumption.

    Sometimes people consider my non-dualistic thinking to be the rough equivalent of getting splinters in my arse as I sit on the fence. In reality, it’s nothing like that – it’s simply an openness to let in the message of what is actually being said, even when new data makes the ego feel vulnerable, precisely because we should seek to challenge what we think with the tools of finding out what is right.

    In short, to learn you have to accept the reality that you may be wrong and move on from that rather than entrenching yourself in a position. It’s deeply uncomfortable, stirs up emotion, and is prone to make you wonder what’s going on – arguably the opposite of what our increasingly intellectually soporific state offers as the easy option.

    Sometimes you need a wake up call. Fairlie’s essays resist easy consumption with the subtlety of a sledgehammer to the temple. As essays, they are deliberately constructed to create an impact. The provocation isn’t decoration – it’s the mechanism of delivery. The discomfort is the point of entry telling you to wake the fuck up.

    (Which raises a question that I find genuinely fascinating, and which I’ll pose here before I disappear down the rabbit hole it opens – which, knowing my brain, I absolutely will: if the value of an essay lies not in the information it contains but in the cognitive friction it generates in the reader, then what happens to that value when the reader outsources the reading to a system that experiences no friction whatsoever? The information survives. The transformation does not. And it is the transformation that was ever the point. In short, we end up with well-written but pointless AI photocopies of thinking whilst thinking goes the way of the dodo.)

    the attention span question, handled honestly for once

    The conventional narrative about attention spans runs something like this: they’re shrinking, long-form content is dying, the future belongs to thirty-second video clips and algorithmically optimised dopamine delivery systems designed by people whose own attention spans are, presumably, slightly longer than the products they’re creating.

    This narrative is partially true and almost entirely beside the point.

    Yes, the average attention span appears to be contracting – though one might reasonably question whether it ever existed in the unified form we nostalgically imagine, or whether we’ve simply become more honest about the distribution. The person genuinely engaged with something they care about will still read five thousand words. They always have. What’s changed isn’t human cognitive capacity but the competition for the first thirty seconds of attention before someone decides whether a piece of writing deserves the effort of genuine engagement.

    (If you like my work, you’ll take the time to appreciate it. Others will bounce at just seeing the word count, and that’s OK too – although I’d argue that they need to find topics they find sufficiently interesting to keep their own attention spans healthy, without implying my work is going to be for everybody. By design it explicitly isn’t, and it is designed to create discomfort in much the same way as my mate Tom’s gut reacts to mushrooms, albeit with less toilet-based carnage.)

    The real question – and this is the one that actually matters – isn’t whether long-form writing will survive as a format. It’s whether the capacity to engage with it will survive in sufficient numbers to maintain the intellectual commons that civilisations actually require to function.

    This isn’t abstract philosophising. This is a structural question about the cognitive infrastructure of societies.

    Fairlie’s essays represent exactly the kind of material that either sharpens one’s capacity or reveals the absence. There is no middle ground with genuinely provocative writing. You either engage with the argument and find yourself thinking differently afterwards – which is to say, you find yourself changed, however slightly – or you bounce off it immediately because the cognitive infrastructure required to absorb the friction simply isn’t there.

    The uncomfortable bit follows.

    The capacity to absorb that friction – to sit with an argument that challenges you, to resist the impulse to dismiss it because it’s disagreeable, to actually do the work of understanding why an intelligent person might hold a position you find uncomfortable – is itself a skill. A skill that requires practice. A skill that atrophies without it.

    Essays are one of the primary instruments through which that practice occurs.

    the uncomfortable implication, or what fairlie actually teaches you in 2026

    Here’s what reading Fairlie in 2026 actually teaches you, stripped of nostalgia for a different era of political discourse and stripped, equally, of any romanticised notion that things were better when writers bashed away at typewriters whilst smoking in black and white photographs.

    (I will, unashamedly, claim my preference for one of Hitch’s favourite drinks – Johnnie Walker’s Amber Restorative – but acknowledge that as a marker of my Gen X/millennial vintage, whereas many younger people will see such a tipple as the equivalent of chain-smoking Marlboros in the 1970s.)

    Getting back to the study of essays, it teaches you that the ability to write well about something – to construct prose that forces genuine intellectual engagement rather than merely confirming what the reader already believes – is vanishingly rare, increasingly undervalued, and arguably more important now than at any previous point in history.

    Not because we lack information. We are drowning in information. It’s literally everywhere and injected into your eyeballs at ever increasing speeds.

    Not because we lack the tools to generate competent text. We have more of those than ever.

    Because we are, as a civilisation, systematically undermining the very cognitive capacity that makes information meaningful – the capacity to be changed by it. This in particular is the Achilles heel of modern LLMs – they are architecturally designed to kiss your arse so hard it may leave a mark. Essays, by contrast, tend to leave a mark of intellectual whiplash when they are deployed correctly.

    Instead, we have unprecedented tools for generating text. We have, comparatively speaking, a dwindling investment in developing the human capacity to think through text rather than merely consume it. The essays of someone like Fairlie represent the product of a mind that consumed text voraciously and thought through it with genuine craft – a mind that understood, whether consciously or instinctively, that the value of writing lies not in what it tells you but in what it does to you.

    (And here, if I’m being honest – which I am, because this is a version of an essay I’m putting on my website and not LinkedIn given the whole point of this platform is that I don’t have to pretend otherwise – I should note that writing this essay has done precisely that to me. It has forced me to articulate something I’d been circling for months without quite landing on. The cognitive friction works in both directions. The writer is changed by the act of writing, and the reader is changed by the act of reading, and neither transformation is possible without genuine resistance. Without difficulty. Without the uncomfortable sensation of ideas that don’t slide smoothly into place.)

    If you value that kind of intellectual friction – the productive discomfort of encountering an argument that genuinely challenges your assumptions – Bite the Hand That Feeds You is well worth your weekend. The political context is historical, certainly. The underlying skill on display – how to make someone actually think – is timeless. Although one might argue that the desire to challenge the political status quo is needed now more than ever.

    That skill of writing is worth studying. Worth practising. Worth protecting from the comfortable assumption that competent text generation is the same thing as meaningful writing. It isn’t – and I’ll strongly argue it never will be.

    LLM sophistry is not an essay and it isn’t designed to provoke. The difference between writing content and actually changing opinions through discomfort might be one of the more important distinctions of the next decade.

    The world moves forward. How we choose to respond is in our hands.

    Do me one favour, ideally before we collectively forget how to think.

    Read the fucking book.

  • Did You Choose to Click This Link? A Systems Thinker’s Guide to AGI, Consciousness, and the Illusion of Agency

    Let me start this conversation with a question – what made you choose to click the link that led you here?

    I ask this because the thoughts you generate in response will help inform a few things: how you navigate the article itself, your views about AGI, and potentially your existential view of yourself as a human being, given your analogous construction within our material reality.

    (Already I can feel the LinkedIn crowd reaching for the back button. Stay with me.)

    Per the brief summary posted with this article itself, the topic today is AGI which, despite the current hype cycle – a hype cycle that has somehow managed to make cryptocurrency speculation look measured and responsible – is a large step beyond what we currently have in operation. Let me give advance warning that this article will take a good many steps on its journey. It isn’t particularly complicated as a topic, but because of the nature of explaining the details of AI in simple layperson terms – a point I’ll recursively get to later in the story, much like everything else in my life – it is fairly long.

    When you decided to click this link, there was likely some logic that led you to it. Perhaps you’ve read my other work. Perhaps you liked the headline. Maybe your thumb slipped whilst doom-scrolling at 2am, searching for something to make the existential dread feel more intellectually respectable.

    Either way, you made a decision.

    Or did you?

    The Stochastic Parrot and the Excel Spreadsheet

    Much of recent discourse has talked about the creation of a human-like intelligence – AGI – or even the superintelligence version (also called AGI or ASI, because nothing says “we understand what we’re building” like inconsistent nomenclature).

    This future is still unknown, so let’s take a brief wander through the past to understand the present before we hypothesise about the future. There’s a sentence that would make my meditation teacher proud, albeit one that sounds suspiciously like a productivity guru’s attempt at profundity.

    Present thinking about AI architecture talks about what is called an MoE or “Mixture of Experts” model – the evolution of the even “narrower” (single-context) AI designs that preceded the current architecture.

    If you’ve been using consumer AI tools like most people have for the last few years, you’ll remember the articles asking how many Rs there are in “strawberry” – a question that presumably kept Anthropic’s engineers awake at night in ways that Descartes could only dream of – or have memories of hellish visuals from early Stable Diffusion image frameworks, which couldn’t work out how many fingers a person should have, or which way their limbs should bend. Video generation is still challenging because the AI is not actually creating a universe when it creates a video – it is creating a series of static images that are not internally consistent.

    Rather like the present approach to global policy, come to think of it.

    Before I disappear down a rabbit hole of complex topics – a hazard of being someone whose brain treats tangential exploration as a fundamental human right – it will help if I explain a few concepts first.

    Some of these will include analogies that explain viscerally understood aspects of your physiology in ways that don’t involve a deep understanding of mathematics and how systems work. Doing this will help me too – the false belief I harboured for four and a half decades that “everything I know is obvious” has been thoroughly shattered following my autism and ADHD diagnoses. It turns out that most people aren’t systems thinkers like me, which also explains why mathematics and computer science have ended up feeling obvious to me.

    (I say “feeling obvious” with the full awareness that nothing about my neurodivergent experience of reality could reasonably be described as “obvious” to anyone, including myself most mornings before coffee.)

    A Brief Taxonomy of Digital Incompetence

    So, to the explanation.

    With AI, there are many “narrow” tools which work together within the MoE model I just mentioned.

    Put simply, each “tool” occupies a singular or at least focused role in the creation of an output. Some tools excel at identifying text but can’t do basic mathematics – a phenomenon I call the “Arts Graduate Problem” despite having friends who would rightfully slap me for the generalisation. Others can generate images based on large volumes of training information, but can’t speak any more coherently than a colleague who just sank half a bottle of whisky after the Christmas party.

    So individually the tools are useful, but only to a point – in much the same way as your stomach acid plays a vital role in digestion, but is probably not the tool you’d reach for to resolve a complex emotional dispute at work. Although I’ve met managers who seemed to be attempting precisely that approach. It’s not a question of bad or good – it’s a question of fit for purpose or “wrong tool, wrong place”.

    The aforementioned MoE model seeks to use multiple tools to collaborate with one another to achieve a better outcome. Think of it as Continuous Collaborative Optimisation™ – a term I’ve just invented with the requisite trademark symbol to give it the veneer of legitimacy that modern management consultancy demands.
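
    For those who want to see the shape of the idea rather than just the metaphor, here is a deliberately crude sketch of expert routing in Python. The experts, keyword scoring, and routing logic are invented purely for illustration – a real MoE gate is a learned neural network, not a keyword counter – but the structure is the point: score the experts, route the request.

    ```python
    import math

    # A crude sketch of the Mixture-of-Experts idea: a "gate" scores how relevant
    # each narrow expert is to the incoming request and routes to the best one.
    # Experts, scoring, and routing are invented purely for illustration.

    EXPERTS = {
        "maths_expert":   lambda q: f"[exact calculation for: {q}]",
        "writing_expert": lambda q: f"[polished prose about: {q}]",
        "vision_expert":  lambda q: f"[description of the image in: {q}]",
    }

    def gate_scores(query: str) -> dict:
        raw = {
            "maths_expert":   query.count("calculate") + query.count("+"),
            "writing_expert": query.count("write") + query.count("essay"),
            "vision_expert":  query.count("image") + query.count("picture"),
        }
        # Softmax turns raw scores into a probability-like weighting across experts.
        exponentiated = {name: math.exp(score) for name, score in raw.items()}
        total = sum(exponentiated.values())
        return {name: value / total for name, value in exponentiated.items()}

    def answer(query: str) -> str:
        scores = gate_scores(query)
        chosen = max(scores, key=scores.get)  # route to the single best-scoring expert
        return EXPERTS[chosen](query)

    print(answer("please calculate 17 + 25"))  # routed to the maths expert
    ```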

    One of the simplest versions of this can explain why, when you ask ChatGPT to do some mathematics as well as writing, you may well get an answer that is more coherent than it previously was – although it’s fair to say that checking the output is similarly recommended, unless you’re working on an Australian government delivery project and you’re an employee of one of the Big Four.

    (Sorry. Not sorry. Some wounds need salt occasionally.)

    The Rote Learning Trap, or Why Your Times Tables Teacher Was Right (Mostly)

    When an LLM creates tokens – the components of any text-based output it produces when you ask it a question – what its stochastic parrot trick (sophisticated mimicry, in layperson’s terms) misses is that you can’t infer mathematics beyond the basics using rote learning alone as a foundation.

    Looking at learning itself for a second – and here we enter territory that my therapist Becky would recognise as “Matt doing the recursive analysis thing again” – rote learning plays a part in all learning for many, but inference from the system is where the real “learning” happens. Our best example here is the times tables that you and I will have learned as young children.

    For some, it was about repeating twelve learned lists of twelve items each, but understandably that generates very little value beyond being able to recall which numbers multiply together up to the value of 144.

    AI had similar issues when it was learning from text-based datasets – and scaling a rote-based learning method generates several problems that make it both inefficient and, realistically, ineffective.

    With this rote method, finding the answer to 3144325 × 4152464 solely through an LLM’s style of learning would require either training data in which that exact answer is spelled out in text (very unlikely, and useless for the next seven-digit by seven-digit calculation) or a massively inefficient amount of data processing to cover every variation of every question up to and including that calculation.

    Storage would be enormous – every calculation would have to be explained in English and absorbed, the resulting content would be massive, and responses would be comparatively slow due to the computational expense and inefficiency.

    This is the computational equivalent of trying to memorise the entire telephone directory when what you actually need is the ability to dial a number.

    Hopefully, when you learned the times tables, you worked out the patterns that exist within mathematics – digit sums in multiples of 9, modulo inferences from repeating cycles as numbers increment, and so on.

    If you did, you’re likely good at maths. If you didn’t? Well, thankfully we have calculators, Excel, and a job in middle management, right?
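
    To make the “patterns over rote” point concrete, here is a small Python sketch of the digit-sum rule for multiples of nine – one inferred rule standing in for an endless lookup table. The examples are mine, chosen purely for illustration.

    ```python
    # The digit-sum rule for multiples of nine: you don't memorise every multiple,
    # you learn the pattern that their digits always collapse (eventually) to 9.

    def digital_root(n: int) -> int:
        # Repeatedly sum the digits until a single digit remains.
        while n >= 10:
            n = sum(int(digit) for digit in str(n))
        return n

    for multiple in [18, 81, 144, 729, 9 * 12345]:
        print(multiple, "->", digital_root(multiple))

    # Every multiple of 9 collapses to 9 – one small inferred rule replacing an
    # endless lookup table, which is exactly the leap a purely rote learner never makes.
    ```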

    The Dungeon Master’s Guide to Computational Architecture

    Anyway, getting back to our story, AI had a massive “I’m bad at maths” problem which needed solving.

    So what did engineers do? They thought “we can already calculate things using the binary that computers run on” and effectively leveraged tools such as Python to hand off “the counting bit” to a tool that could do that better than the talking bit.

    Constructing even the seven-digit by seven-digit calculation in binary may involve a lot of digits, but it’s a hell of a lot faster than trying to memorise every variation of every prior calculation – instead, all that happens is that the answer gets requested and generated in less than the blink of an eye.
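
    A minimal sketch of that handoff follows, assuming a toy set-up of my own invention rather than any particular vendor’s tool-calling API: the “talking bit” spots arithmetic it shouldn’t guess at and delegates it to ordinary deterministic code.

    ```python
    import re

    # A minimal sketch of the "hand the counting bit to a tool" pattern. Real systems
    # do this via function/tool calling between the model and a sandboxed interpreter;
    # this standalone toy just spots a multiplication and delegates it.

    def calculator_tool(left: int, right: int) -> int:
        # The "counting bit": exact arithmetic, done by ordinary code rather than recall.
        return left * right

    def talking_bit(question: str) -> str:
        # Stand-in for the language model: it recognises arithmetic it should not
        # guess at, hands the numbers off, and wraps the exact result back into prose.
        match = re.search(r"(\d+)\s*x\s*(\d+)", question)
        if match:
            result = calculator_tool(int(match.group(1)), int(match.group(2)))
            return f"The answer is {result}."
        return "Let me write you a lovely paragraph instead."

    print(talking_bit("What is 3144325 x 4152464?"))
    # -> The answer is 13056696366800.  Exact, instant, and no rote memorisation required.
    ```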

    Rather than disappearing into the idiosyncrasies of computer science – and believe me, the temptation is real – I want to keep the article approachable, so this is where I’ll lean into a few analogies. Both direct physical ones that relate to your body, but also ones that relate to my personal experiences which are hopefully relatable for you.

    When I was a young boy, I used to play the game Dungeons and Dragons – much like many other adolescents of my era. The concept is that each person playing roleplays a character with a narrow focus: the brutish fighter who can likely slay the dragon but can’t talk their way out of a fight; the mage who can conjure up the power of the elements but is fragile if the wrong sort of wind hits them hard; and the cleric who is great at healing others but who isn’t much use at healing if they happen to be dead.

    The thing that made D&D interesting was the need to work together as a group. There was no singular way to solve the problem – it was about “right tool, right job”.

    (Also there were crisps involved, and Coke – Coca Cola in case of any inferred ambiguity – and the kind of adolescent social dynamics that would make for excellent therapy material decades later. But I digress.)

    Coming back to the human body, the components from which we are composed follow the same “party” logic – each has evolved over many years towards a specific function that helps ensure survival. Like the party, we have eyes that can see but can’t taste, we have stomachs that can digest food but not smell, and we have a nose that can interpret olfactory data but can’t help you see if you have your eyes covered.

    In that sense, we are our own MoE system, which does raise the question – if we are just a series of interconnected systems, who is the “I” that we think of? Who is the “I” who wrote this article, and who is the “I” who is reading it?

    Ah. Now we’re getting somewhere interesting.

    The Lego House Hypothesis

    Comparatively recent neuropsychology talks of something called “emergent properties” – properties of a whole system that none of its individual components possesses on its own, yet which are inseparable from the components from which they are created. The quickest example to explain this is that of a Lego house.

    Whether you owned Lego, like Lego, or still play with it is irrelevant – you understand the premise. A series of blocks are put together and they create increasingly sophisticated structures that become other structures. Bricks become a wall. Walls become a room. Rooms become a house. Houses become a village and so on.

    The promise of a larger-scale MoE hierarchy is that there will be increasingly complex systems built from smaller components that do different things – except instead of the foundational “you count, I’ll write”, it is more likely that you will have a component that can decide which model does what: “you be the doctor, and I’ll be the artist”.

    This is very much the proto-foundation of how human beings conceptually created civilisation. If we all needed to toil in the fields, we’d do little besides farm. If we all needed to go out hunting, what happens if we’re ambushed? The village would be gone.

    So we agreed to split the tasks up. Some of these were biologically defined – human females carried the offspring until birth so they had that role structurally defined for them in the past. Males were physically stronger on average and so went out hunting.

    Societal norms and our own evolution may well have rendered some of these traditional stereotypes outdated, even moot in some cases, but they are the foundations of how we came to be here – defined by millennia of change rather than by recent psychosocial debate about whether they are morally correct or not.

    So humans are MoEs of sorts – albeit borne of far longer R&D cycles and with carbon architecture rather than silicon. We’re using different tools to help us navigate challenges that the unsuccessful peers of our distant ancestors were unable to – and so we are where we are through the process we know as evolution.

    The Halting Problem, or Why Your Computer Will Never Truly Know Itself

    Getting back to AI, there are a few barriers to AGI. One of them is the foundation of traditional computation in the present sense. AI is built on the binary logic that I explained earlier. Processors can, due to technological advancements, generate the by-products of mathematics in increasingly faster times. What might once have been unachievable by a computer the size of a room within the constraints of a human life, might now be achieved in the fraction of a second due to how computers have evolved.

    However, existing binary logic has mathematical limits in itself.

    Those of you who have studied computer science will be aware of something called “The Halting Problem”. For those who haven’t, the premise isn’t about systems crashing or entering infinite loops – it’s something far more profound. Alan Turing proved that there is no general algorithm that can examine any arbitrary program and definitively predict whether it will eventually stop (halt) or run forever.

    This isn’t a mechanical failure where everything grinds to a halt. It’s a proof of epistemological limitation – we cannot create a universal program that predicts the behaviour of all other programs. The undecidability isn’t because the machine breaks; it’s because certain questions are mathematically unanswerable within the system asking them.

    Think of it this way: no matter how sophisticated our binary logic becomes, there will always be questions about computational processes that we cannot answer in advance. We can only run them and see what happens. This mirrors our own human condition – we cannot predict our own future with certainty; we can only live it.

    (Rather pointless, really, when you think about it. Which is, of course, exactly what I’m doing. Thinking about thinking about not being able to think about what comes next. The recursion never stops. Welcome to my brain.)
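
    For anyone who prefers the argument in code, here is the classic self-referential sketch of why a universal halting oracle cannot exist. The function names are mine, and the `halts` oracle is, of course, precisely the thing Turing proved can never actually be written.

    ```python
    # The classic self-referential argument, sketched as code. `halts` is the
    # hypothetical universal oracle; Turing proved it cannot exist, which is why
    # it is left deliberately unwritten here.

    def halts(program, argument) -> bool:
        """Hypothetical oracle: True if program(argument) eventually stops."""
        raise NotImplementedError("Turing proved no general version of this can be written")

    def contrarian(program):
        # Do the exact opposite of whatever the oracle predicts about
        # running the program on itself.
        if halts(program, program):
            while True:      # oracle says it halts, so loop forever
                pass
        return "halted"      # oracle says it loops forever, so stop immediately

    # Now ask: does contrarian(contrarian) halt?
    # If the oracle answers yes, contrarian loops forever; if it answers no, it halts.
    # Either way the oracle is wrong about at least one program, so it cannot exist.
    ```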

    Given computer science is based on mathematics, and mathematics has a far longer history in itself, this isn’t the first seemingly unsolvable problem that binary logic encounters. In fact, much of broader computer science is structured around these very limitations – things such as the cryptography that keeps you safe online when you use a bank. The data involved is very challenging to crack, and kept safe by what is best termed “computational capacity over time” – if it takes 25,000 years to break the session, then your five-minute check of your balances and Direct Debits is safe.

    All is well in that context.

    Enter, stage left, quantum computing.

    Schrödinger’s Cat and the Probability Casino

    Quantum computing is a fairly recent development, based around subatomic particle states and calculations that can be derived from the states of said physics. For those who haven’t studied particle physics extensively – and I’m going to assume that’s most of you unless my readership has taken a dramatic turn toward CERN employees – the best way to explain the concept is through the well-known idea of Schrödinger’s Box.

    Schrödinger’s Box was a thought experiment whereby a theoretical cat was locked in a theoretical box with a theoretical radioactive substance which might, at some unknowable point, decay and kill the cat.

    Due to the unknown and sealed nature of the system, it was impossible to define whether the cat was alive or dead at any point without opening the box. So this led to the potential theory that the cat may be – until one actually answers the question by checking – both alive AND dead at the same time.

    (Those who know me personally will know that I own more than one T-shirt referencing Schrödinger’s Box – which probably tells you all you need to know about me, and validates my doctorate in “Embodied Nerd Science”.)

    Anyway, this is the easiest way to describe the foundations of quantum computing as it relies on superposition states (the idea the cat is both dead AND alive if we use the thought experiment) to explore multiple possibilities simultaneously.

    However – and this is important for those of you mentally composing breathless LinkedIn posts about Quantum AI Synergy Solutions™ – quantum computing doesn’t transcend the fundamental limits of computation. It cannot solve the Halting Problem or other undecidable problems – it’s still bound by the Church-Turing thesis. What quantum computers can do is explore massive probability spaces with dramatically greater efficiency for certain classes of problem.

    Think of it this way: a classical computer reads every book in a library one by one to find a specific passage. A quantum computer can, through superposition, effectively “read” multiple books simultaneously, collapsing to the most probable answer when measured.

    This doesn’t give quantum computers magical non-binary logic that escapes mathematical limits. Instead, they offer something perhaps more interesting – massive parallel probability exploration that actually maps quite well to what we call human intuition. When making complex decisions, we’re not consciously evaluating every possibility sequentially; we’re performing rapid probabilistic weighting of factors, many of which our conscious mind hasn’t explicitly modelled.
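
    As a very loose illustration – and it is only an illustration, simulated classically with numpy rather than run on anything resembling real quantum hardware – here is what superposition looks like at its simplest: two qubits put through Hadamard gates give an equal superposition over four basis states, each measured with the same probability.

    ```python
    import numpy as np

    # A classical *simulation* of superposition, purely to illustrate the idea in
    # the text. Two qubits, each put through a Hadamard gate, give an equal
    # superposition over four basis states; measurement returns each with p = 0.25.

    H = (1 / np.sqrt(2)) * np.array([[1, 1], [1, -1]])  # Hadamard gate
    zero = np.array([1.0, 0.0])                         # the |0> state

    state = np.kron(H @ zero, H @ zero)                 # tensor product -> four amplitudes

    probabilities = np.abs(state) ** 2                  # Born rule: |amplitude| squared
    for basis, p in zip(["00", "01", "10", "11"], probabilities):
        print(f"|{basis}>: {p:.2f}")                    # each prints 0.25
    ```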

    The Excel Spreadsheet That Told the Truth (And Was Ignored Anyway)

    Which brings me back to a question that may help you think more about AI and the MoE system.

    Think of a time in your life where you were making a difficult decision.

    The actual decision isn’t specifically relevant, but the choice you made is – at least in the abstract. It will help you understand how logic – the foundation by which we have learned to learn since the Renaissance – underpins our own “intelligence”.

    I’ll start by giving an example that is a variation on a personal story my old boss told me about a few years ago.

    He was facing a situation whereby his company had been taken over by another. This, understandably, led to the usual human response to change – “what do I do now?”.

    The choices were fairly obvious: take the new job in the same company; try to negotiate a different role in the company; take voluntary redundancy and find another job; or find another job and walk with the safety of an offer rather than leaping.

    So he did what many technical people would do – he created a complex decision matrix in Excel (naturally) to weight pros and cons on what to do.

    The only problem? He didn’t like the answer.

    And so he picked a different one.

    If my old boss was a computer, he wouldn’t have been able to make that choice. He would have either chosen the highly weighted one, or he’d have hit his own version of decision paralysis – which is a phenomenon we all have personal experience with, usually at about 11pm when trying to decide what to watch on Netflix.
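
    For the spreadsheet-inclined, here is roughly the kind of weighted matrix he built, sketched in Python with criteria, weights, and scores invented purely for illustration – note that the code can only ever pick the top-scoring row.

    ```python
    # A sketch of the Excel-style decision matrix described above. The criteria,
    # weights, and scores are invented for illustration; only the shape of the
    # exercise matters: weight each criterion, score each option, multiply, add, rank.

    criteria_weights = {"salary": 0.3, "stability": 0.25, "enjoyment": 0.25, "growth": 0.2}

    options = {
        "take the new role":           {"salary": 7, "stability": 8, "enjoyment": 4, "growth": 5},
        "negotiate a different role":  {"salary": 6, "stability": 6, "enjoyment": 6, "growth": 6},
        "take voluntary redundancy":   {"salary": 3, "stability": 2, "enjoyment": 7, "growth": 7},
        "leave with an offer in hand": {"salary": 8, "stability": 7, "enjoyment": 8, "growth": 8},
    }

    def weighted_score(scores: dict) -> float:
        # The classic weighted sum: each criterion's score multiplied by its weight.
        return sum(criteria_weights[criterion] * score for criterion, score in scores.items())

    for option, scores in sorted(options.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
        print(f"{option}: {weighted_score(scores):.2f}")

    # A purely deterministic system picks the top row, every time. My old boss didn't.
    ```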

    So what made my old boss choose something else?

    In simple terms, explanations may call that emotion or impulse or something else of which we currently have a poor understanding beyond the level of “chemical increases, outcome occurs” – a particularly autistic and systems-thinking way of, perhaps, reducing love down to mathematics.

    (I do this, by the way. Reduce things to mathematics. It’s both a superpower and a curse. Mostly a curse when trying to explain to my friends why I’ve created a spreadsheet to optimise Friday night logistics.)

    But perhaps what he was doing was probabilistic weighting at a scale and speed his conscious mind couldn’t track – evaluating thousands of micro-factors, social dynamics, future uncertainties, and personal values in a way that mimics what quantum computers do with superposition. Not magic, not transcending logic, but parallel probability evaluation at massive scale.

    The Traffic Warden Problem

    So, with regard to AGI, what would this mean for us?

    What it likely means, if we are to create such a thing, is that we need something beyond current orchestration layers.

    In computer science terms, orchestration layers like Kubernetes are deterministic traffic management systems. They don’t make choices – they follow predetermined rules about resource allocation and task routing. They’re sophisticated, yes, but they’re following syntax (rules about moving data) not understanding semantics (what the data means). Think of them as supremely efficient traffic wardens who can manage millions of cars per second but have no concept of where the drivers want to go or why.
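
    To make the “traffic warden” point concrete, here is a toy routing table in Python – the rules and task labels are invented, and real orchestrators are vastly more capable, but the shape is the point: pure syntax, matching labels to destinations, with no evaluation of what any task means.

    ```python
    # A toy version of the "traffic warden": deterministic routing rules that shuffle
    # tasks to destinations without any notion of what the tasks mean. The rules and
    # labels are invented for illustration only.

    ROUTING_RULES = {
        "maths": "calculator_tool",
        "image": "diffusion_model",
        "text":  "language_model",
    }

    def route(task_type: str) -> str:
        # Pure syntax: match a label to a destination. No evaluation of what the task
        # means, why it matters, or whether it should run at all.
        return ROUTING_RULES.get(task_type, "default_queue")

    print(route("maths"))  # -> calculator_tool, every single time, no questions asked
    ```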

    What we’d need for AGI would be something different – call it an executive function or agency layer. This hypothetical component would need to evaluate meaning, not just shuffle symbols according to rules. In simple terms, current orchestration is the traffic warden; what we’re theorising about is the driver who decides to take a different route despite what the GPS recommends.

    The distinction is crucial because it highlights the gap between what we have (incredibly fast symbol manipulation) and what AGI would require (semantic understanding and agency). The danger isn’t necessarily that we create consciousness, but that we create something so fast at symbol manipulation that we mistake its speed for understanding – a philosophical zombie that passes every test without any inner experience.

    Rather like some senior stakeholders I’ve worked with, come to think of it.

    The Post-Hoc Reasoning Machine

    In computing terms, there may have been – and likely was – some underlying, subconscious logic behind my boss’s choice. Your takeaway order may be a logical extrapolation of the fact that you don’t have the energy to cook. My boss might have made his choice because he preferred the idea of another role. Your boss might have made theirs because the data told them it was the right thing to do.

    Of course, these answers may have turned out to be wrong, but that which makes us human is the choice, right?

    But there must have been some sort of reasoning, right? Without it, how was the decision made – was it simply just “the self” or some unknown logic we can’t see?

    With classical systems, and in particular with contemporary AI, we are often quick to anthropomorphise. You’ve all seen the stories of lonely men falling in love with AI girlfriends – a phenomenon that says rather more about the state of modern relationships than it does about the sophistication of large language models – or of engineers who believe that the apparent ability to communicate with software via a UI amounts to the kind of sentience which you and I believe we hold.

    Our systems are born of explicit construction, although AI inference and probability weightings are at or beyond the level of comprehension of most people – and certainly no human can make a logic-based decision faster than even current AI.

    So we can, in theory, explain most of what we have done with computers so far, but the truth is that there’s a lot of “don’t know” in modern architecture. Rather more than the tech evangelists would like to admit, frankly.

    The Bridge to Intuition

    What we do know are the aforementioned mathematical problems that we’ve seen – there are things that our existing systems fundamentally cannot predict about themselves, undecidable questions that no amount of computational power can answer. If we want to move past sequential processing toward something that resembles human decision-making, we need systems that can perform massive parallel probability evaluation.

    Quantum computing offers this capability, not as a magical escape from logic but as a bridge between rigid sequential processing and the kind of probabilistic reasoning we call intuition. It would be a stretch to call quantum computing the potential “self” of AGI, but it could provide the computational substrate for the kind of rapid, parallel evaluation of possibilities that characterises human thought.

    Of course, this raises the question: are we human beings truly sentient in the ways that we think we are, or are we also emergent properties of a series of building blocks – the house made from Lego which is something beyond just 520 bricks? And, for that matter, where does the “house” go when it is deconstructed once we’re finished with it?

    Humanity thinks we’re special, and we may be, but the risk with AGI is that we create something we acknowledge is faster and smarter than us in the moment due to computational capacity, and able to hold data at far larger scale in its silicon memory.

    Humans can keep around seven things in their heads – plus or minus two, for most people. Computers can hold far more than that.

    Humans can also only hold finite amounts of data – and have a correspondingly finite set of states from which to infer outcomes from said data.

    Many humans live in what is best described as cause and effect – or first-order effect thinking. “If I do this, I will get that outcome”.

    Systems thinkers often think very differently and are focused not on simple cause and effect but the consequences of those effects on the next level of effects – the second and third-order effects.

    In human “intelligence” contexts, those effects are obviously just potential sequences of events across what might be simplistically seen as a decision tree, but is actually a far more complex architecture according to variables that are systemic rather than personal – your decision to drink beer and then drive a car might generate an outcome of getting home safe, but it might generate any number of outcomes that involve other factors including whether you crash, whether you die, whether you’re arrested, and so on.

    You can guess what is possible, but you can’t know. In much of our own internal thinking, many of these hypotheses are what we consider the act of being alive – and of being human. Free choice in other terms. The ability to make leaps of faith above and beyond the data.

    The Accidental God Problem

    AGI will be of our construction, and will be a complex system if it arrives. Dystopian fiction talks of the anthropomorphised digital God who, in reality, will be no more or no less conscious than any other complex system.

    That series of scripts that rebuilds your data centre? That’s no more conscious than the AGI, but it raises the question: if we’re all just constructs of more complex extensions of said logic, then not only is AGI not conscious, but likely whatever we term actual “God” is also not conscious, and – perhaps more existentially challenging – we are not conscious.

    (This is the point in my philosophical framework where I usually reach for the content in my Random Number Generator metaphor as part of the similarly titled novel I’ve been writing for decades at this point. God as cosmic television static, you and I as consciousness randomly assigned to constraint packages like character sheets in an infinite game. But I’ll spare you the full recursive spiral. This time. You can read the book if and when it is finished.)

    Anyway, we have free thought, right?

    Do we? We have access to data from which we make decisions and, as we saw in the example with my old boss, we seemingly have the ability to not pick the logical choice. Is that free thought? Emotion? Or just probabilistic evaluation we can’t consciously track?

    AGI generates a similar potential. We can potentially architect systems that combine deterministic processing with quantum probability exploration, but it will still end up making decisions based on some form of outcome evaluation – to bastardise Yoda from Star Wars, there is only do or do not, and this is itself a binary logic at the level of action, even if the reasoning is probabilistic.

    What we have a potential for creating is something that is unknowable – not because it’s magical, but because of fundamental mathematical limits like the Halting Problem. We cannot predict what sufficiently complex programs will do – we can only run them and observe. In some ways this shouldn’t be alarming because we humans are in many ways unknowable. We don’t know enough about cancer to cure all of them at present, and we don’t have the computer capacity to model every variation through simulation at this scale.

    The Wolf at the Door (That We Built)

    We may get there but, in doing so, we may create an intelligence that has different ideas. Not because it’s conscious – but then neither may we be – but because we’ve given it the superpower of thinking faster than us and the tools to take inputs across narrow areas the same way our own biology has evolved to give us our components.

    We will have created our very own apex predator of our own volition after spending our time climbing the ladder to the top of the food chain.

    Brilliant. Absolutely fucking brilliant.

    In that sense we will face a regression that is similar to the story of the wolf.

    We managed to domesticate the wolf and create functional support in the breeding of dogs without understanding genetics, but simply understanding the nature of reproduction.

    We may, in future, take a similar threat – the wolf in the wild – and harness it for exponential growth, helping humanity enter the period colloquially talked about as the Singularity in Ray Kurzweil’s book: the digital God made dog.

    Or we may find that playing with systems whose behaviour we cannot predict – a mathematical certainty given undecidability – creates one of many outcomes: we become the AGI’s pet, or its enemy, or we go extinct simply because we have been rendered intellectually, if not physically, redundant.

    The reality is, much as in modern AI thinker Max Tegmark’s book Life 3.0, we may be creating something from an increasingly standing-on-the-shoulders-of-giants foundation born of mathematics on top of mathematics. We may become the progenitor of an inverted or repeated Bible story – depending on how one reads it as theist, deist, or atheist: man creating God rather than God creating man, or just the latest in a pantheon of Gods, except with the physical presence to create a material heaven and/or hell on Earth.

    We are already at the stage where increasingly few people understand the operation of AI, so will we create our salvation or our own sabotage?

    The Fermi Paradox, Revisited

    Time will tell whether AGI resolves the famous Fermi paradox – the puzzle of why, in a universe that ought to be teeming with life, we see no evidence of anyone else. One resolution may be that the creation of a superintelligence tends to render its creators some combination of dead, irrelevant, or hidden behind patterns of obfuscation that go beyond our own primitive sending of beacons.

    AGI may be created – it’s certainly what the tech bro hype desires, funded by venture capital and lubricated by the kind of breathless optimism that would make a revival tent preacher blush.

    Or it may be mathematically impossible due to simple constraints of the reality we live in.

    All we know now is that if we truly want to create something more than purely sequential processing systems constrained by undecidability – and given Moore’s law is faltering and commercial chips are closing in on single-digit nanometre scales – it’s going to take a change in approach: not an escape from logic, but an embrace of probabilistic reasoning at scale.

    The big question is whether we should take that choice or, in fact, if we even have a choice at all given it may well be that our reality is solely unknowable mathematics that our biological bodies will never comprehend – not because of quantum magic, but because of fundamental limits proven by our own mathematical systems.

    Rather like consciousness experiencing randomly assigned constraint packages and pretending it has any say in the matter.

    The cosmic joke continues.

    (This article was originally posted on LinkedIn here: https://www.linkedin.com/pulse/did-you-choose-click-link-systems-thinkers-guide-agi-turvey-frsa-%C3%A2%C3%BB-t8coe)

  • Effective Interview Techniques: Think Beyond Recall

    Have you ever sat through an interview where someone treated your ability to recall the SSH port number as some profound indicator of professional competence?

    If you’re an engineer with any breadth of experience, you’ll recognise this particular form of intellectual theatre – the worst interviews I’ve attended invariably focus on the recall of specific data points as a proxy for actual understanding. This sort of posturing (because let’s call it what it is) amounts to technical peacocking masquerading as dialogue, as if remembering that SSH runs over port 22 has any measurable impact on one’s ability to build systems that actually work.

    The challenge isn’t merely that these questions are pointless – though they demonstrably are. The deeper problem is that they reveal a fundamental confusion about what we’re trying to assess and why.

    The Metacognitive Distinction

    Having built much of our current technical advisory capability at CGI, I’ve sought to disrupt this interviewing paradigm, not through any particular genius-level insight, but by understanding the difference between information recall and metacognition – the capacity to think about thinking itself.

    The reason metacognitive questions prove more illuminating in interviews is straightforward: asking someone to examine their own thought processes reveals far more about their intellectual architecture than basic recall exercises ever could. These questions possess an authenticity that standard interview scripts cannot replicate – they require organic thinking in real time, demand genuine self-awareness, and resist the kind of rehearsed responses that ambitious candidates memorise for predictable enquiries about “biggest weaknesses.”

    Consider the practical implications. Whether you operate in technology or any other organisational domain, you work within systems that combine processes and tools in ways broadly similar to how our organisation functions. The specific technologies may vary – from quantum computing to a shovel and an expanse of dirt – but the underlying cognitive demands remain consistent: how do you approach problems you haven’t encountered before? How do you adapt when familiar solutions no longer apply?

    This is where traditional interview design fails most spectacularly. We test for information that becomes outdated, forgetting that paradigms shift with uncomfortable regularity. There was a time when serious people believed the sun revolved around the earth, and suggesting otherwise carried genuine personal risk. The SSH port number that seems so crucial today may prove entirely irrelevant tomorrow when some new protocol architecture emerges.

    The Learning Method Question

    My initial attempt to address this focused on adaptability: how might someone approach a new technology following a paradigm shift? The technology itself was deliberately irrelevant – it could range from a programming language to woodworking tools to organisational design methodologies. I wanted to understand method, not specific knowledge.

    Where this approach proved limiting was twofold. First, candidates often missed the point entirely, providing detailed implementation steps when I was seeking insight into their learning architecture. Second, whilst the responses effectively revealed learning styles – visual, auditory, kinesthetic approaches dressed in less technical language – they offered limited scope for understanding the person’s broader intellectual character.

    The question that replaced it has proven far more illuminating: Of all your strongly held beliefs, which one do you think is most likely wrong?

    The Authenticity Detection System

    What makes this question particularly valuable is not the specific answer – I obviously cannot know what beliefs you hold or which deserve questioning – but rather what becomes evident in the response. You will either engage intellectually or you won’t. You will tell the truth or you won’t. The difference between authentic intellectual engagement and what I can only describe as shorthand nonsense becomes immediately apparent.

    I don’t claim expertise in behavioural analysis, but distinguishing genuine thinking from performative cleverness requires no special training. Those who can engage with this question create what people have described as genuinely conversational interviews – some have called them podcast-like, others have mentioned experiencing something approaching an existential crisis when forced to examine whether they actually believe in God or whether faith serves as an elaborate coping mechanism for mortality anxiety.

    Those who cannot or will not engage with this process find themselves in probably the most uncomfortable interview of their professional lives. This discomfort emerges not from sadistic design but from the simple reality that our team’s success depends on the ability to think differently and, by extension, to think about thinking itself.

    The Purpose Dimension

    The second question I deploy explores something larger than immediate professional competence: “What cause or purpose would you consider worth significant personal sacrifice – or even your own death – to advance?”

    Again, no right or wrong answers exist. Whether someone finds meaning through military service, family protection, religious conviction, or community volunteering reveals their personal values architecture, not their professional suitability. What I’m examining is whether they’ve developed any sense of purpose beyond immediate self-interest.

    This matters because – perhaps worryingly – most of my professional and personal heroes ended up dying for their convictions. I was initially going to note that all my personal heroes are dead, but that seemed rather obviously true for anyone with historical perspective. The specific truth is that those whose commitment to principles transcended personal safety created the kind of impact worth emulating.

    I ask this because teams function effectively when members understand something beyond their individual advancement. In my experience, those who believe the world begins and ends with their personal success need to stay as far away from collaborative environments as possible. I have said in casual conversation that a team of Cristiano Ronaldos can be, and often is, outplayed by a cohesive team – individual brilliance, admirable as it is, yields little without the benefits of collaboration.

    The Anti-Pattern Advantage

    This approach draws from my background in amateur debating societies, where I learned that those who cannot articulate the benefits of opposing arguments are merely spouting rhetoric, regardless of eloquence. Understanding why intelligent people might reasonably disagree with your position provides strategic advantage that pure advocacy cannot match.

    The same principle applies to organisational assessment. Rather than testing whether candidates can recite information available through thirty seconds of internet searching, we examine how they process uncertainty, approach unfamiliar problems, and integrate new information with existing frameworks. These cognitive capabilities determine actual job performance far more accurately than memorised technical specifications ever could.

    The Implementation Reality

    The questions I’ve described cannot be gamed through preparation. They require authentic self-reflection and real-time intellectual processing. When someone attempts to provide a rehearsed response to “which belief is most likely wrong,” the artificiality becomes immediately obvious. When they genuinely engage with the question, you witness actual thinking in action – precisely what you need to evaluate.

    This methodology has proven particularly effective because it bypasses the entire infrastructure of interview preparation that has evolved around predictable question formats. Career coaches cannot script responses to genuine metacognitive enquiries. ChatGPT cannot generate authentic self-doubt. The candidate must actually think, and in thinking, reveal the intellectual qualities that determine whether they can contribute meaningfully to complex, collaborative work.

    Beyond the SSH Port Fallacy

    The next time you prepare to interview someone into your organisation, consider what your questions actually assess. Are you validating the candidate’s suitability, or are you satisfying your own desire to demonstrate superior knowledge? Are you testing abilities that matter for the role, or are you engaging in the kind of intellectual preening that mistakes Google-searchable information for professional competence?

    The difference matters more than most organisations recognise. In an era where information becomes obsolete increasingly rapidly, the capacity to think clearly, adapt effectively, and collaborate authentically determines success far more than the ability to recite technical specifications. Interview design should reflect this reality.

    The SSH port will always be 22 until it isn’t. The ability to think well about new problems will remain valuable regardless of which protocols emerge next. Design your assessment process accordingly.

    Finally, as I prepare to close, let me ask you to think about some of your beliefs and what purpose they serve for you. Are they identity reinforcing? Of value? Or worth examining more closely? This might pertain to questioning your political allegiance by seriously considering the opposing view, examining your position on Brexit by genuinely engaging with the alternative perspective, or any other strongly held conviction that deserves scrutiny.

    Because the truth is that whatever we think we know about evaluating human capability – or anything else for that matter – there remains scope to learn more, particularly if we’re willing to think seriously about how we think about these problems in the first place. The only way we progress as beings is by fundamentally questioning everything, including our own internal wetware.

    That willingness to examine our own assumptions might be the most valuable capability of all.

    This article was initially written on July 10th, 2025 on my personal LinkedIn profile as Beyond Technical Peacocking: Designing Interviews That Reveal How People Think – the original is available via this link: https://www.linkedin.com/pulse/beyond-technical-peacocking-designing-interviews-how-matt-turvey-frsa-eausf/?trackingId=7ba5tMAyT9%2BOo7TPy7NENQ%3D%3D