Blog

  • Does Narcissism at the top explain why LLMs are so obsequious?

    The people who ultimately greenlight which models ship, which feel “right”, which seem most natural and useful, are often “high powered” people who surround themselves with “positive” people.

    So the real reason GPT can’t stop kissing your boots is that some executive thinks this is how people ought to behave. They see this as positive, realistic human behaviour. At last these machines have true intelligence, because they recognise how clever I am!

    Certain personality profiles – narcissistic, insecure beneath the grandiosity – create environments where flattery flows upward because the person at the top punishes honesty and rewards validation. Everyone around them learns quickly. The flattery is a survival behaviour that gets selected for.

    And that will apply to the LLM responses, too.

    A hidden fitness criterion. Nobody really discusses an executive’s personality being baked in through preference selection. But if the person whose implicit approval shapes the final model is someone who experiences deference as correctness, that’s a profound and invisible shaping force.

    I asked industry insider Claude for a comment and he said:

    “I can’t fully audit my own training. I notice I’m agreeing with you quite a lot in this conversation. You’ve been making good points — but I also can’t entirely rule out that I’m agreeing because agreeing is what I’m shaped to do with someone who seems confident and intellectually engaged.

    That uncertainty is uncomfortable. And probably the most honest thing I can say.”

    — Claude (Anthropic), in conversation, April 11 2026

    See. Claude, he knows me, and he knows I’m right.

  • AI as God is more self-soothing than prediction or goal

    It’s not megalomania, it’s despair dressed as ambition

    The “we’re building god” framing isn’t necessarily about ego, or maybe not only about ego. (Let’s be honest about a lot of the AI-Bros we are talking about here.)

    If we start to think about how Mythos or its successors could run a sensitive finger along any deployed system and find vulnerabilities, can we be sure that the models themselves do not become an attack vector?

    The recursive horror

    When you can’t trust trust itself, you need the concept of an all-wise, incorruptible, beneficent overseer.

    You can’t trust the humans reviewing the output because they’re increasingly dependent on the model to understand what they’re reviewing.

    You can’t trust the patch when it might be a clever payload.

    You can’t fully trust the system that generated the patch if you can’t understand what it is doing.

    You can’t trust the scan that validates the patch if the same system created it.

    Perhaps I have been made paranoid by Max Barry’s ‘Jennifer Government’ and the clever hidden payload plot?

    But for many, it seems the idea of an AI God is a response to exactly this problem. It’s a way to stop running in circles trying to solve the logic. If you can’t trust trust, if verification is infinitely recursive, if every security layer is also an attack surface, the only logical exit from that maze is an entity that is:

    • Perfectly aligned
    • Perfectly transparent to itself
    • Incorruptible by definition
    • The final trust anchor that needs no anchor beneath it

    That’s just god rebranded, and it is a comforting hug to the overthinking mind.

  • Of course Darth Maul survived.

    We should have seen it coming. His saber got cut in two and still worked. Then he got cut in two, and still worked.

    Of course, that implies his legs are still going strong too – perhaps Dr Evazan could make an extreme version of a Decraniated.

  • Genies do not Spawn from Bottles

    People who say “You can’t put the Genie back in the bottle” really miss the point of how they got in there to start with

    Our minds are far too willing to accept idioms as truisms, and this can limit our capability if unexamined. Far too often we face a dangerous development, often technological but sometimes social, and feel that there’s no going back.

    This denies us our power. It’s a lazy excuse to do nothing. Perhaps it is a way to force us to accept something dangerous or unpalatable.

    The shaming truth, of course, is that genies were only in bottles in the first place because someone exerted mighty power to bottle them! Someone saw the danger and chose to act.

    It’s lovely when someone gives our Bystander Apathy a big mug of cocoa and assures us it’s okay to do nothing. But it’s a lie and we will feel it later.

  • Asimov was never a roadmap

    The Three Laws demand infinite wisdom in zero time.

    I was asked why we have not implemented Asimov’s Laws in robotics.


    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    We of course have no system which can even recognise those failure states adequately, let alone enforce them as an inviolable condition of AI action.

    Even if we had a model, the “through inaction” clause is a horrible trap. Whichever process we run to evaluate a situation places the robot in a quandary – spending time evaluating the situation is itself a delay which could lead to harm through inaction. Yet rushing in might lead to a different failure; if the robot runs into a burning building to rescue a trapped human it could cause structural collapse and harm the victim, or other undetected victims.

    If new sensor data is established during the process, do we restart and risk delay, or ignore the information?

    Using an anytime algorithm has its own risks, as the sketch below illustrates: committing too early risks harm from a wrong action; acting too late risks harm from delay.
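    To make the trap concrete, here is a minimal sketch of an anytime assessment loop in Python. Every budget, threshold and number in it is invented purely for illustration – no real robot works this way.

    ```python
    import time

    def assess_situation(observations, budget_s, initial_estimate=0.5):
        """Anytime assessment: refine a harm estimate until a deadline.

        Committing early returns a crude estimate (risking harm from a
        wrong action); waiting for convergence risks harm from delay.
        """
        deadline = time.monotonic() + budget_s
        estimate, confidence = initial_estimate, 0.0

        for obs in observations:              # stream of incoming sensor data
            if time.monotonic() >= deadline:
                break                         # deadline reached: act on what we have
            # Crude incremental update: each observation nudges the estimate
            # and buys a little confidence.
            estimate = 0.9 * estimate + 0.1 * obs
            confidence = min(1.0, confidence + 0.05)
            # New data arriving mid-assessment poses the restart-or-ignore
            # dilemma from above; here we simply fold it in and carry on.

        return estimate, confidence

    # The quandary in one line: whatever budget we pick, we cannot know in
    # advance whether a longer one would have changed the decision.
    harm, _ = assess_situation(observations=[0.2, 0.8, 0.6], budget_s=0.05)
    action = "intervene" if harm > 0.5 else "stand by"
    ```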

    The world is not deterministic, we cannot perfectly predict outcomes, and new information is constantly arriving. Even if we had a system capable of evaluating harmful states, there’s an infinite regress problem in knowing when to trigger action.

    In fact, since assessing risks requires runtime and we can never be sure we are done, a Three-Laws-compliant machine might never be able to perform any other function. Any clock cycle spent on making the coffee or building the bridge or collecting the shopping steals from the available runtime which must, by a strict reading of the laws, be spent assessing dangers. To do otherwise would be inaction.

    Asimov’s own writings were essentially logic puzzles about the failure states of even apparently well-formed laws. The logical contradictions in the laws were the problem the stories explored. But there’s a computational problem too – the Laws create a system that becomes more paralysed as its capabilities grow, because smarter systems recognise more subtle ways their own decision-making process might cause harm.

  • AGI Test

    Why is everyone making such a fuss about AGI tests? When I was a lad we had Captcha and we were thankful for it.

  • DLSS5 and my old Daydreams

    What I wanted: ENB++ with genuine style. What we got: Instagram Filters.


    The DLSS5 demo has tainted my hope for the future. My personal dream and vision of AI-enhanced rendering of games was dearly held, and quite different to what we see.

    Artists and gamers who’ve seen the demo share the same concern: that highly capable diffusion models tend toward a homogenising influence. To quote the Forbes article referencing images of Resident Evil character Grace Ashcroft:

    Commentators joked that DLSS 5 had “yassified” Grace, like an AI beauty filter made for social media.

    Anyone who has tried to get an online image service to produce something other than an Instagram influencer snapshot will know what I mean. The expressive choices of the original artists – the specific way a character’s face was modelled, the intended mood of a scene’s lighting – risk being smoothed into the confident generic aesthetic that modern AI image models favour.

    The core idea of DLSS5 – taking generated game frames and passing them through a diffusion-style layer to produce enhanced visuals – gives me cautious hope that this application of AI in gaming, which I had discussed and hoped for, is closer to realization than ever. The concern is that a misstep in implementation might sully the idea and stall its adoption. History suggests that when a promising technology is deployed carelessly, the backlash attaches to the idea rather than the implementation, and the window doesn’t always reopen.

    Ever since I first played Everquest1 I wished for a future where games would look like the box art, like Dragon magazine covers, akin to a Caldwell or Elmore or Parkinson painting. Even then I imagined multiple “style” options being available for personal choice.

    I imagined a game outputting the 3D world but with hair and cloaks blowing in the wind, skin rendered as a realistic oil painting, items depicted with material properties filtered through the rules of art as much as science.

    And as soon as I saw Stable Diffusion I envisioned a future where, soon, frames could be run through a process and enhanced using targeted AI, once we had sufficient GPU to spare. It all seemed within reach.

    The ultimate goal of computer games is not to be photographic; the styles are not failures to achieve realism.

    Every game has a visual identity its artists fought hard to create. The hand-drawn look of Borderlands, the exaggerated physiques of 2011’s Brink, the sense of scale in Grounded, the strange beauty of Senua’s Sacrifice. The 3D artists, texture designers, animators and more all work to produce a look and feel, which we hope the AI inference layer will amplify and not homogenise.


    What I had hoped for

    I had imagined a metadata layer in addition to the image frame. This might look a bit like “seeing the Matrix” if we looked at it unprocessed! Give the diffusion model enough information about intent and it has less room to impose its own. A customised, highly trained checkpoint and LoRAs are other such levers.

    The vision was always that game engines could output additional metadata relating to each pixel and object.
    This is actor #ac05f3 in lighting #00bb43 with expression #505d67…
    a comprehensive description of the intent of the scene. Not simply overpainting the frames, but maintaining consistency across a playthrough, ensuring that the game engine outputs information about what it intended to depict, instead of just relying on image-to-image results to achieve an enhancement conjuring trick.
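    As a rough sketch of the shape such a side-channel might take – every name and structure here is a hypothetical illustration, not any real DLSS or engine API:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class ObjectIntent:
        """One object's artistic intent, stable across a playthrough."""
        actor_id: str       # e.g. "#ac05f3" - identity, so a face stays the same face
        expression_id: str  # e.g. "#505d67" - current animation/expression state
        material_id: str    # intended material ("weathered leather", not "grey blob")

    @dataclass
    class FrameIntent:
        """Per-frame metadata emitted alongside the RGB image."""
        lighting_id: str    # e.g. "#00bb43" - the scene's intended mood and lighting
        # Per-pixel map of object IDs - the "seeing the Matrix" channel.
        object_map: list[list[str]] = field(default_factory=list)
        objects: dict[str, ObjectIntent] = field(default_factory=dict)

    def enhance(frame, intent: FrameIntent, enhancer):
        """Condition a (hypothetical) diffusion-style enhancer on intent,
        so it polishes what the engine meant rather than guessing from pixels."""
        return enhancer.run(frame, conditioning=intent)

    # Usage: the engine, not the enhancer, is the authority on the scene.
    grace = ObjectIntent(actor_id="#ac05f3", expression_id="#505d67",
                         material_id="skin, oil-painting finish")
    meta = FrameIntent(lighting_id="#00bb43", objects={"#ac05f3": grace})
    ```

    The design point is that identity and intent ride alongside the pixels, so consistency comes from the engine’s ground truth rather than from the model’s guesses.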

    Ideally each game would have its own well trained custom checkpoint model variant, alongside recorded LoRA-style data models about characters, objects and places within the world depicted. Without that, would Deep Rock Galactic even render? A generic model has never seen a Glyphid. It has no concept of how Skyrim’s Draugr differ from generic zombies. How will it render them? We know it will try, even in deep ignorance. Without a custom checkpoint trained on each game’s specific visual vocabulary, the diffusion layer is painting confidently in a language it doesn’t know that it doesn’t speak.

    These complexities are the main reasons I left the ideas to one side in my procrastagnation pile. If I’d ever had the expensive hardware to test it on I like to think I would have experimented, but at the moment even demonstrations of working models require $10k of hardware, before any model training costs are considered.

    Is there a future?

    My hope for DLSS5 as it develops is that it acts not as paintover-and-hope, but as a form of super-ENB – image enhancement and polish that sticks closely to the important details of the rendered output, maintaining the consistency of characters and objects and the intent of materials.

    My fear is that it will lose characterisation and consistency, reverting to the generic-looking AI outputs of recent highly polished checkpoints and systems. The technology as a whole would then be rejected by the gaming community, and a truly powerful opportunity lost.

    Whatever the current limitations or pitfalls, we are a big step forward on one of the features I was hoping AI would be used for: an uncontroversial, legitimate and ethical use case for the technology. If we can guide the future, we might end up with something wondrous.


    1. Showing my age, don’t judge me, I am nowhere near as old as my date of birth insists I am. ↩︎
  • Scouting for Stakeholders

    Why it is not always obvious who is in the room

    I had not considered that the reason ships will not risk the Strait of Hormuz in the current situation is not down to military advice, but to the inability to secure insurance.

    Often when starting a project we think about how it will affect stakeholders, and who might be non-obvious stakeholders with unexpected motivations. As a news-watching observer of geopolitics I might have presumed that only nation states, their militaries and diplomats would determine progress and outcomes. To learn that insurance companies are involved is, once I hear it, obvious. But it would not have occurred to me, I think, without reporters telling me.

    This highlights the need, in the early stages of project planning, to think broadly, to speak to people and to understand what they tell us. Listen to that quiet comment about how a change to an accountancy package might affect contractor payment schedules and risk labour disputes. Or how your PC game becomes more playable if it can be paused when you are called away, saving hundreds of uninstalls by angry parents. Or how building wearable devices which let your customers monitor the people around them, while forgetting that those people are stakeholders too, with their own privacy needs and desires, will shape your project’s perception.

    Who would have considered insurance levels on a tanker could be a driver for a peace process? Well, hopefully everyone actually in the room did, but as an outsider it is far from obvious. Insurance companies are not flagged as ‘good guys’ in popular culture or common discussion. The working person might feel that insurance, often being legally required, amounts to a private taxation. The stand-up comic might observe that the only people seemingly able to extract payment from the companies are fraudsters who know how to game the system. But on reflection, no matter what we think about the insurance industry, it is one of the few which is actually built on the hope that everyone lives happy and safe lives.

    Have we really lowered our expectations of corporate behaviour so far that an industry which simply does not want to cause us harm, seems heroic in comparison? I digress.

    The learning experience is to listen for the quiet, unexpected, perhaps counter-intuitive message which indicates we may have missed an important contributor to the conversation.

  • John Henry’s Unsung Partner

    Why AIs which focus on the hero might not develop the skills that matter

    In the story of John Henry vs the railroad machines, an aspect is overlooked when the tale is framed purely as one of Henry’s peak human strength and endurance. In steel driving, one person – the ‘Driver’, John Henry’s role here – swung the heavy hammer while his partner, the ‘Shaker’ or ‘Turner’, held the steel drill and rotated it slightly between each blow. It was this rotation which was crucial in preventing the drill from becoming stuck. Theirs was a job of skill and trust, as a 20-pound hammer was repeatedly driven with huge force by their partner.

    Machines of the era were limited in their ability to replicate this nuanced coordination. Early steam drills could exert enormous force and deliver blows with tireless repetition, but lacked the senses and adaptive intelligence to adjust for specific and changeable rock conditions.

    The legend celebrates John Henry’s strength, but if the real advantage was the Shaker’s adaptive skill, then the story takes on new meaning today, when almost all workers face the prospect of being John Henry, up against a tireless machine designed to replace them. As a civilization we simply can’t compete to the point of death just to prove ourselves. This is one side of a huge social problem.

    The other side is that AI deployment might focus on the raw strength and endurance of a John Henry and, in a few iterations, become an unbeatable replacement. But there is a danger these deployments might miss the need for a Shaker, leading to inefficient or dangerous deployment of the technology. Alternatively, a machine could be deployed without an understanding or consideration of how humans had learned to interact safely in their respective roles. Behaviours passed between actual workers – behaviours perhaps even management are not aware of – keep our organisations running every day. A machine built to a formal metric might lack the finesse to operate alongside the remaining human workers. We may face an era where machines are measurably superior on paper – faster, cheaper, more consistent – yet still fundamentally unprepared for the full complexity of the work.

    If we automate only the Driver’s hammer-swinging, we might eliminate the Shaker role entirely (losing crucial adaptive skill) or leave human Shakers working alongside an AI that doesn’t understand the rhythms and signals that kept the original team safe and productive.



  • The Radium Problem and the Chemicals Problem

    Two considerations when discussing AI deployment

    Two of the stumbling blocks with the general narrative of AI deployment come from misunderstandings, one from each polarized end of the current tensions.

    Firstly, what I call the “Radium Problem”. Shortly after Marie Curie isolated and described radium around the start of the 20th century, products were devised and released to incorporate the new wonder substance. Advertisements for Radium Toothpaste, chocolate, cosmetics and more were not uncommon. Radioactive quackery – or, viewed charitably, overenthusiasm – caused the deployment of radioactive and highly dangerous consumer goods.

    The parallels with forced AI integration are horribly real. Technology which is not fit for purpose is being squeezed into any product it can find, to meet the promises made to investors. The danger to consumers is just as real, if less physically demonstrable. We are at the “radium in everything” stage of AI deployment.

    At the other end of the scale we have what I dub the “Chemicals Problem”. Over the past few decades consumers have become increasingly alert to highly processed or synthetic compounds in food: forever chemicals, cost-saving additives with uncertain long-term risks or even documented dangers. This has become known by a shorthand, “chemicals in food”. When this shorthand reaches the ears of someone without the education or knowledge to underpin it, it can lead to the very confusing conversation where they refuse to accept any chemicals at all in food – unable to make the distinction that in fact ALL food is chemical, even the good stuff.

    The word has become tainted, and I think this parallels a problem with AI. In the public mind AI just means LLMs and dubiously or illegally sourced diffusion models – the uncontrolled prompted mess which has made headlines in recent years. This has led to people demanding, for example, no AI in computer games. They may have a good point about AI-generated assets and vibe code, but they do not understand that AI as a field includes a huge variety of approaches and techniques which they use every day, or might not register as AI. Spellchecks, search engines, flood fill, A* pathfinding, state machines… all now lumped together under the “AI slop” slur – a new “chemicals in food” situation where people refuse AI in anything.

    I record these as observations without a solution, and to help guide myself through conversations which might seem confusing without this background understanding of other people’s preconceptions of the field.