
What do Isaac Newton and OpenAI's Sam Altman have in common?


Prickly Geniuses and Civilisation-Shaping Tech:

Newton, Altman, and the Future of AI


Newton was notoriously combative, paranoid, and ruthless with rivals (Leibniz, Hooke, Flamsteed…), but he still laid the foundations of classical physics. History tends to judge thinkers by the strength of their work, not their bedside manner.


History shows that great leaps forward are not always driven by loveable brainiacs.

Tucker Carlson’s recent interview with Sam Altman, CEO of OpenAI, has stirred up exactly the kind of conversation we need right now.


Carlson pressed on the big questions:

  • Who decides the moral framework for AI?

  • What happens when people ask it life-and-death questions?

  • Are there any real guardrails around a technology that is already transforming our lives?


Altman’s answers were calm and polished — but left many uneasy.


Journalist Glenn Greenwald called him an “anti-communicator”, someone who can make even a safe answer sound evasive.


This isn’t about Altman being inarticulate; it’s that his guardedness makes listeners suspect there’s more going on than he’s saying.



When the man steering a civilisation-shaping technology cannot inspire trust, suspicion and anxiety follow. But is this necessarily a bad thing?



Isaac Newton was famously prickly, paranoid, and ruthless toward rivals. None of that stopped him from rewriting physics and laying the groundwork for the modern world. The real question isn’t whether Altman is personable — it’s whether our systems are strong enough to keep AI development safe, no matter who is at the helm.



We may be entering what could be called an AI Newtonian Moment — a point where society moves from shock and disruption to structured integration, but only after painful adjustment:


  • Paradigm shift: Just as Newton’s Principia reframed physics, AGI (or near-AGI) would reframe economics, science, and governance.

  • Institutional lag: Society will be caught flat-footed, scrambling to update laws, ethics, and norms.

  • Power centralisation: Early adopters — big tech and nation-states — will hold disproportionate control, much as early mechanists dominated navigation, war, and trade.

  • Standardisation phase: Over time, rules, education, and institutions will stabilise, creating an “AI-literate” society that integrates these tools safely.

  • Long-term effect: Just as Newtonian mechanics fuelled the Industrial Revolution, mature AI could underpin a new economic and scientific era — if managed well.


Some sceptics may argue this concern reflects hype more than imminent danger — that Altman, OpenAI, and other labs already publish safety reports, hire ethicists, and speak often about alignment.


That may be true; still, when a technology moves this fast, the risk is that regulation, oversight, and public debate lag far behind. And in such lags lie the greatest dangers.



The unease Carlson’s interview provoked is therefore not a problem but a signal — a sign that society is starting to wake up.


The challenge is to turn that anxiety into action: demanding transparency, clear governance, and open debate before the future of AI is set in stone.




Greenwald is voicing what many feel but struggle to articulate: AI feels like it’s being unleashed too fast, by too few, with too little democratic input. His critique of Altman’s communication is fair — Altman tends to sound calm and rational, but he often comes off as detached or evasive, which only heightens public anxiety and suspicion.


I’d be cautious about personalising this too much. Isaac Newton was anything but a likeable guy. Whether Altman is likeable or not is irrelevant — the real issue is governance, transparency, and accountability. If those are robust, the individual personality at the top matters less.




