September 7, 2025

The Inevitable ... or is it?

AI has reshaped the backbone of the internet, becoming a ubiquitous mediator of digital activities.
Psychohistory was the quintessence of sociology; it was the science of human behaviour reduced to mathematical equations. The individual human being is unpredictable, but the actions of human mobs, Seldon found, could be treated statistically (Isaac Asimov, Second Foundation).
Our new Constitution is now established, and has an appearance that promises permanency; but in this world nothing can be said to be certain, except death and taxes (Benjamin Franklin, in a letter to Jean-Baptiste Le Roy, 1789).

Last week I went to hear Karen Hao at the University of New South Wales Centre for Ideas. Together with Toby Walsh, Mimi Zou and Joel Pearson, she provided an informative and, I think, balanced perspective on the development of “Artificial Intelligence” as at September 2025.

Hao’s book “Empire of AI”  is well worth reading, mainly because it describes the human journey that has resulted in the technology arms race that we seem to be in at the moment.  What really stood out to me is that whilst the technology itself is not inevitable, the power struggles and personalities that drive it are.

One of Homo sapiens’ key survival strategies as a species has been the ability to imagine different possible futures and to pass on current knowledge to future generations through our collective intelligence and cumulative culture (Ella Al-Shamahi, BBC Human, 2025). We are innately social. Our ancestors constructed a regional microcosm of the current globalised world, and mobility was the connective tissue: individuals who could rely on their social networks during disaster were more likely to survive and pass on their genes (Luke Kemp, Goliath’s Curse: The History and Future of Societal Collapse).

But, it seems, we are very slow to learn.

Which, up until now, has been fine. But we are approaching a time in our evolution where the rate of change around us is becoming too fast for our traditional knowledge systems to mature and adapt. We may be approaching what mathematician John von Neumann called The Singularity.

The ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue (John von Neumann, c. 1950, as recalled by Stanislaw Ulam).

In 2022 I wrote about what I feel is our inability to predict, let alone imagine, the future, describing us as approaching an Event Horizon beyond which we cannot see.

Since then it seems that the world has become even more unpredictable, the gap between those who have and those who have not is widening, and we are gripped by the rush to adopt new AI technologies as the solution to everything, with minimal regard for their potential risks and harms, despite numerous warnings from the technologists themselves (the “Pause Giant AI Experiments” open letter, 2023).

This is what has inspired me to undertake the first ever Masters in Artificial Intelligence being offered by the University of Southampton as a way to order my thoughts through revisiting the work I’ve been doing for the past three decades whilst contextualising it within the current environment.

According to Karen Hao, when ChatGPT burst onto the global stage in November 2022 its creators didn’t expect its rapid adoption, nor the scale of its impact. They were unprepared for the race that they started and then got caught up in the hype and excitement, which meant that they jettisoned many of the guardrails and precautions that needed to be in place. Whilst many were excited by the seemingly sudden advent of Large Language Models, my immediate response was “Oh no! We are so not ready for this!”.

Not long beforehand Hannah Stewart and I had run one of our Smart Machine/Brave Conversations workshops for a cohort of Founders and Coders, within which we had been using various AI models. We showed them GPT-3 and I was surprised that many were largely ignorant of the technologies, but when they used it they were shocked and asked, “Will this mean I’m out of a job?” My response was “Most probably yes.” That was only three years ago.

Since then ChatGPT (GPT-3.5) and its progeny have wowed everyone from coders to creators, CEOs to shop assistants, and increasingly humans are integrating various aspects of ‘artificial intelligence’ into their everyday lives.

For almost two decades people in the Web Science community have been observing and studying the development and adoption of ‘smart machines’, and their history is worth recounting because ChattyG and Groq didn’t just fall from the sky.

Creating Smart Machines

It’s always fun going to conferences with Professor Dame Wendy Hall because Wendy tells stories.  

Wendy began her work in this space as a mathematician at the University of Southampton, where she worked in Hypermedia, which is foundational to the World Wide Web, enabling rich, interactive experiences and flexible navigation. The concept was invented by Ted Nelson, who created Project Xanadu - Ted also invented the term ‘hypertext’, which is all about links - linking data, linking documents, linking ideas.

One of Wendy’s stories is of the 1991 Hypertext conference in San Antonio, Texas. At the time Wendy and her team were developing their Microcosm hypermedia research and development project, but at the conference, whilst most people were more interested in the tequila fountain, a little-known researcher from CERN in Switzerland called Tim Berners-Lee was demonstrating a Xanadu-inspired system called the World Wide Web on his NeXT computer.

“That will never fly” many said.  

We know that it did, and probably for a couple of very simple reasons. One was that whilst people like Ted and Wendy were trying to figure out the best business models for their systems, Tim asked a very simple question:

“How Does the Web reach the Real World?”  

The answer was that he gave the Web away for ‘free’ and relied on the Network Effect to distribute it.

My hope was to create a system that would be fun and that everyone would describe their projects – there would be semantics.  I missed out on putting in the semantics - the Semantics were in the link types but they never really got used.  They allow machines to manipulate reality.  Semantics link the Web to the real world (Tim Berners-Lee at the First International Conference on the World Wide Web, CERN, 25 - 27 May 1994).
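
To make the idea of “semantics in the link types” concrete, here is a minimal, illustrative sketch in Python. The link types, addresses and function names below are invented for illustration only (they are not any actual W3C vocabulary or API); the point is simply that when a link carries a machine-readable relation rather than being an untyped anchor, software can ask how things are connected, not just that they are.

```python
# Illustrative sketch only: a toy store of typed (semantic) links.
# Each link is a (source, link_type, target) triple, so a machine can
# query the graph by relation instead of just following anchors.

from collections import defaultdict

# Hypothetical example data - the names and addresses are invented.
TYPED_LINKS = [
    ("example.org/alice", "authorOf", "example.org/report"),
    ("example.org/report", "cites", "example.org/earlier-paper"),
    ("example.org/alice", "worksWith", "example.org/bob"),
]

def index_by_type(triples):
    """Group links by their semantic type for simple machine queries."""
    index = defaultdict(list)
    for source, link_type, target in triples:
        index[link_type].append((source, target))
    return index

if __name__ == "__main__":
    index = index_by_type(TYPED_LINKS)
    # An untyped web only says these pages are connected;
    # typed links say *how* - e.g. who authored what.
    for source, target in index["authorOf"]:
        print(f"{source} is the author of {target}")
```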

At the recent 2025 Web Conference Wendy told another story, this time about Tim and Ted.  

On the day of the Sixth Web Conference in Santa Clara, Ted Nelson had just lost his Autodesk funding for Xanadu and he received a visit from Tim Berners-Lee at his boathouse. They had a fundamental difference of opinion - Ted sought to commercialise Xanadu through a model of micropayments so that people would be paid in a fair and sensible manner, whilst Tim insisted on giving the Web away for free, for anyone to use and co-create without restriction.

If a scientist wants to meddle in politics and social implications he’s allowed to.  If you’re a hacker and you’re designing protocols and you don’t have a Sociology degree and you feel you shouldn’t be saying anything about the political and ethical issues don’t worry about it, go right ahead, because you’re defining a new society as well as a bunch of GIF files (Tim Berners-Lee speaking at the first Web Conference in 1994).

Ted’s perspective is rather different:

It is my job to design the documents of the future because if I don’t do it the techies will screw it up, and that is exactly what has happened as far as I’m concerned (Ted Nelson, quoted at https://www.notion.com/blog/ted-nelson).

On a personal note, as I listened to Tim Berners-Lee give the keynote at the 2018 Web Science Conference in Amsterdam, he was asked about pornography on the Web and his response was “just don’t look at it”.

Both of these perspectives demonstrate elements of human personality which are naïve and arrogant, the consequences of which take time to be realised. Over the years many have lauded Tim for giving the Web away 'for free' but at the recent 50 Years of the Internet event at the Royal Society in London I heard people saying that in hindsight maybe that hadn't been such a good idea.

Which links me back to Karen Hao and her story.  There is nothing inevitable about how our technologies develop and evolve but there is something inevitable about human behaviour and how it manifests in the search for power, influence, resources, domination and legacy.

Web Science for Smarter Humans

One thing I have learned over the years is that pretty much anyone who has contributed anything to the modern information world has read their Asimov. When Web Science was born in 2006 Tim Berners-Lee suggested calling it PsychoHistory because

Psychohistory seeks to predict possible futures in order to intervene for the most positive (to humanity) outcomes.

Probably good that they didn't because, as Hari Seldon found with the Mule, intervening is a really difficult thing to do.

Instead

Web Science seeks to understand how the Web changes society and how society changes the Web. It studies the theory and practice of the “Social Machine”, of which the Web is the ultimate example (Web Science Manifesto).

Probably its greatest challenge has been Human Tribalism (Scientific American, “The Tribal Instinct”, 2017), which manifests in separate departments and separate conferences, with separate communities and disciplines. Ted and Tim exemplified this in their approaches, as do what Karen Hao now calls the AI Boomers and AI Doomers.

Some communities believe that AI may just be another ‘normal’ technology; some think it will bring about Utopia after a somewhat bumpy road (Mo Gawdat and Roman Yampolskiy); and meanwhile the Tech Bros are in a “race to the bottom” just to satisfy themselves and their investors.

Some governments seem to have a clear plan of where they want to go (China’s recent AI Strategy, and the US Plan with its desire for AI Domination). Other governments are playing with fire through dangerous, ill-informed and reckless alliances (UK and OpenAI). Some, like Australia, cannot see beyond short-term productivity.

One thing we do know is that as our machines are getting smarter there is the potential that we are getting dumber (for examples see medicine, education, creativity).

We are offloading more and more complex sets of processes, leading to what some call “cognitive miserliness”, which means that AI-reliant individuals find it harder to think critically (The Economist).

So how do we approach this new world?

Naomi Alderman talks about The Third Information Crisis and believes that we as humans need to develop the cognitive and emotional tools to deal with the new technological revolution, and that the real benefit of this information age is to help human brains work together.

We need to create “an expanding circle of us” because “machine learning is stored human thinking”.

I really like this analogy, and whilst the Tech Bros believe they have it all under control, the reality is that they don’t, and I for one don’t want such a narrow group of people to have anywhere near that much power.

What we need to do is to prepare as best we can for what is coming, which was the focus of the UNSW event. What can we all do to prepare?

One really positive observation is that whilst a country like Australia cannot hope to compete in technology development, what we can do is harness our cultural values of “A Fair Go, Human Rights, and the Rule of Law” to focus on the human side, developing Change Models and working with partner cultures such as the EU and some Asian countries to push for a Third Way (see https://www.profjoelpearson.com/artificial-intelligence).

The other thing we are seeing is that the younger generation are beginning to push back and become more discerning about the hype: they are beginning to question the need for new kit by asking more insightful and thought-provoking questions, and they are beginning to imagine a world that is not what the Tech Wizards are presenting. And they are the market of tomorrow.

One thing is certain - the promise of Smart Machines is exciting and a game changer for humanity.

With the Hypertext Conference in San Antonio came the beginning of a revolution in how we live ... but we are definitely not in Kansas anymore ...

Some useful references:

The Seismic Report - https://report2025.seismic.org/

Tomorrow's AI - https://www.tomorrows-ai.org/

Web Archives - https://iw3c2.org/

All the Web Conferences - https://archives.iw3c2.org/

Proceedings of ACM Web Conferences - https://dl.acm.org/doi/proceedings/10.1145/3696410

The 1991 Hypertext Conference Trip Report, San Antonio, Texas, December 1991 - Lynda Hardman - https://dlnext.acm.org/doi/pdf/10.1145/134421.134438

An Analysis of Xanadu, Hypertext 1991 - https://dlnext.acm.org/doi/pdf/10.1145/122974.122979

Tim Berners-Lee: The Future of the Web (WWW94) - https://videos.cern.ch/record/2671957

Ted Nelson and Xanadu - https://xanadu.com/zigzag/, https://www.xanadu.com.au/ted/, http://ted.hyperland.com/

Xanadu (video) - https://www.youtube.com/watch?v=JN1IBkAcJ1E

We are definitely not in Kansas any more

Creative Commons CC BY-NC-SA: This license allows reusers to distribute, remix, adapt, and build upon the material in any medium or format for noncommercial purposes only, and only so long as attribution is given to the creator. If you remix, adapt, or build upon the material, you must license the modified material under identical terms.

CC BY-NC-SA includes the following elements:

BY - Credit must be given to the creator

NC - Only noncommercial uses of the work are permitted

SA - Adaptations must be shared under the same terms