Welcome back to Wulfhūs, Seeker! Join me as we delve into the phenomenon of Technological Singularity in a rapidly evolving world. As is my way, please see the following questions we will examine in this post:

  • What is Technological Singularity?
  • Where did the idea of technological singularity come from?
  • What are a few of the pros and cons of a world driven by technological singularity, and why is it noteworthy to examine?
  • What are a few different examples of technological singularity scenarios?

There is much to cover, so without further ado, let’s begin.

What is Technological Singularity, and where did the idea come from?

The idea of technological singularity was first pioneered by Hungarian-American mathematician John von Neumann in the 1950s. The shorthand term Singularity was popularized in 1983 by American science-fiction author and professor Vernor Vinge, and gained further traction with an essay he wrote in 1993: The Coming Technological Singularity (Wikipedia.com).

Technological Singularity, or simply the Singularity, is defined as: a theoretical future event at which computer intelligence surpasses that of humans (Builtin.com). In a mathematical context, a singularity refers to a point that isn’t well defined and whose behavior is unpredictable. The mathematical abstraction carries over: it is impossible to know what to expect from a point in time at which our world is run by Artificial Intelligence (AI) – the only safe prediction about such a world is that it will be unpredictable.

Before we jump into pros and cons of such a conceptual world, I’d like to take a moment to discuss a concept known as the Kardashev Scale.

First outlined in the 1964 essay Transmission of Information by Extraterrestrial Civilisations by Soviet astronomer Nikolai Kardashev, the Kardashev Scale offers a way to measure a civilisation’s technological advancement. Earth has not yet reached even Type I on this scale, which originally had three levels (a fourth was proposed later):

  • Type I: A civilisation can harness all of the energy available on its world and store it for consumption – hypothetically including energy from natural events (volcanoes, earthquakes, etc.).
  • Type II: A civilisation can harness all of the energy available from its star and can store it for consumption.
  • Type III: A civilisation can harness all of the energy available from its galaxy (including every star, black holes, etc.) and store it for consumption.
  • Type IV: A civilisation can harness all of the energy available in the entire universe and store it for consumption.
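The types above are discrete, but Carl Sagan later proposed a continuous interpolation of the scale based on a civilisation’s total power use, which is how figures like Earth’s current rating are usually calculated. A minimal sketch of that formula (the present-day power figure for humanity is an approximation, not from the original post):

```python
import math

def kardashev_rating(power_watts: float) -> float:
    """Carl Sagan's continuous interpolation of the Kardashev Scale:
    K = (log10(P) - 6) / 10, where P is power harnessed in watts.
    Type I ~ 1e16 W, Type II ~ 1e26 W, Type III ~ 1e36 W."""
    return (math.log10(power_watts) - 6) / 10

# Humanity currently uses very roughly 2e13 W (an assumed ballpark figure),
# which places Earth below Type I on the scale.
print(round(kardashev_rating(2e13), 2))  # roughly 0.73
```

Each step up the scale is ten orders of magnitude in power, which is why Earth sits well short of Type I despite modern industrial output.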

Since the concept was first developed in 1964, various individuals have attempted to revamp it so that Earth could register on the scale. In 2011, American physicist Michio Kaku proposed the possibility of a Type IV civilisation in his book Physics of the Future, which also laid out his speculations about technology by 2100 – developments in medicine, computing, artificial intelligence, nanotechnology, and energy consumption.

So where does Technological Singularity relate to the Kardashev Scale?

A valid question. My take: a technological singularity would take hold, at its best capacity and efficiency, around the beginning of Type I on the Kardashev Scale, with AI assisting humanity in harnessing what is needed to work efficiently at that capacity. Granted, that assumes it would be a means to advancement and a positive outcome.

But what would be some of the pros and cons or ethical concerns encompassing such an undetermined point in our future?

One of the biggest concerns surrounding the Singularity is ethical. Incorporating AI at such a point in the future may do more harm than good – at least according to what I have learned from speaking with others about the idea, and from what I have read online. The other common concern is whether humanity is even ready for such a change. The ethical worry comes down to this: a Singularity would mean AI holds a decisive advantage over human intellect, with the potential to harm human rights. That would be the worst-case scenario. But who’s to say that an AI advanced past human intellect wouldn’t still take human rights into consideration? There is always the possibility that AI in a technologically advanced “Singularity” society would not be an enemy of humans and their rights. That would be the hope, at least. These thoughts address just a few of the pros and cons of a Singularity society – the list is endless, mostly because of a simple fact of the matter: we do not know what we do not know, and there is much yet to be learned about such a phenomenon.

Now, where does society stand in terms of advancements in S.T.E.M. as it relates to a point of Singularity?

I recently came across an article discussing The Singularity Is Nearer, a book by scientist and futurist Ray Kurzweil published this July, and a prediction he makes in it. The article summarizes that prediction, and the two hot topics it raises are nanobots and the integration of AI into our daily lives. Advancements in AI have become more and more prevalent as time has gone on, and it seems we are indeed drawing closer to a level of integration at which AI becomes an even larger part of our daily lives. A few prime examples nowadays are ChatGPT, MetaAI, Microsoft Copilot, and Gemini. There are several other noteworthy AI tools out there, but these are the four I find myself using most each day. On the matter of nanobots, or nanorobotics, I came across another writer’s blog found here describing what nanorobotics is and what kinds of trends are emerging in the field.

The aforementioned topics are two large examples of what a technological singularity may look like to us in the future, though other factors are at play as well. I’d like to close this point with a quote featured in Forbes in 2020:

“For more than 250 years, the fundamental drivers of economic growth have been technological innovations. The most important of these are what economists call general-purpose technologies — a category that includes the steam engine, electricity, and the internal combustion engine. The most important general-purpose technology of our era is artificial intelligence, particularly machine learning.”

– Erik Brynjolfsson and Andrew McAfee, 2018

In conclusion, the phenomenon of technological singularity is a dense, controversial topic within the realm of S.T.E.M. – not only is humanity inexperienced with the phenomenon, but it presents a slew of ethical and practical concerns.

One of the most recognized fears the human brain presents to our consciousness is the fear of the unknown. Because a technological singularity is still such a new concept, and because we know so little of the implications of such an unknown point in the future, we are within our rights to be skeptical and fearful of such an advancement in technology. Additionally, it is fair to be concerned about what could come to be should such advanced technology fall into the wrong hands.

I, myself, am open-minded and look forward to what the future has in store – even in a singularity-fueled world – though I remain moderately skeptical given how much is yet to be discovered, understood, and comprehended about such a phenomenon.

Thanks for coming by once more, and feel free to engage in discussion about the phenomenon in the comments below.

Until next time,

– E.K.

The Wandering Wolf