Artificial Intelligence: The Great AI Debate!…

colin byrne
Nov 24, 2023


Early Humans sitting around a Fire looking to the heavens!
Created with perchance.org, AI Photo Generator, “Caveman looking to the stars”

Quote: Nikola Tesla.

“The spread of civilization may be likened to a fire; first, a feeble spark, next a flickering flame, then a mighty blaze, ever increasing in speed and power”.

The Great AI Debate “Artificial Intelligence: The Spark Igniting a Creative Revolution, or The Bonfire of our Humanities?”

Harnessing the potential of fire empowered early humans to make massive evolutionary leaps. Luckily for us, fire has an inherent learning mechanism, teaching our ancestors plenty of painful lessons, with many burned fingers along the way.

Playing with fire initially followed a trial-and-error approach before evolving into more sophisticated reasoning, such as weighing risk against reward.

When you think about it, the stories of fire and Artificial Intelligence (AI) share similarities, e.g.

  • Learning from mistakes and rewarding positive outcomes are examples of core AI training techniques (a toy sketch of this reward-driven loop follows below).
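To make that concrete, here is a tiny, hypothetical sketch of reward-driven trial and error (an "epsilon-greedy" strategy over made-up fire-starting actions; the actions and success probabilities are invented purely for illustration):

```python
import random

# Made-up fire-starting actions and their (hidden) chance of success.
true_success = {"rub sticks": 0.8, "strike flint": 0.6, "wait for lightning": 0.1}

estimates = {action: 0.0 for action in true_success}  # learned value of each action
counts = {action: 0 for action in true_success}
epsilon = 0.1  # how often we explore a random action instead of the best-known one

for _ in range(1000):
    # Trial and error: mostly exploit the best estimate, occasionally explore.
    if random.random() < epsilon:
        action = random.choice(list(true_success))
    else:
        action = max(estimates, key=estimates.get)

    # Reward of 1 for a successful fire, 0 for burned fingers.
    reward = 1 if random.random() < true_success[action] else 0

    # Update the running average estimate for the chosen action.
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # the most rewarding strategy rises to the top
```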

Social discourse is currently full of commentary about AI, especially on the risks its growing presence poses to our lives. It tends to be a polarizing debate, with some championing unbridled AI adoption, while others believe AI threatens our very existence.

With so many people feeling qualified to weigh in on the topic and the hype cycle around AI just kicking into gear, the “Great AI Debate” might be enlightened by drawing parallels to our past, to a previous revolutionary technology called fire.

Side note!

“Hey Siri, Hey Google, Hey Alexa: Did you know the term AI has been around since the 1950s and didn’t just drop in November 2022 with the release of OpenAI’s ChatGPT?”

Caveman on fire runs screaming for help.
Image from The New Yorker, Artist Pia Guerra.

A…i.. Brief History of Fire

The following scenarios depicting the harnessing of fire were generated by a Gen-AI assistant.

Several contrasting (pros & cons) images depicting Fire and its evolution.
Created with perchance.org, AI Photo Generator “Various Fire Scenarios”

What thoughts do these contrasting images above provoke?

e.g.

  • Electricity generation vs. Fossil fuel effects on global climate!

It’s not always easy to distinguish the positives from the negatives, especially when outcomes may not be immediately obvious.

Hindsight is 20/20 Vision!

Whether harnessing the power of fire or AI, the Law of Unintended Consequences applies.

Quote: Terminator movie.

  • “A Skynet funding bill is passed in the United States Congress, and the system goes online on August 4, 1997, removing human decisions from strategic defense.”

A far-fetched scenario?

  • 2026: Your organisation’s recruitment function is handed over to an AI solution. 2028: You look up from your desk only to find that all your office colleagues share the same gender and ethnic background.
  • Biased datasets pose particular challenges for AI models!
    – “Biased dataset → Biased training → Biased reasoning → Biased outcomes!” (a toy sketch of this chain follows below)
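As a minimal illustration of that chain, here is a deliberately oversimplified, hypothetical sketch (the "historical hires" data, the groups, and the frequency-based scoring are all invented for illustration; no real recruitment system works this crudely):

```python
from collections import Counter

# Hypothetical, deliberately skewed "historical hires" data:
# 90 past hires from group A, only 10 from group B.
historical_hires = ["A"] * 90 + ["B"] * 10

# A naive "model" that simply learns the historical hiring rate per group.
hire_rate = {g: c / len(historical_hires) for g, c in Counter(historical_hires).items()}

def score_candidate(group: str) -> float:
    """Score a candidate by how often their group was hired in the past."""
    return hire_rate.get(group, 0.0)

# A perfectly balanced pool of new applicants...
applicants = ["A"] * 50 + ["B"] * 50

# ...still produces a heavily skewed shortlist.
shortlist = sorted(applicants, key=score_candidate, reverse=True)[:10]
print(Counter(shortlist))  # Counter({'A': 10}): biased data in, biased outcomes out
```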

The point here is that the application of technology may result in both positive and negative consequences, e.g. the same AI algorithm may be used to:

  • identify & classify cancer cells in medical imagery, or
  • identify & target tanks on a battlefield,

hence AI is often cast as the proverbial “double-edged sword!”.

Risk Perception.

Over time, we have built up a vocabulary to articulate the risks of fire, with terms such as,

  • wildfire, firestorm, inferno, firebrand, firetrap etc.

and risk mitigation with terms such as,

  • fire extinguisher, fire drill, firewall, fire break etc.

Bubbling into our current lexicon from the rich stew of brewing AI technologies, terms like,

  • bias, deepfake, copyright, plagiarism, hallucinations, explainability, job displacement, disinformation,

are coming to define the current perceived risks associated with AI.

As we embed more AI technology into our daily tasks, both the potential for, and the range of, risks also increase. Anomalies in AI solutions will inevitably impact all our lives to a greater or lesser extent, e.g.

  • Erroneous AI recommendations for movies, employee selection, or medical diagnostics, each with vastly different ramifications!

What we don’t hear in the great AI debate, however, is mention of less obvious latent risks, such as the potential for complacency and over-reliance on this technology.

Too much of a good thing?

“Hey AI Assistant: Can you go to work for me today ?”

Imbuing AI solutions with humanity’s knowledge (data), assimilated over millennia of learning, is truly an amazing feat of human ingenuity. Pooling our collective knowledge into AI models presents enormous positive potential, helping us navigate our daily lives, e.g.

  • Spell checking, next-word prediction, spam detection, sentiment analysis, photo enhancement, photo augmentation, speech analysis, language translation, mathematics assistance, puzzle reasoning, recommender assistants, object detection, home automation, navigation (map reading, route selection), voice/facial recognition, autonomous robots (vacuum cleaners, lawn mowers, restaurant assistance), autonomous vehicles, creativity (reading/writing/dictating text, writing scripts, music generation, image generation, video generation, writing computer code), etc.

Sure, individually these AI tools help us to “navigate our daily tasks”, but what about their combined use over time in “managing our lives”?

Did you ever have a frustrating/bad technology day? e.g.

  • Misplaced smart phone
  • Google Maps very slow to update
  • Spell/grammar checker fails

How did the lack of AI technology affect your day?

A family of cave people are sitting in their cave in front of an unlit stack of logs. Another cave person is standing in the cave entrance.
Image from The New Yorker, Artist David Sipress.

Will a seemingly benign incremental adoption of various AI technologies cumulatively decrease our capacity to think for ourselves?

On the one hand, the application of these omniscient, God-like AI models can artificially elevate our “intellectual confidence”; on the other hand, could they potentially “diminish” our abilities for self-reasoning in the long term?

In fairness, it’s not purely an AI issue but rather one of technology in general; still, there’s little doubting the potential of AI to increase the volume and velocity of these latent risks, especially given the advent of Generative AI (Gen-AI) technologies.

Side note!

“Did you know the term Latent is associated with a foundational principle of many Gen-AI models, with Latent space¹ and Latent variables being leveraged to generate specific contextual content?”
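To give a flavour of what “latent space” means, here is a toy, hypothetical sketch: a small random "decoder" stands in for a real generative model, mapping 2-D latent vectors to outputs, and moving between two latent points blends the results (real Gen-AI models do something conceptually similar, but at vastly larger scale):

```python
import numpy as np

# A toy stand-in for a generative model's decoder: it maps a small
# "latent" vector to a larger output vector (think of it as 16 "pixels").
rng = np.random.default_rng(0)
decoder_weights = rng.normal(size=(2, 16))  # 2-D latent space -> 16 outputs

def decode(latent: np.ndarray) -> np.ndarray:
    """Turn a latent vector into generated 'content'."""
    return np.tanh(latent @ decoder_weights)

# Two points in latent space decode to two different pieces of content...
z_a = np.array([1.0, -0.5])
z_b = np.array([-1.0, 0.8])

# ...and interpolating between them in latent space blends the outputs,
# which is loosely how latent variables are steered to generate
# specific contextual content.
for alpha in (0.0, 0.5, 1.0):
    z = (1 - alpha) * z_a + alpha * z_b
    print(alpha, decode(z)[:4].round(2))
```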

Safeguards.

Humans are inherently curious creatures, so we can be forgiven for acting like the proverbial “Moths to the flame” when it comes to new technology.

Attempting to protect ourselves from ourselves, historically we’ve put in place individual and societal safeguards, e.g. laws governing the use of,

  • Fire, weapons, medicines, chemicals, finance, energy, aviation etc…

Legislating for the use of technology is extremely challenging though, especially in the field of AI where laws are often outpaced by new innovations.

  • EU AI Act²: laws aimed at regulating AI systems, plus a voluntary code of conduct, for Europe.
  • USA AI Executive Order³: safe, secure, and trustworthy development and use of AI for US federal agencies. A notable feature is the requirement to designate Chief AI Officers (CAIOs), mandated to modernise processes with AI, ensure that AI is used with ethics and governance in mind, and build an AI-first culture.

It will be interesting to see how the CAIO function plays out in the private sector, with its balancing act between championing AI and governing AI, i.e.

  • Fire Stoker vs. Fire Marshal!

e.g. institutional policies for the use of Gen-AI tools differ: some organisations are embracing them, while others are slow to adopt for fear of misuse and copyright implications⁴.

Side Note: Amara’s Law⁵

“States that people tend to overestimate the short-term impact of new technologies while underestimating their long-term effects. It’s a reminder to consider the broader context and timeline when predicting technological advancements.”

Two cavemen asking Google what fire is! Over-reliance on technology?
“Becoming too dependent on technology?”, Artist: Eilís Byrne (Gen Z)

Concluding Comments.

Quote: (Descartes, 1637)

The phrase “I think, therefore I am”, by René Descartes, implies that thought is the defining human characteristic. The advent of writing allowed early societies to collaborate and share these thoughts. Humanity and the society we live in today are a manifestation of these collaborations and actions.

In the same way that the pen enabled humans to share their thoughts effectively, AI, and in particular Gen-AI imbued with the corpus of this knowledge, offers heretofore unseen risks and rewards.

In the 1970s and ’80s, our parents and teachers warned of the risks calculators posed, fearing they would diminish individuals’ mental arithmetic capabilities.

The Daily News read: “Math Teachers Protest Against Calculator Use” by Jill Lawrence
  • There is no doubting the benefits these devices have brought to our society, but at the same time, were these concerns justified?

Does our collective intelligence, burning brightly inside these amazing AI models, enlighten humanity’s current generations (Gen X, Gen Z, etc.), while casting a pervasive shadow of “over-confidence/reliance” upon succeeding ones (Gen Alpha, Gen A+i)?

It can be easy to anthropomorphize these AI systems, treating them as if they have minds of their own, but in reality they possess our collective knowledge. I’m sanguine about the prospects for AI and how society will adapt to master its use, hopefully without too many burned fingers!

I don’t foresee humanity being subjugated by robotic overlords anytime soon, but we need to strike a balance when ceding autonomy to AI and consider the risks and rewards with ample perspective!

What will our humanity and society of tomorrow look like, after decades of AI assistance?

Hey Reader:

The Great AI Debate!…

That’s what I think!

More importantly, what do you think?

Interested to see what Gen-AI thinks of this article?

Click here for an AI-generated audio podcast with two podcasters discussing topics raised in this article.

Click here for an AI-generated text review of this article.

Final Note:

For people worried about the future and how AI may adversely affect their lives and livelihoods, I offer this anecdote:

From the 1960s, NASA’s perspective on humans and space flight:

“Humans are the lowest-cost, 150-pound, all-purpose computer system, which can be mass produced by unskilled labour.”

Notes & Citations.

About the Author:

Colin Byrne is an IT professional with over 25 years’ industry experience, a BSc graduate (2000) in Computing Science from the University of Ulster and an MSc postgraduate (2020) in Artificial Intelligence from the University of Limerick.

References:

[1] Asperti, A. (2023) “Generative models and their latent space”. Available at: https://shorturl.at/koK09

[2] European Movement Ireland (2023) “What is the EU Artificial Intelligence Act?”. Available at: https://shorturl.at/mvCQT

[3] WhiteHouse.gov (2023) “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”. Available at: https://shorturl.at/noKW1

[4] The New York Times (2023) “The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work”. Available at: https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html

[5] thevirtulab.com (2022) “Amara’s Law”. Available at: https://thevirtulab.com/what-is-amaras-law/

Check out another of my articles: Ethical AI, It’s Personal!
