Why Concerns Surrounding Artificial Intelligence Are Completely Justified
If you follow my content, you know that I frequently comment on and even interview guests regarding their views on Artificial Intelligence. It’s deeply fascinating to me on many levels.
I think it is most interesting philosophically as it inclines one in the direction of those most basic value-driven queries: Who am I? Where do I come from? What is my purpose? Am I a product of mindless, unguided evolutionary processes or is there something transcendent about me? What is consciousness?
It doesn’t take immensely powerful skills of observation to notice certain similarities between us and our AI productions, which can engender some insecurities echoed in those deep questions above.
So recently, I read and reviewed a book by a very prominent scientist, philosopher, and, yes, theologian: Professor Emeritus John Lennox of Oxford. He is one of the most recognizable Christian apologists in the media today. I should mention for the record that, while I agree with Lennox on many of his views, I am personally an atheist. That has to be stated.
John Lennox has debated the likes of Christopher Hitchens (in fact, he won both of his public debates with ‘Hitch’, which is extremely noteworthy). He has also gone head to head with Richard Dawkins, Lawrence Krauss, Peter Atkins, and Peter Singer, to mention only a few names.
Anyway, this book of his that I reviewed is called 2084. Today, I wanted to make a series of articles based on that work and sort of conduct a dialog with it. This is likely something you will see me do quite a bit of in the future.
So to begin, I think it’s important to map out the terrain much the way Professor Lennox did in his first chapter. Why is AI alarmism, or perhaps even just a general interest, completely justified? Well, I’m not going to sugar-coat it: Dr. Lennox takes a much bleaker, more apocalyptic posture toward the subject than I do.
I very much respect his decades of wisdom and carefully honed reasoning, which perhaps point him in this direction. He makes no secret of the influence that bestselling works like Sapiens and Homo Deus by Yuval Noah Harari have had on his conclusions. He also makes extensive reference to George Orwell’s 1984 (to which the tribute in the title 2084 is palpable), Aldous Huxley’s Brave New World, and, last but not least, Dan Brown’s novel Origin.
These continual allusions are poignant. As we will see throughout my discussions of this book, Sapiens is relevant because the acclaimed historian Yuval Harari more or less documents the evolutionary development of our species, an account that John Lennox feels goes too far. Lennox, and I for that matter, are skeptical that mindless, unguided evolutionary processes account, in any definitive sense, for the emergence of biological life.
We agree in espousing what’s called micro-evolution, that is, the role genetic and environmental interactions play in producing obvious and demonstrable variation within species.
And much in line with his critique of Dan Brown’s Origin, the assertion that micro-evolution can explain the emergence of biological life, something Lennox regards as a mathematical impossibility by unguided means, is, simply put, an article of faith. As we shall see, this worldview can have profound implications for how we use and develop Artificial Intelligence systems. How so?
Well, I think I summarized that in my book review elsewhere on Medium: “What if we are just biological robots? There might be nothing to work a nerve over here. But what if there is something more to our nature? What if we’re not the net consequence of mindless, unguided evolutionary processes? How can our moral and aesthetic complexities be programmed into a hunk of metal and silicon? See how powerful metaphysical assumptions can be!”
That’s really the problem here, isn’t it? If consciousness is replicable through AI, there’s very little to worry about. If it isn’t, and John Lennox raises some redoubtable objections to that effect, then AI may well be the author of our own undoing. Enter George Orwell and Aldous Huxley. Both of their novels paint a dystopian picture of how such technology, instead of being a lovely tool harnessed to elevate humanity, a la Harari’s Homo Deus, could instead become the proverbial ‘tail that wags the dog.’
So this question is very much the centrepiece of Professor Lennox’s book. Are we capable of holding the reins on this undeniably potent and useful servant? Or will what’s unleashed through this immensely powerful medium end up being a merciless master?
Lennox even quotes the UK’s Astronomer Royal, Lord Rees, as saying the following, and it sums up the question perfectly. Rees opines, “We can have zero confidence that the dominant intelligences a few centuries hence will have any emotional resonance with us — even though they may have an algorithmic understanding of how we behaved.”
I also found Pope Francis’s comment from September 2019 thought-provoking. He warns, “If technological advancement became the cause of increasingly evident inequalities, it would not be true and real progress. If mankind’s so-called technological progress were to become an enemy of the common good, this would lead to an unfortunate regression to a form of barbarism dictated by the law of the strongest.”
“We can have zero confidence that the dominant intelligences a few centuries hence will have any emotional resonance with us — even though they may have an algorithmic understanding of how we behaved.” — Lord Rees, Astronomer Royal
As an atheist, I’m tempted to turn such an accusation back on the Catholic Church, which has not always focused its vast influence on the common good, and whose own history contains no shortage of barbarism, either. That’s beside the point, though. The Pope makes a valuable comment.
It might seem odd, as it did to me at first, to see people like the Pope and Professor Lennox remark on this subject. It doesn’t seem immediately relevant to their main vocations. But Lennox makes a good argument in the first chapter. He writes, “The implications are such that it is important that, for instance, philosophers, ethicists, theologians, cultural commentators, novelists, and artists get involved in the wider debate. After all, you do not need to be a nuclear physicist or climatologist in order to discuss the impact of nuclear energy or climate change.” And I mean, how true is that!
It is precisely our unique perspectives that enrich the conversation. As discussed earlier, the implications of naturalist and supernaturalist outlooks may lead one side to see specific dangers or merits in a technology that the other would be blind to.
“[I]t is important that, for instance, philosophers, ethicists, theologians, cultural commentators, novelists, and artists get involved in the wider debate.” — Prof. John Lennox
Okay, most of us have a working sense of what Artificial Intelligence is, but this article is getting a little lengthy. In another discussion we will lay out more precisely what is meant by the term AI. For now, I want to conclude with a couple of comments from the author.
He raises two technical issues and one larger, more pragmatic question. Let’s get into the technical issues involved in trying to produce artificial intelligence. Lennox writes:
“(1) Even if we knew the rules of human reasoning, how do we abstract from a physical situation to a more abstract formulation so that we can apply the general rules of reasoning? (2) How can a computer build up and hold an internal mental model of the real world? Think of how a blind person visualizes the world and reasons about it. Humans have the general purpose ability to visualize things and to reason about scenarios of objects and processes that exist only in our minds. This general purpose capability, which humans all have, is phenomenal; it is a key requirement for real intelligence, but it is fundamentally lacking in AI systems.”
“How can a computer build up and hold an internal mental model of the real world?” — Prof. John Lennox
These, I suppose, are just a couple of his distinctions between Artificial Intelligence and genuine intelligence or consciousness, something that can be said to have life, heart, soul, or mind. And so, to conclude this conversation, I will ask as Lennox does in his book, “How can an ethical dimension be built into an algorithm that is itself devoid of heart, soul, and mind?”
It’s a question to which I make zero pretence of knowing the answer. I don’t necessarily want to rule out the possibility that we could make something proto-conscious, but I tend to share the profound skepticism that Lennox details.
Anxiety is a justified reaction to the development and proliferation of AI systems, which could threaten our privacy, liberties, and survival. But these systems also push us to contemplate our origins more seriously, challenging many of the assumptions we take for granted in our materialist culture.