
Dr. John Lennox discusses AI, Yuval Harari and possibly the most dangerous of them all – longtermism



John Lennox, a renowned Oxford Mathematician and author of ‘2084: Artificial Intelligence and the Future of Humanity’, joined Larry Taunton. They discussed artificial intelligence (“AI”) and its ethical implications. Dr. Lennox stressed that even in its current state, AI presents pressing ethical concerns, particularly in privacy, surveillance and disinformation.

He underscores the alarming growth of deepfake technology, which can manipulate audio and video to convincingly impersonate people, leading to the spread of false information. This, he argued, poses a substantial challenge in discerning authenticity in public discourse, especially within political contexts.


What is AI?

The easy way to look at it is there are essentially two very different strands, Dr. Lennox explained.

The first is narrow artificial intelligence (“AI”). All modern AI systems fall into this category. “[It’s] the AI that’s up and running today and we’re very familiar with it,” Dr. Lennox said. The second is artificial general intelligence (“AGI”).

“The word ‘narrow’ refers to the fact that the narrow AI system is a system that does one and only one thing that normally requires human intelligence. It’s not intelligent itself; it’s simply a computer and a database and an algorithm for filtering certain things out of that database,” he told Taunton. He emphasised that the word “artificial” in the phrase “artificial intelligence” is meant literally: it’s not real intelligence, it only simulates intelligence.
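The “computer plus database plus filtering algorithm” picture of narrow AI can be sketched in a few lines. This is a toy illustration only; the data and function names are hypothetical and stand in for any single-task system:

```python
# Toy illustration of narrow AI as "a database plus a filtering algorithm".
# The "database" is a list of records; the "algorithm" filters them by tag.
# All data and names here are hypothetical, purely for illustration.

records = [
    {"title": "Chess opening guide", "tags": ["chess", "strategy"]},
    {"title": "Weather almanac",     "tags": ["weather", "climate"]},
    {"title": "Endgame tactics",     "tags": ["chess", "tactics"]},
]

def narrow_ai_filter(query_tag, database):
    """Return titles matching one tag -- a single narrow task, no understanding."""
    return [r["title"] for r in database if query_tag in r["tags"]]

print(narrow_ai_filter("chess", records))
# -> ['Chess opening guide', 'Endgame tactics']
```

The system “answers” chess queries without any notion of what chess is, which is the sense in which, on Dr. Lennox’s account, it simulates rather than possesses intelligence.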

“That’s very important,” Dr. Lennox said, “because that brings you back to [Alan] Turing, who’s one of the fathers of this thing, and he talked about ‘The Imitation Game’ – we’re imitating intelligence.”

Alan Turing’s Imitation Game, commonly known as the ‘Turing Test’, is fundamental to the science of AI. Turing pioneered machine learning during the 1940s and 1950s. He introduced the test in his 1950 paper ‘Computing Machinery and Intelligence’, written while he was at the University of Manchester. In the paper, Turing proposed a twist on a party game called the imitation game.

Further reading: Turing Test, Tech Target

AGI is where the science fiction tends to come in very rapidly, Dr. Lennox explained. “[It] is the idea that we can produce some sort of system, technological system, that can do everything and more than a human intelligence can do. And, that’s where we get into the realm of talking about super intelligence and so on.”

“They’re trying to do this in two ways. Firstly, either to take existing human intelligence and enhance it in different ways using, for example, drugs or cybernetic technologies that implant all kinds of things into the human brain. Or [secondly], there’s a different line of research which is trying to create some kind of super intelligence from scratch because of the problem of biology [that] living things degenerate and they die.”

Moral and Ethical Difficulties

What we discovered, Dr. Lennox explained, is that even with AI that’s operating at the moment there are great pluses but there are also negatives. The negatives are the ethical and moral issues.

“Narrow AI is what’s used in facial recognition which can pick out a criminal face in a football crowd or can be used to suppress a minority ethnic community in a part of the world,” he said.

Disinformation and Influencing People

Another example of narrow AI is predictive text on our phones; ChatGPT is a sophisticated version of the same idea.
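The link between predictive text and ChatGPT can be sketched with a toy next-word predictor. This is a minimal bigram model over a made-up corpus; real systems are vastly larger and more sophisticated, but they rest on the same statistical principle of predicting the next word from what came before:

```python
# Minimal sketch of predictive text: a bigram model that suggests the most
# frequent next word seen after the current word in a tiny training corpus.
# The corpus is invented for illustration.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which in the corpus.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Suggest the most common word observed after `word`, or None if unseen."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" ("cat" follows "the" twice, "mat" once)
```

The model “knows” nothing about cats or mats; it only reproduces statistical patterns in what it was given, which is the sense of “it only knows what it’s told” discussed below.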

“Many people are finding [ChatGPT] very useful, for example, [in] researching a new topic,” Dr. Lennox said, adding a warning that you always need to check what ChatGPT produces because it can make things up and tell you things that are not true. “You need always to check it,” he said.

“The trouble is it’s also going to be enormously, [and] already is being, used for spreading disinformation and influencing people – way beyond what they realise. And that’s because it only knows what it’s told.” By that, Dr. Lennox means it has been programmed to give particular responses to particular things.  “It doesn’t know anything, it has no sense that we have of knowledge,” he said.

AI is not the development of consciousness, as some people may imply. “There’s a real problem with language,” Dr. Lennox explained. “Even the very words ‘artificial intelligence’, ‘deep learning’, all this kind of thing give you the impression, and give the general public very much the impression, that we’re talking about some conscious entity when it is no such thing.” He re-emphasised that the word “artificial” in “artificial intelligence” should be taken literally: it really is artificial.

Harvesting Information

The primary ethical problems of AI vary because AI is in virtually every aspect of our lives, Dr. Lennox said.  One ethical problem is the harvesting of our information.

He used the example of when we make a purchase on Amazon.  “What many of us don’t realise is that the information that we give voluntarily is being harvested,” he said.

In her book, ‘The Age of Surveillance Capitalism’, Shoshana Zuboff makes the point, and it’s a serious point, that “this is a multi-million if not billion-dollar industry [when] these companies that are harvesting the information are sending your information on to third parties without your permission,” he said.

Further reading: High tech is watching you, The Harvard Gazette, 4 March 2019


“Then there are the ethical problems of what I call surveillance communism,” Dr. Lennox said.

“The kind of thing you find among the Uyghurs in China where facial recognition technologies, that are rightly used by the police in some of our countries to catch criminals, for which we’re thankful, are being used to effect a surveillance network of which George Orwell would have been proud – in the sense that it really is 1984 plus plus plus – and the [Uyghur] population, [it’s] unbelievable the way in which they have been suppressed and the information is being uploaded and so on.”

In China, facial recognition technology is not used simply to identify specific faces. It is being used to weed out the Uyghur population: it has been designed to identify people of that ethnic group, or people who might belong to it. So, someone walking around Beijing may be tagged by the facial recognition system as a possible Uyghur, and the police will then follow it up. It’s a sinister use of AI and facial recognition to suppress and tyrannise a whole ethnic group, if not the entire population.

But it’s more than that, Dr. Lennox noted.  It’s used for general social control.

“The Chinese, I understand, have a social credit system. And we’re beginning to get that kind of thing in the west … The number of CCTV cameras in China is just mind-boggling – one for every two or three people – and you’re on camera all the time and they’re watching.  And if they see some little misdemeanour – like throwing trash on the roadway or having a conversation with a foreigner or something that goes as a black mark against you and, in the end, that adds up – and you can find that you can’t get into your favourite restaurant or you can’t book a holiday, you can’t buy a new car.”


WhatsApp, Dr. Lennox warned, is an example of an intelligence-gathering application. “They haven’t created [WhatsApp] … to facilitate your conversation globally, it is to train artificial intelligence and to capture personal data [and] information about you.”

If I have nothing to hide, why should I be worried?

Many people will not be concerned about the creeping surveillance and monitoring of our everyday lives using AI. “That assumes that the people that are watching you are friendly, kind, benign and benevolent,” Dr. Lennox said. But if you are under a totalitarian regime, you’re in a very different situation.

“[You might] say I have nothing to hide because you’re morally upright but as far as the principles and regulations and ideology of the government is concerned, you might be very much a suspicious person,” he said.  “It’s a very superficial response to say ‘I have nothing to hide’.”

Not long ago, Dr. Lennox discussed AI with one of the leading people in the field, who suggested that research should be paused – that GPT-5, for example, shouldn’t be developed and that we should pause for six months. When asked why, the issue he raised was what is called the “control problem.”

“I think it was Paula Boddington … [who] made a very perceptive remark and said: ‘You know the problem with the original creation was that God lost control of the creatures he’d made in his own image and we’re liable to do the very same thing’. And that’s what’s scaring people. It’s the control problem. They don’t understand, at least they say they don’t understand, what’s going on in even the kind of AI that’s involved in GPT-4,” Dr. Lennox said.

Taunton mentioned that he had attended the World Economic Forum meeting earlier this year and had a conversation with one of the presenters.  The presenter, Taunton said, was an expert on AI. He told Taunton that while governments express their concerns about artificial intelligence, what they really mean is their concerns about what other governments are doing because they are not putting the brakes on artificial intelligence at all.

The presenter went on to tell Taunton: “No one seems to really understand what they’re creating.  The people who are paying for it, the politicians who are behind it are the ones who understand the least [about] what artificial intelligence is.”

“And then he went on to tell me,” Taunton said, “about – this was not his description of it but it’s mine – a kind of terminator-like warrior bot that he began to describe to me. And he said: ‘I am watching the development of these robots’. And he said: ‘When these things hunt you, they kill you’. And he said: ‘When they shoot at you, they never miss’. And he said: ‘This is the future of warfare but it’s also the future of surveillance’.”

Thoughts About Yuval Harari

Firstly, Harari is not a scientist; he’s a historian. “He concerns me,” Dr. Lennox said, “because of his very widespread influence. He really is an influencer … What he actually has to say concerns me more because of its inaccuracy, and his reading of history seems, to me, to be very strange.”

In Harari’s second book, ‘Homo Deus: A Brief History of Tomorrow’, “Homo Deus” refers to “the god man” or “the man who is god.”

In this book, Dr. Lennox explained, “Harari says that there are two major agenda items for the 21st century.  The first is to solve the technological problem of human death. He regards it as a technical problem and a technical problem with technical solutions. And then secondly, to enhance human happiness.”

“The idea of solving the problem of death is a very ancient one,” he added.

“[On] enhancing human happiness, his target, and this is more or less a quote, is ‘to turn Homo sapiens into Homo Deus’. In other words, turn humans into gods.”

“This is a very ancient thing. And once I noticed that, I felt right we’re back in the early chapters of Genesis [of the Bible]. And that’s hugely important culturally because the idea of humans becoming god has permeated history. And it’s been behind many dictatorships in ancient Babylon, Assyria, Egypt, and right into modern times we find people essentially playing god. And Harari is encouraging people to think that they can become gods by simply solving a technical problem and then possibly upgrading themselves.”

“Now as a Christian, I really have got something to say about that,”  Dr. Lennox said.

“When people tell me this, as they do, I say: ‘You’re too late’.

“And they look at me and say: ‘What do you mean we’re too late?  We haven’t got there yet’.

“’Oh’, I say, ‘Yes you haven’t got there yet but you’re too late’.

“’Why am I too late?’

“’Well’, I say, ‘the problem of physical death was solved 20 centuries ago when Jesus Christ was raised by the power of God from the dead. And as for enhancing human happiness and uploading ourselves into some higher form, that’s also been solved because’ – and this gives me a real opportunity to explain Christianity against the background of the promises that the transhumanists like Harari are offering – I say ‘look it would be good for you, therefore, to listen to Christianity which is, actually, a lot more evidence-based than Harari’s transhumanist promises when it tells us that everybody who faces the mess that they’ve made of their own lives, and perhaps those of others’.

“And I simply point out that there’s no road to Utopia that can bypass the problem of human moral failure and sin, and that’s the mistake that most dictators have made.  But if they’re prepared to repent of that and face it and trust Christ as Savior and Lord, then He promises them that He’ll raise them from the dead and upload them into the world to come.”

Longtermism is the Most Dangerous Worldview

Dr. Lennox doesn’t address longtermism in his book but it is an idea that scares him. A lot of it emanates from the University of Oxford.

Over the past two decades, a small group of theorists mostly based in Oxford have been busy working out the details of a new moral worldview called longtermism, which emphasises how our actions affect the very long-term future of the universe – thousands, millions, billions, and even trillions of years from now. The term “longtermism” was coined in around 2017 by Oxford philosophers William MacAskill and Toby Ord.  But longtermism has its roots in the work of Nick Bostrom, who founded the grandiosely named Future of Humanity Institute (“FHI”) in 2005, and Nick Beckstead, a research associate at FHI and a programme officer at Open Philanthropy.

Further reading: Four technocrats are pursuing projects that are an existential threat to the world. The Exposé, 3 September 2023

Longtermism is one of the main “cause areas” of the so-called “effective altruism” movement, which was introduced by Ord in around 2011 and in 2021 boasted of having a mind-boggling $46 billion in committed funding.

Summarising the longtermism ideology, Dr. Lennox said: “We would love the world to be a better place for our children and our grandchildren and that’s the root idea of longtermism. But now come the transhumanists and say:

“‘We’re looking forward to the day when we can create beings – possibly cyborgs, a mixture of human and mechanical technological beings – and they’re going to be billions and billions of them. Therefore, we ought to invest our wealth today, not in solving problems of world poverty but the money ought to be pushed in the direction of the intellectual and engineering elite to preserve these billions and billions of putative individuals that we’re going to create in the future’.

“And of course, that’s horrific.

“What started as an idea that sounded very altruistic, and actually runs under the name now of effective altruism, has become this longtermism where some people are actually saying: ‘Well look, don’t bother about the two-thirds of the world and the poverty, invest the money in the people that are going to be able to develop AI, develop new kinds of beings – there will be so many more of those that all the rest are expendable’.

“Well, that is absolutely horrific. And yet, it is seriously being suggested.”

Émile Torres, a former longtermist who published an entire book six years ago in defence of the general idea, agrees. Torres wrote in a 2021 essay: “Longtermism might be one of the most influential ideologies that few people outside of elite universities and Silicon Valley have ever heard about. I believe this needs to change because … I have come to see this worldview as quite possibly the most dangerous secular belief system in the world today.”
