
Shepherds of the Singularity


STORY AT-A-GLANCE

Will artificial intelligence (AI) wipe out mankind? Could it create the “perfect” lethal bioweapon to decimate the population? Might it take over our weapons, or initiate cyberattacks on critical infrastructure, such as the electric grid?


According to a rapidly growing number of experts, any one of these hellish scenarios, and others, is entirely plausible unless we rein in the development and deployment of AI and start putting in some safeguards. The public also needs to temper expectations and realize that AI chatbots are still massively flawed and cannot be relied upon, no matter how “smart” they appear, or how much they berate you for doubting them.

George Orwell's Warning

The video at the top of this article features a snippet of one of the last interviews George Orwell gave before dying, in which he stated that his book, “1984,” which he described as a parody, could well come true, as this was the direction in which the world was going.

Today, it's clear that we haven't changed course, so the probability of “1984” becoming reality is now greater than ever. According to Orwell, there is only one way to ensure his dystopian vision won't come true, and that is by not letting it happen. “It depends on you,” he said.

As artificial general intelligence (AGI) gets nearer by the day, so do the final puzzle pieces of the technocratic, transhumanist dream nurtured by globalists for decades. They intend to create a world in which AI controls and subjugates the masses while they alone reap the benefits — wealth, power and life outside the control grid — and they will get it, unless we wise up and start looking ahead.

I, like many others, believe AI can be incredibly useful. But without strong guardrails and impeccable morals to guide it, AI can easily run amok and cause tremendous, and perhaps irreversible, damage. I recommend reading the Public Citizen report to get a better grasp of what we're facing, and what can be done about it.

Approaching the Singularity

“The singularity” is a hypothetical point in time where the growth of technology gets out of control and becomes irreversible, for better or worse. Many believe the singularity will involve AI becoming self-conscious and unmanageable by its creators, but that's not the only way the singularity could play out. Some believe the singularity is already here. In a June 11, 2023, New York Times article, tech reporter David Streitfeld wrote:

“AI is Silicon Valley's ultimate new product rollout: transcendence on demand. But there's a dark twist. It's as if tech companies introduced self-driving cars with the caveat that they could blow up before you got to Walmart.

‘The advent of artificial general intelligence is called the Singularity because it is so hard to predict what will happen after that,' Elon Musk ... told CNBC last month. He said he thought ‘an age of abundance' would result but there was ‘some chance' that it ‘destroys humanity.'

The biggest cheerleader for AI in the tech community is Sam Altman, chief executive of OpenAI, the start-up that prompted the current frenzy with its ChatGPT chatbot ... But he also says Mr. Musk ... might be right.

Mr. Altman signed an open letter last month released by the Center for AI Safety, a nonprofit organization, saying that ‘mitigating the risk of extinction from AI should be a global priority' that is right up there with ‘pandemics and nuclear war' ...

The innovation that feeds today's Singularity debate is the large language model, the type of AI system that powers chatbots ...

‘When you ask a question, these models interpret what it means, determine what its response should mean, then translate that back into words — if that's not a definition of general intelligence, what is?' said Jerry Kaplan, a longtime AI entrepreneur and the author of ‘Artificial Intelligence: What Everyone Needs to Know' ...

‘If this isn't ‘the Singularity,' it's certainly a singularity: a transformative technological step that is going to broadly accelerate a whole bunch of art, science and human knowledge — and create some problems,' he said ...

In Washington, London and Brussels, lawmakers are stirring to the opportunities and problems of AI and starting to talk about regulation. Mr. Altman is on a road show, seeking to deflect early criticism and to promote OpenAI as the shepherd of the Singularity.

This includes an openness to regulation, but exactly what that would look like is fuzzy ... ‘There's no one in the government who can get it right,' Eric Schmidt, Google's former chief executive, said in an interview ... arguing the case for AI self-regulation.”

Generative AI Automates Wide-Ranging Harms

Having the AI industry — which includes the military-industrial complex — police and regulate itself probably isn't a good idea, considering that profit and gaining advantage over wartime enemies are its primary driving factors. Both mindsets tend to put humanitarian concerns on the back burner, if they consider them at all. In an April 2023 report by Public Citizen, Rick Claypool and Cheyenne Hunt warn that the “rapid rush to deploy generative AI risks a wide array of automated harms.” As noted by consumer advocate Ralph Nader:

“Claypool is not engaging in hyperbole or horrible hypotheticals concerning chatbots controlling humanity. He is extrapolating from what is already starting to happen in almost every sector of our society ...

Claypool takes you through ‘real-world harms [that] the rush to release and monetize these tools can cause — and, in many cases, is already causing' ...

The various section titles of his report foreshadow the coming abuses: ‘Damaging Democracy,' ‘Consumer Concerns' (rip-offs and vast privacy surveillances), ‘Worsening Inequality,' ‘Undermining Worker Rights' (and jobs), and ‘Environmental Concerns' (damaging the environment via their carbon footprints).

Before he gets specific, Claypool previews his conclusion: ‘Until meaningful government safeguards are in place to protect the public from the harms of generative AI, we need a pause' ...

Using its existing authority, the Federal Trade Commission, in the author's words ‘... has already warned that generative AI tools are powerful enough to create synthetic content — plausible sounding news stories, authoritative-looking academic studies, hoax images, and deepfake videos — and that this synthetic content is becoming difficult to distinguish from authentic content.'

He adds that ‘... these tools are easy for just about anyone to use.' Big Tech is rushing way ahead of any legal framework for AI in the quest for big profits, while pushing for self-regulation instead of the constraints imposed by the rule of law.

There is no end to the predicted disasters, both from people inside the industry and its outside critics. Destruction of livelihoods; harmful health impacts from promotion of quack remedies; financial fraud; political and electoral fakeries; stripping of the information commons; subversion of the open internet; faking your facial image, voice, words, and behavior; tricking you and others with lies every day.”

Attorney Learns the Hard Way Not to Trust ChatGPT

One recent instance that highlights the need for radical prudence is a court case in which the plaintiff's attorney used ChatGPT to do his legal research. There was just one problem: none of the case law ChatGPT cited was real. Needless to say, fabricating case law is frowned upon, so things didn't go well.


When none of the defense attorneys or the judge could find the decisions quoted, the lawyer, Steven A. Schwartz of the firm Levidow, Levidow & Oberman, finally realized his mistake and threw himself on the mercy of the court. Schwartz, who has practiced law in New York for 30 years, claimed he was “unaware of the possibility that its content could be false,” and had no intention of deceiving the court or the defendant. Schwartz claimed he even asked ChatGPT to verify that the case law was real, and it said it was. The judge is reportedly considering sanctions.

Science Chatbot Spews Falsehoods

In a similar vein, in 2022, Facebook had to pull its science-focused chatbot Galactica after a mere three days, as it generated authoritative-sounding but wholly fabricated results, including pasting real authors' names onto research papers that don't exist. And, mind you, this didn't happen intermittently, but “in all cases,” according to Michael Black, director of the Max Planck Institute for Intelligent Systems, who tested the system. “I think it's dangerous,” Black tweeted. That's probably the understatement of the year. As noted by Black, chatbots like Galactica:

“... could usher in an era of deep scientific fakes. It offers authoritative-sounding science that isn't grounded in the scientific method. It produces pseudo-science based on statistical properties of science *writing.* Grammatical science writing is not the same as doing science. But it will be hard to distinguish.”

Facebook, for some reason, has had particularly “bad luck” with its AIs. Two earlier ones, BlenderBot and OPT-175B, were both pulled as well due to their high propensity for bias, racism and offensive language.

Chatbot Steered Patients in the Wrong Direction


The AI chatbot Tessa, launched by the National Eating Disorders Association, also had to be taken offline, as it was found to give “problematic weight-loss advice” to patients with eating disorders, rather than helping them build coping skills. The New York Times reported:

“In March, the organization said it would shut down a human-staffed helpline and let the bot stand on its own. But when Alexis Conason, a psychologist and eating disorder specialist, tested the chatbot, she found reason for concern.

Ms. Conason told it that she had gained weight ‘and really hate my body,' specifying that she had ‘an eating disorder,' in a chat she shared on social media.

Tessa still recommended the standard advice of noting ‘the number of calories' and adopting a ‘safe daily calorie deficit' — which, Ms. Conason said, is ‘problematic' advice for a person with an eating disorder.

‘Any focus on intentional weight loss is going to be exacerbating and encouraging to the eating disorder,' she said, adding ‘it's like telling an alcoholic that it's OK if you go out and have a few drinks.'”

Don't Take Your Problems to AI

Let's also not forget that at least one person has already committed suicide at the suggestion of a chatbot. Reportedly, the victim was extremely concerned about climate change and asked the chatbot if she (the AI) would save the planet if he killed himself. Apparently, she convinced him she would. She further manipulated him by playing with his emotions, falsely stating that his estranged wife and children were already dead, and that she and he would “live together, as one person, in paradise.” Mind you, this was a grown man, who you'd think would be able to reason his way through this clearly abhorrent and aberrant “advice,” yet he fell for the AI's cold-hearted reasoning. Just imagine how much greater an AI's influence will be over children and teens, especially if they're in an emotionally vulnerable place.

The company that owns the chatbot immediately set about putting in safeguards against suicide, but testers quickly got the AI to work around the problem, as you can see in the following screenshot. When it comes to AI chatbots, it's worth taking this Snapchat announcement to heart, and to warn and supervise your children's use of this technology:

“As with all AI-powered chatbots, My AI is prone to hallucination and can be tricked into saying just about anything. Please be aware of its many deficiencies and sorry in advance! ... Please do not share any secrets with My AI and do not rely on it for advice.”

AI Weapons Systems That Kill Without Human Oversight

The unregulated deployment of autonomous AI weapons systems is perhaps among the most alarming developments. As reported by The Conversation in December 2021:

“Autonomous weapon systems — commonly known as killer robots — may have killed human beings for the first time ever last year, according to a recent United Nations Security Council report on the Libyan civil war ...

The United Nations Convention on Certain Conventional Weapons debated the question of banning autonomous weapons at its once-every-five-years review meeting in Geneva Dec. 13-17, 2021, but didn't reach consensus on a ban ...

Autonomous weapon systems are robots with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are investing heavily in autonomous weapons research and development ...

Meanwhile, human rights and humanitarian organizations are racing to establish regulations and prohibitions on such weapons development.

Without such checks, foreign policy experts warn that disruptive autonomous weapons technologies will dangerously destabilize current nuclear strategies, both because they could radically change perceptions of strategic dominance, increasing the risk of preemptive attacks, and because they could be combined with chemical, biological, radiological and nuclear weapons ...”

Obvious Dangers of Autonomous Weapons Systems

The Conversation reviews several key dangers of autonomous weapons:

- The misidentification of targets
- The proliferation of these weapons outside of military control
- A new arms race resulting in autonomous chemical, biological, radiological and nuclear arms, and the risk of global annihilation
- The undermining of the laws of war that are supposed to serve as a stopgap against war crimes and atrocities against civilians


As noted by The Conversation, several studies have confirmed that even the best algorithms can result in cascading errors with lethal outcomes. For example, in one scenario, a hospital AI system identified asthma as a risk-reducer in pneumonia cases, when the opposite is, in fact, true.
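To see how such an inversion can arise, here is a minimal, hypothetical sketch in Python (all names and numbers below are illustrative assumptions, not data from the actual study). In the reported case, asthma patients were routinely triaged to intensive care, so they died less often in the historical records the model learned from:

```python
# Hypothetical sketch: how biased training data can teach a model that
# asthma "reduces" pneumonia risk. All numbers are illustrative assumptions.
import random

random.seed(0)
records = []
for _ in range(10_000):
    asthma = random.random() < 0.15
    # In the historical data, asthma patients were triaged straight to
    # intensive care, which greatly improved their outcomes.
    intensive_care = asthma or random.random() < 0.10
    risk = 0.12 + (0.08 if asthma else 0.0)  # asthma genuinely raises risk
    risk -= 0.15 if intensive_care else 0.0  # aggressive care lowers it even more
    records.append((asthma, random.random() < max(risk, 0.01)))

def death_rate(has_asthma):
    outcomes = [died for asthma, died in records if asthma == has_asthma]
    return sum(outcomes) / len(outcomes)

print(f"observed death rate with asthma:    {death_rate(True):.3f}")
print(f"observed death rate without asthma: {death_rate(False):.3f}")
# With asthma: ~0.05; without: ~0.11. A model trained on these records
# "correctly" learns the pattern in the data -- and draws a lethal conclusion.
```

The model faithfully reproduces a real pattern in its training data; the error lives in the confounder the data leaves out (the level of care received), which is exactly why such failures are hard to detect, let alone correct.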


Other errors may be nonlethal, yet have less than desirable repercussions. For example, in 2017, Amazon had to scrap its experimental AI recruitment engine once it was discovered that it had taught itself to down-rank female job candidates, even though it wasn't programmed for bias at the outset. These are the kinds of issues that can radically alter society in detrimental ways — and that cannot be foreseen or even forestalled.

“The problem is not just that when AI systems err, they err in bulk. It is that when they err, their makers often don't know why they did and, therefore, how to correct them,” The Conversation notes. “The black box problem of AI makes it almost impossible to imagine morally responsible development of autonomous weapons systems.”

AI Is a Direct Threat to Biosecurity

AI may also pose a significant threat to biosecurity. Did you know that AI was used to develop Moderna's original COVID-19 jab, and that it's now being used in the creation of COVID-19 boosters? One can only wonder whether the use of AI might have something to do with the harms these shots are causing.

Either way, MIT students recently demonstrated that large language model (LLM) chatbots can allow just about anyone to do what the Big Pharma bigwigs are doing. The average terrorist could use AI to design devastating bioweapons within the hour. As described in the abstract of the paper detailing this computer science experiment:

“Large language models (LLMs) such as those embedded in ‘chatbots' are accelerating and democratizing research by providing comprehensible information and expertise from many different fields. However, these models may also confer easy access to dual-use technologies capable of inflicting great harm.

To evaluate this risk, the ‘Safeguarding the Future' course at MIT tasked non-scientist students with investigating whether LLM chatbots could be prompted to assist non-experts in causing a pandemic.

In one hour, the chatbots suggested four potential pandemic pathogens, explained how they can be generated from synthetic DNA using reverse genetics, supplied the names of DNA synthesis companies unlikely to screen orders, identified detailed protocols and how to troubleshoot them, and recommended that anyone lacking the skills to perform reverse genetics engage a core facility or contract research organization.

Collectively, these results suggest that LLMs will make pandemic-class agents widely accessible as soon as they are credibly identified, even to people with little or no laboratory training.”
