
Neuralink Does Not Read Minds and Never Will

The headlines scream:

“AI Mind-Reading Has Arrived!”

“Neuralink Allows Man to Control Computer with his Thoughts!”

“Chinese Soldiers’ AI Implants Enhance Abilities and Reaction Times.”

Every neural implant, whether made by Elon Musk’s company or other research institutions, is capable of picking up electrical pulses for motor control.  These devices do not decode thoughts.

Thoughts about objects, memories, beliefs, and intentions are complex reverberating relationships between multiple simultaneous processes in multiple regions of the brain. Thought is not a “code” — a linear sequence of blips— located in a specific area of the brain.

In January 2024, the folks at Neuralink implanted a Fitbit-like device into the brain of the first human experimental subject. The device has 64 threads that reach deep into the motor cortex tissue, carrying some 1,024 electrodes that pick up the electrical discharges that occur when a person tries to move his body. The decision to move, the will to move, and the motivation for moving are more complex processes that occur before the firing of motor neurons.

The researchers who hype “mind reading” devices may be so narrowly trained in their mechanistic fields that they don’t realize the device is not reading minds and never will — or maybe they do understand this, but they want to be able to control people’s motor impulses with electrical current.

Like Galvani’s dead frogs.

In this essay, I describe three different neural implants that are being trialed on paralyzed people, even though safer communication devices are available that might function as well, if not better. The patients themselves seem to understand that the implanted devices are limited, but they hope that their sacrifice will one day enable great advances in technology for the benefit of others.

After I describe how the implants are working for these subjects, I will try to explore why our culture is so stuck on the idea that a machine could ever detect what we’re thinking. It may be, as Iain McGilchrist has noted in his 2009 book, The Master and His Emissary: The Divided Brain and the Making of the Western World, that the left hemispheres of our brains, which think people are machines, have taken over.

Patient 1: Ann

In 2023, Ann had a brain-computer interface (BCI) device implanted at the University of California San Francisco Weill Institute for Neurosciences. Ann’s arms and legs are paralyzed, and she is unable to speak. But she can make facial expressions. When she moves her mouth as if she were speaking, the implant can pick up pulses in her motor cortex going to her facial muscles.

The pulse patterns picked up by the neural implant are fed into a computer, a so-called “neural” network, that categorizes and identifies the pulses associated with specific facial movements for different phonemes. To train the AI, Ann had to repeat different sounds over and over again for weeks until the computer recognized the brain activity patterns associated with all the basic sounds of speech. The researchers claim the computer only had to learn 39 phonemes (vowel and consonant combinations) to be able to identify any word in English. She now has a 1,024-word vocabulary that she can use with this device.
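To make that training step concrete, here is a minimal sketch of the kind of pattern classification involved, assuming, hypothetically, that each attempted phoneme is summarized as a feature vector of electrode activity. The phoneme labels, feature dimensions, and data below are invented for illustration; this is not the UCSF team’s actual decoder.

```python
# Illustrative sketch only, not the real decoder. Assumes each attempted
# phoneme yields a feature vector summarizing electrode activity
# (e.g., per-channel firing rates). A simple classifier is trained on
# many labeled repetitions, then asked to label new activity.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
PHONEMES = ["AA", "B", "K", "S"]   # stand-ins for the 39 phonemes
N_CHANNELS = 16                    # hypothetical feature dimension

def fake_trial(phoneme_idx):
    """Simulate one repetition: a noisy activity pattern per phoneme."""
    base = np.zeros(N_CHANNELS)
    base[phoneme_idx * 4:(phoneme_idx + 1) * 4] = 1.0   # distinct pattern
    return base + rng.normal(0, 0.3, N_CHANNELS)        # trial-to-trial noise

# Weeks of repetitions, compressed into a toy training set.
X = np.array([fake_trial(i) for i in range(len(PHONEMES)) for _ in range(50)])
y = np.array([p for p in PHONEMES for _ in range(50)])

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(clf.predict([fake_trial(2)]))   # expected: ['K']
```

The point of the toy is simply that the computer learns a statistical association between activity patterns and labels supplied by the patient; nothing in it resembles reading a thought.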

An AI avatar onscreen that resembles Ann, speaking through a voice synthesizer, says the words that Ann mouths.

I wonder why Ann is not using the sophisticated AI software that has been developed for lip reading, since she can mouth words. With a lip-reading program and a camera pointed at her face, instead of an implant in her brain, she could probably easily exceed a 1,024-word vocabulary.

Patient 2: Bravo1

The second patient, a man in his 40s, is known as Bravo1. He cannot move his facial muscles like Ann can, so AI-assisted lip reading isn’t an option. In 2021, researchers at UC San Francisco implanted a device that detects pulses sent to his vocal cords. The system can detect up to 18 words per minute with 75-93% accuracy when an “auto-correct” function is applied. Because the various patterns of vocal cord activation are difficult to distinguish, even for AI pattern-recognition software, the system, together with predictive text, gives him about 50 words to work with.
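The “auto-correct” step can be pictured as snapping a noisily decoded word onto the nearest entry in that small working vocabulary. The sketch below is only a hypothetical illustration of the idea; the vocabulary, the misspellings, and the similarity cutoff are all invented, and the real system’s correction is more sophisticated.

```python
# Hypothetical illustration of vocabulary auto-correct: a noisily decoded
# word is snapped to the closest entry in a small working vocabulary.
import difflib

VOCAB = ["water", "hello", "family", "nurse", "thirsty", "yes", "no"]

def autocorrect(decoded: str, vocab=VOCAB) -> str:
    """Return the closest vocabulary word, or the raw guess if nothing is close."""
    matches = difflib.get_close_matches(decoded, vocab, n=1, cutoff=0.5)
    return matches[0] if matches else decoded

print(autocorrect("watre"))   # -> "water"
print(autocorrect("famly"))   # -> "family"
```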

It must be stressed that the AI used in Ann’s and Bravo1’s systems cannot relate any electrical patterns to speech patterns without extensive training and cooperation from the patient.

These implants will never be out-of-the-box devices that can decrypt the pulses sent to vocal cords or facial muscles to determine which words are intended. The person whose brain activity is being measured has to train the AI.

For example, Bravo1 had to try to say the word “water” over and over, while the AI recorded that pattern and then made a generalized model of the pattern, which is slightly different every time. He had to do this with every one of the 50 words the program can now identify.
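That training amounts to averaging many slightly different activity patterns into one generalized template per word, and then labeling new activity by its nearest template. Here is a minimal sketch of the idea, with simulated feature vectors standing in for the real recordings, which are unknown to me; the word list and noise levels are invented.

```python
# Minimal sketch of "generalizing" over repetitions: average many slightly
# different activity patterns for the same word into one template, then
# label new activity by its nearest template. All data here is simulated.
import numpy as np

rng = np.random.default_rng(1)
WORDS = ["water", "hungry", "yes"]

def simulate_attempt(word_idx, dim=12):
    base = np.zeros(dim)
    base[word_idx * 4:(word_idx + 1) * 4] = 1.0
    return base + rng.normal(0, 0.25, dim)   # never exactly the same twice

# Build one averaged template per word from repeated attempts.
templates = {w: np.mean([simulate_attempt(i) for _ in range(100)], axis=0)
             for i, w in enumerate(WORDS)}

def classify(pattern):
    return min(templates, key=lambda w: np.linalg.norm(pattern - templates[w]))

print(classify(simulate_attempt(0)))   # expected: "water"
```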

I note that this man can blink. It seems to me he could learn to use Morse Code. Again, a camera could be pointed at his face, and with AI helping to predict the next letter and next word, he would be able to communicate in Morse Code much more efficiently and much more safely, without undergoing brain surgery and without having to tolerate a device that at some point may cause dangerous inflammation.
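Decoding Morse from blinks is computationally trivial compared with decoding neural pulses. The sketch below shows the core of it, assuming a hypothetical upstream step has already measured blink durations from camera frames; the duration threshold and the example message are invented.

```python
# Sketch of blink-to-Morse decoding: short blinks become dots, long blinks
# become dashes, and each group of blinks maps to a letter by convention.
MORSE = {".-": "A", "-...": "B", "-.-.": "C", ".": "E",
         "....": "H", ".-..": "L", ".--.": "P"}

def blinks_to_text(letters_of_blinks, dash_threshold=0.4):
    """Decode letters, each given as a list of blink durations in seconds."""
    out = []
    for blinks in letters_of_blinks:
        code = "".join("-" if d >= dash_threshold else "." for d in blinks)
        out.append(MORSE.get(code, "?"))
    return "".join(out)

# "HELP": H=...., E=., L=.-.., P=.--.
print(blinks_to_text([[0.1, 0.1, 0.1, 0.1],
                      [0.1],
                      [0.1, 0.5, 0.1, 0.1],
                      [0.1, 0.5, 0.5, 0.1]]))   # -> "HELP"
```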

Patient 3: Nolan

Neuralink’s first human test subject is 29-year-old Nolan, who received an implant that, unlike those implanted in Ann and Bravo1, cannot be fully removed. The threads of the motor-signal detectors are so fine that they work their way into the brain tissue.

Unlike Ann and Bravo1, Nolan can talk. He can also move his head and shoulders. He had the option of using a voice-activated computer. He also could have gotten a device that allowed him to move his head like a joystick to control a cursor.

Stephen Hawking typed on a keyboard by twitching his cheek muscles; he had no implant.

As with the other patients, Nolan’s implant detects neural pulses that control movement.  Nolan has to try to move his hand, as he would to control a computer mouse, and those pulses are picked up by the implant and wirelessly sent to a computer that categorizes them, and, after training, moves the mouse accordingly.
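A common textbook way to picture that categorization step is a linear decoder: a map, fit on calibration trials where the intended movement is known, from per-channel activity to a two-dimensional cursor velocity. The sketch below illustrates the idea with simulated data; it is not Neuralink’s actual algorithm, and every number in it is invented.

```python
# Textbook-style linear decoder sketch: fit a least-squares map from
# per-channel firing rates to intended 2-D cursor velocity, then apply
# it to new activity. All data is simulated for illustration.
import numpy as np

rng = np.random.default_rng(2)
N_CHANNELS, N_TRIALS = 32, 500

# Calibration: known intended velocities plus the (simulated) neural
# activity they evoke through an unknown mixing matrix.
true_mixing = rng.normal(size=(N_CHANNELS, 2))
intended_vel = rng.normal(size=(N_TRIALS, 2))
firing_rates = intended_vel @ true_mixing.T + rng.normal(0, 0.5, (N_TRIALS, N_CHANNELS))

# Fit the decoder: least-squares map from firing rates back to velocity.
decoder, *_ = np.linalg.lstsq(firing_rates, intended_vel, rcond=None)

def decode_velocity(rates):
    """Estimate a 2-D cursor velocity from one vector of firing rates."""
    return rates @ decoder

v_true = np.array([1.0, -0.5])
rates = v_true @ true_mixing.T
print(decode_velocity(rates))   # close to [1.0, -0.5]
```

Notice that the decoder only recovers the intended hand movement it was calibrated on; it says nothing about why Nolan wants to move the cursor.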

The Neuralink engineer in the video, whose name is Bliss, jokes that Nolan has telekinetic powers. Most of the comments below the video repeat such claims.

I don’t know if Nolan is able to move the mouse without consciously trying.  Like walking, moving a mouse is one of those skills that you want to be able to do unconsciously.

In the next phase of the research, the Neuralink team wants to implant a second device to stimulate the muscles, the two devices acting as a bridge over the damaged area of Nolan’s spinal cord.  Such technology, coupled with an exoskeleton perhaps, might really improve Nolan’s quality of life. I hope he walks someday as a result of this experiment. I don’t hope that his thoughts will ever be read by any computer.

When Bliss asked Nolan what he has been able to do with his new powers, he replied that he has been able to play video games until 6 in the morning.

I think Nolan could use one of Tesla’s voice-activated robots as a personal assistant. Maybe Musk can be persuaded to throw that into the deal for Nolan.

Is this just the beginning for AI mind-reading technology?  Or do we already see that’s not where this is going because none of these implants pick up thoughts per se? They pick up motor impulses.

Brain Surgery Could Help You Click and Swipe Faster

Elon Musk says that, in the near future, able-bodied people are going to want a Neuralink implant, so that they can interact directly with a computer, the whole Internet, and even AI.

Hold on a minute. What is he actually saying? Are Neuralinked people going to merge with AI and comprehend all the data on Google’s servers with their mind’s eye, as implied by this illustration?

Actually, Musk speculates that Neuralinked people will be able to click and swipe faster.

It’s not as if AI is going to be injected into the neuronal DNA. Neuralinked people are still going to be using external computers and screens.

If you get a Neuralink, you would just be replacing your hand (an interface tool that has been perfected by billions of years of evolution) with a Bluetooth connection to a Fitbit-like device that may or may not work all that well.

Who would want that? Professional video game players?

The Left Brain as Symbol Manipulator vs. the Right Brain as Thinker

In his work on how the left and right brain hemispheres function and interact, Iain McGilchrist does not try to describe the vastly complicated chemistry underlying brain wave activity. Indeed, researchers like McGilchrist depend mainly on observing the behavior of brain-damaged people to understand how the brain works.  If there is damage to one of the hemispheres, predictable neurological deficits will result.

But overall what becomes clear reading McGilchrist is the extent to which thinking and doing, believing and remembering are vastly complex processes distributed throughout the brain’s various regions, which depend upon each other to create meaning.

I lead a monthly webinar called “We Are Not Machines,” critiquing those who think Artificial Intelligence is actually intelligent, and I try to show how biological processes are much more complex than computer processes. I might tell my students to listen to McGilchrist and just be done with the webinar. He makes it clear that it is delusional to think that thoughts can be decoded by putting a few thousand probes in someone’s brain.

According to McGilchrist, the left hemisphere is mechanistic.  It is involved in tool use, and it treats objects in the world as inanimate and decontextualized.  It is involved in producing speech like one would manipulate a tool, using predefined procedures with predictable results.

The right hemisphere provides contextualization for words, that is, the meaning for the words.

Different Kinds of Signs: Symbols, Icons and Indexes

Per my field, which is Biosemiotics, I’d say that the right hemisphere seems to be more involved with what we call grounded signs, icons and indexes. Thinking and acting intelligently is something all living beings, including microbes and individual cells, can do. And they seem to be able to do this using grounded signs.

The icon, as a sign, associates something with some other thing by virtue of a physical similarity.  For example, if I want to represent a cat I might imitate one, saying, “meow, meow,” and you would get my meaning because my meow sounds similar to the sound a cat makes. Within a body’s cells, an icon sign can be a molecule that fits into a receptor because of its similar shape.  Physical similarity makes for an association. This is how things can become signs of other things (or outcomes), due to contextualized relationships.

An index associates something to some other thing (or outcome) by virtue of a physical vector. An infant can communicate her desires by pointing with her index finger. You see that the infant is directed toward the object. For a biological example, we can consider how slime mold starts pulsing rapidly in a specific direction, which causes it to move toward a detected food gradient.

These kinds of signs get their meaning from the context and don’t have to be learned.

In contrast, another kind of sign, called a symbol or a code, has to be learned because it’s not grounded in physical relationships. For example, the word “cat” arbitrarily refers to the animal that says meow.

Thus adding to McGilchrist’s argument, I would say that the so-called “language” of the left hemisphere does not use icons or indexes, whose meanings are grounded in context. The left hemisphere seems to use symbol manipulation exclusively.

As noted, a symbol, as a type of sign, represents some thing by convention, that is, a mark, sound, or pattern is arbitrarily associated with some other thing. For example, in Morse Code, dashes and dots arbitrarily signify letters and numbers.

Computer designers do not have any concept of icons or indexes or any kind of grounded signs. That’s why computers have to be programmed, directly by a programmer or indirectly by trial and error training.

Computers do not use icon and index signs.  Like left hemispheres, computers are strictly involved in symbol manipulations.  1s and 0s are symbols forming patterns that represent other kinds of symbols, words, and numbers.
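A small example of this purely conventional relationship: the bit pattern below stands for the letter “A” only because a standard (ASCII/Unicode) says so; nothing about the pattern resembles an A.

```python
# The bits stand for "A" by convention alone (ASCII/Unicode),
# the way dots and dashes stand for letters in Morse Code.
print(format(ord("A"), "08b"))   # -> 01000001
print(chr(0b01000001))           # -> A
```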

To the extent that AI can imitate human intelligence, it seems capable only of imitating the left hemisphere of the brain, the part that doesn’t really do much of the thinking.

The Left Hemisphere Can Hallucinate

Although no signs are contextualized in a computer, as with icons and indexes in living organisms, computers can detect statistical similarities in patterns of 1s and 0s.  That is how a computer seems to generalize based on similarities for, say, spell check.  Computers can also detect the frequencies of different patterns appearing together, and that’s how they are able to predict that the word “chicken” will more likely follow “barbecued” than “cat.”  But this kind of faux contextualization must be based on massive data that provides the guiding probabilities.  This system works like a rigged roulette wheel.
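A toy version of that frequency-based, rigged-roulette-wheel prediction can be written in a few lines: count which words follow which in a corpus, then emit the most frequent follower. The tiny corpus below is invented for illustration; real language models use vastly larger data and more elaborate statistics, but the principle of guiding probabilities is the same.

```python
# Toy next-word predictor: tally which word follows which in a corpus,
# then predict the most frequent follower. Corpus is invented.
from collections import Counter, defaultdict

corpus = ("we ate barbecued chicken . she ate barbecued chicken . "
          "he fed the cat . the cat slept .").split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def predict_next(word):
    """Most frequent follower of `word` in the corpus, if any."""
    return follows[word].most_common(1)[0][0] if follows[word] else None

print(predict_next("barbecued"))   # -> "chicken"
print(predict_next("the"))         # -> "cat"
```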

A Large Language Model (LLM) AI, such as ChatGPT, Gemini, or Bard, can assert that A and B are associated with each other based on an incorrect identification of similarities or frequent pairings. This must be the source of what is known as LLMs’ tendency to “hallucinate.”

Brain-damaged patients whose left hemispheres dominate are given to hallucinations as well.

Do We Want the Left Hemisphere to Be in Charge?

McGilchrist has noted that computer-generated responses mimic left-brain speech production.

He has also argued that, more and more, our society seems to be run by people whose left hemispheres are dominant rather than by those whose right hemispheres are more influential in their thinking processes.

The left brain is bureaucratic and mechanistic. It gets stuck in ruts and depends upon the right brain to help it change course. People with injuries to the right hemisphere, depending solely on the left, will stick to a path even if it is blatantly incorrect.

The positivistic left hemisphere acts as if it already has all the correct answers to solve any problem. Left-hemisphere dominance leads people to put faith in institutional leaders to implement one-size-fits-all programs that will supposedly work. McGilchrist says that when things don’t work as expected, the left hemisphere doesn’t consider the possibility that its solution might simply be wrong; instead, it assumes it needs to do even more of the same. Double down.

Sound familiar?

VN Alexander PhD is a philosopher of science and novelist, who has just completed a new satirical novel, C0VlD-1984, The Musical.
