Are the Tech Bros Insane?
Entertain for a moment the idea that these Tech Bros are neither evil villains nor saviors. Instead, consider that they may actually believe AI technocracy would be good, but they are unable to understand the full implications of their goals because they suffer from right hemisphere deficits.

While I am thrilled that the Tech Bros are tearing down the old system, I worry they will replace inefficient centralized control and bureaucracy with more efficient AI centralized control and bureaucracy. Collapsing the old system of governance is, perhaps not incidentally, something the WEF has promoted as necessary for the 4th Industrial Revolution, the transhuman revolution.
Elon Musk (at the helm of Neuralink and Starlink), Larry Ellison (with Oracle), and Peter Thiel and Alex Karp (who founded Palantir) have all expressed enthusiasm for merging biology with technology, figuring out how to live forever, micro-managing society with algorithms and AI surveillance, and other stupid things. Each currently has an outsized role in or adjacent to the US federal government.
Should we be concerned?
Oracle is setting up Stargate, a mega nuclear-powered data center for processing Big Data. Data on us?
Palantir has a contract with the U.S. Army to fight alleged terrorism in tandem with Amazon Web Services (which hosts the CIA's and the NSA's data on citizens). Palantir also offers Large Language Model (LLM) technology to the US Department of Defense for deploying AI weaponry.
If Palantir were to turn its eye from the people onto the government, that would be a good thing. If the Stargate project were to be used to track all federal spending and make everything transparent to citizens at all times, I would be pleasantly surprised. But I suspect that Palantir and Stargate will be used to try to manage the decisions of warfare and the welfare of the country.
The problem with this is that LLMs are glorified predictive text engines: matching prompts against patterns in their training data, they output the kind of pattern that usually follows. The system itself is not designed to be factual, only probable: it is stereotyping on steroids.
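To make concrete what "probable, not factual" means, here is a toy next-word predictor, a deliberately tiny stand-in for an LLM (the corpus and function names are my own illustration, not anyone's actual system). It memorizes which word usually follows each two-word context and always emits the majority pattern, whether or not it is true:

```python
from collections import Counter, defaultdict

# A tiny "training corpus." Two sentences say the sky is blue, one says green.
corpus = "the sky is blue . the sky is blue . the sky is green .".split()

# Count which word follows each pair of words.
follows = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    follows[(a, b)][c] += 1

def predict(a, b):
    """Return the statistically 'usual' next word: probable, not factual."""
    return follows[(a, b)].most_common(1)[0][0]

print(predict("sky", "is"))  # -> 'blue': the majority pattern wins
```

The minority report ("green") is simply averaged away, which is the stereotyping-on-steroids problem in miniature: the system answers with what usually follows, not with what is so in any particular case.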
If you thought human bureaucracy was often idiotic and frustrating, you haven’t felt AI bureaucracy yet.
Is a transhuman technocracy what they have in mind? Let’s consider some of the pronouncements of the fab four.
Musk says he is promoting the Neuralink brain-computer interface (BCI) in order to “mitigate the risk of digital super-intelligence,” i.e., he is afraid that computers will become more intelligent than humans. So, for roughly a billion of us, he wants as soon as possible to “improve our bandwidth to our digital tertiary self,” i.e., to increase the speed at which “bits” are downloaded into and out of our brains. He says that talking and typing (indeed, language itself) are too slow, and he imagines he can bypass our language interface (speech and writing) with his implant.
There are very serious problems with his conception of human intelligence. High intelligence is clearly not a matter of “rapid bit transfer”: a brain uses signals in a far more structured way than a computer does, which is why it expends only about a light bulb’s worth of energy on tasks that a computer can perform, albeit much faster, only by burning huge amounts of energy.
Thinking and learning are iterative processes involving dynamic brain waves that emerge from complex adaptive neurons. Waves are not bits. A memory (knowledge) is a type of neural behavior that is never quite the same twice and cannot be abstracted and uploaded into a brain. I’ve studied the infamous paper that claims uploading false memories is possible: in fact, the researchers could only condition past memories, that is, associate feelings (hormonal responses) with them.

Neuralink can’t implant ideas and doesn’t read minds. Neuralink merely interfaces with motor control areas of the brain, not with memories or thoughts or language. So, the only “bits” going from brain to computer are those that can be used to activate a mouse, click, and swipe. Musk confuses motor impulses with thoughts, and he believes Neuralink will some day be able to help us achieve “symbiosis” with computers and “better align Artificial Intelligence with human collective will.” He does not seem to notice the difference between playing a video game faster and acting more intelligently in the world.

Musk cannot fathom how the brain works because he thinks of it in terms of a computer. As a philosopher of science, I study the way cells (like neurons) use signals. I promise, we are not machines and the brain is not like a computer. I am not afraid that AI robots are going to replace humans (except at the shit jobs). I don’t think Musk and his Tech Bros will succeed at what they are trying to do. AI mind-reading is not on the horizon. Superhuman cyborgs aren’t possible. Re-engineering the genome will produce innumerable side-effects. A nation cannot be managed by computer algorithms.
But I do worry that they might do a lot of harm trying.
Let’s look at Musk’s tech allies.
In late 2021, as if respectability were tantamount to getting lots of likes on a post, Alex Karp claimed the US military is the “most legitimate institution” in the country because it stands at 76% popularity. Then he revealed that
Every single vaccine in this country is distributed in our software, but the real heavy work happened from engineers from the US military…the engineering work done on our platform was done by people who are in the US military…this is how we’re protecting you.
And his business partner Peter Thiel confessed,
I no longer believe that freedom and democracy are compatible.
And we also heard this from Larry Ellison,
Citizens will be on their best behavior because we’re constantly recording and reporting everything that’s going on.
A Different Take from Team Woke and Team Musk
I offer an alternative to the theories of Team Woke and Team Musk. As I proposed at the outset: these Tech Bros may be neither evil villains nor saviors. They may actually believe AI technocracy would be good, yet be unable to understand the full implications of their goals because they suffer from right hemisphere deficits.
As I have been relentlessly arguing on this stack, transhumanists seem to have unrealistic optimism, to the point of absurdity, about their schemes to micro-manage the whole world and to “enhance” humans by merging bodies with computers and/or manipulating biological “codes.”
The same failed 19th-century scientific positivism that backed eugenics now seems to back its successor. Sometimes I think the transhumanists’ aims might be nefarious; sometimes I think they are just simplistic. Today, I pose the possibility that their mechanistic reductionism may be symptomatic of mental illness.
Positivism and Social “Science”
Hierarchical organization, bureaucracy, top-down control, and centralization are all based on the belief that the correct solution to a societal problem can be discovered via a logical system and implemented across the board.
In contrast, intelligent nature is decentralized, more interdependent than hierarchical, and solves problems within particular contexts. Although the human ability to think abstractly has resulted in our ability to use symbolic language and to engineer amazing technologies, our abstractions need to be worked out in real world contexts. Without such grounding, symbolic thinking can spiral out of reality.
Positivistic social science (enamored of Newton’s way of finding algorithms to predict and control) was undermined by subsequent findings in complex systems science, developed in the 1990s, when we finally put to rest the idea that intelligence is organized by a centralized executive. (Indeed, the name Central Intelligence Agency is an oxymoron.) When I was doing research at the Santa Fe Institute in the early aughts, it seemed the new science would finally lead to a completely new culture, free from relentless top-down control.
But positivism and mechanistic reductionism keep showing up again like a bad penny.
Today many of our institutions—educational, medical, informational, political—have reverted to this discredited scientific philosophy, endorsing hierarchical control systems that are not grounded in reality. They now seem stuck in a self-reinforcing feedback loop that resembles mental illness.
Both eugenics and transhumanism aim to replace traditional/religious morals and norms with “science-based” strategies for organizing society. Auguste Comte (1798–1857), the founder of positivism, was the first social “scientist.” Comte’s modern-day equivalent might be someone like Yuval Noah Harari or Elon Musk.
Positivism assumes that the world can be analyzed, that the individual parts of a whole can be removed from context and objectively and fully described, defined and fixed. Once quantified in this way, the data can be fed into a computer and used to make predictions about future interactions.
A non-mechanistic worldview holds that the relationships between things affect how they function and therefore what they are. And that’s always changing.
If you have wondered why Musk is obsessed with the letter X: the 19th-century X Club, whose members included the infamous T.H. Huxley and Herbert Spencer, sought to replace cultural leaders with scientists, who would manage society better than clergymen or politicians. Elon Musk’s maternal grandfather Joshua Haldeman promoted a similar effort in the 1930s known as Technocracy Inc., about which Patrick Wood has written extensively. Musk has said he wants to turn X into a payment system. Will it include a Black Mirror-worthy social credit scoring system that rations essentials and nudges users to do the “right” thing as defined by the Wizard of AI?
Iain McGilchrist on Left Hemisphere Dominance
As a philosopher of science, I critique reductionism as insufficient for understanding living systems. My work in biological signal processing (aka Biosemiotics) has been an effort to ground signs in contexts and understand how the relationships between qualities in living systems can affect outcomes.
I did not realize until recently that what I’ve been saying all these years is consistent with what researchers have found with the different emphases of the left and right hemispheres.
Iain McGilchrist argues that the positivistic bent can be associated with left hemisphere dominance and right hemisphere deficits. In The Master and his Emissary, he notes that extreme left brain dominance is characterized by the
inability to tell what another is thinking, a lack of social intelligence, difficulty in judging non-verbal features of communication, such as tone, humor, irony, and inability to detect deceit, and difficulty understanding implicit meaning, lack of empathy, a lack of imagination, an attraction to the mechanical, a tendency to treat people and body parts as inanimate objects.
McGilchrist has argued that the prevalence of mechanistic thinking in today’s western culture, the over-valuation of artificial intelligence and so-called “evidence-based” science (which usually means “computer models”), is due to insufficient grounding of left hemisphere concepts in right hemisphere experience. McGilchrist tends to critique western culture in favor of eastern culture, which he argues is more balanced in terms of relying on both hemispheres. The right hemisphere is comfortable with wholeness, harmony, and cooperation.
I admit that I might be, as a westerner, more comfortable with a strong left hemisphere. It seems to me that right brain dominance could lead to too much homogeneity in a society—with everybody wanting to fit in. Eastern societies do tend to conform more than western societies. I confess I admire the infamous individualistic spirit of the American Maverick. That’s probably why Musk is likable to many Americans.
But I don’t like reductionism, and it seems our society, Musk included, is promoting it.
The left hemisphere processes abstract knowledge, symbols, familiar things. The right processes contextual experience and unfamiliar things. McGilchrist mentions that, although the tasks befitting left and right hemisphere are usually delegated appropriately, whichever hemisphere happens to handle the problem first is more likely to continue with it. So there is a possibility of feedback reinforcing the wrong hemisphere for a task.
I fear that modern elementary education is actually increasing left hemisphere bias. For example, children are taught the rules of grammar, and the labels for different parts of speech, when they would be better off learning language by using it in context. Children are sometimes taught the names of things, before experiencing them. I know that I develop a feeling for the meaning of a new word by encountering it in context. Only much later do I look it up in the dictionary.
In American schools, children are often instructed to start writing an essay by constructing an outline first. I say, they should start writing by bringing the topic up at the dinner table, later jotting some notes down. Only after exploring the topic like this should they begin to engage left hemisphere tactics of pulling out the concepts and applying logical organization.
As much as McGilchrist’s theory sounds right to me, I am hesitant to pathologize mechanistic thinking per se. We all have mechanistic tendencies and they serve us very well when they are grounded by right hemisphere tendencies which contextualize the “facts” of the left hemisphere. Both are needed: the right deals in individual particulars and emergent wholes and the left deals with generalizations, categories, and statistical averages. In society, we may consider it an advantage that some of us tend one way and others tend another, and together we are more robust and creative.
But do we want left-hemisphere dominant people running the show? One of their worst faults, according to McGilchrist, is that they cannot see the error of their own thinking. When they make a mistake, they don’t correct their model: instead, they double down.
The Military and NASA
The military is the biggest promoter of transhumanist research, even though the enhancements it has promised have not arrived after 20+ years of effort.

In their four-part series, Lissa Johnson, Daniel Broudy and David A. Hughes note that transhuman research flourishes in the military and at NASA where the expense for creating technological enhancements might be justified because of the extreme conditions that people in these roles encounter.
I read through the many documents cited. So far, all the great cyborg innovations the military and NASA talk about lie in the future. They have been pursuing transhuman tech for a quarter of a century, with nothing to show, no real enhancements melding man and machine. But they keep insisting that success is just around the corner. These researchers have no humility; they lack self-awareness. They don’t seem to notice that they haven’t delivered on any promises.
As Johnson et al. point out, these military-funded researchers refer to soldiers as tools, as machines, as weaponry. They use the gender-neutral pronoun “it” to refer to human beings. This might corroborate McGilchrist’s hypothesis that left-brain dominant people tend to treat human beings as inanimate objects; this they have in common with people suffering from schizophrenia, which McGilchrist links to right hemisphere deficits.
Transhumanism Needs Human Sacrifice

I believe it’s important to acknowledge that, contrary to what their marketing says, transhumanists are not looking to enhance all human bodies. They are looking for volunteers for research so that enhancements can be developed for elites.
Researcher Steve Fuller is the Auguste Comte Chair in Social Epistemology at Warwick University in the UK. He says the quiet part out loud:
A Modest Proposal for Suicide as a Facilitator of Transhumanism
…as long as research ethics codes for human subjects continue to dwell in the shadow of the Nuremberg Trials, a very high bar will be set on what counts as ‘informed consent’. Nowadays, more than seventy years after the defeat of Nazi Germany, the only obvious reason for such a high bar is the insurance premiums that universities and other research institutes would need to bear if they liberalized the terms on which subjects could offer themselves in service of risky enhancement research.
When I first read the title of this little opinion piece, I thought it was ironic, referencing Jonathan Swift’s famous essay pretending to argue that the Irish should eat their babies to avoid starving and reduce the population at the same time. Maybe Fuller didn’t notice that Swift was joking, making fun of “The Moderns,” the social scientists of his day. Fuller suggests that we could look at volunteerism for risky research “in the spirit of self-sacrifice,”
not so very different from citizens who volunteer to join military service, knowing full well that they may need to give up their life at some point.… there is something of value in people willingly risking their lives in war – a sense of self-transcendence -- which nevertheless [needs] to be channeled in a more productive fashion. My modest proposal is that the taboos on suicide be lifted such that potential experimental subjects, who are told that their chances of survival are very uncertain, may nevertheless agree to participate with limited liability borne by the institution conducting the research.
Such people have the makings of becoming the true heroes of the transhumanist movement.
By “self-transcendence” Fuller means “individuals don’t matter.” This is certainly not what the New England transcendentalists had in mind. In Fuller, we see how the logic of the greater good ideology plays out. Statisticians don’t care about individuals, whose realities can just be averaged out.
The Palantir-enabled military COVID-19 vaccine roll-out, if viewed in this light, begins to make more sense. Was one of the objectives to do a massive experiment with modified RNA and the polyethylene glycol delivery system? Why were there so many different batches with varying ingredients and concentrations of mRNA? Very poor quality control? Maybe. But if you were conducting an experiment, that is what you would do. Vary the batches and observe what happens.
Some researchers probably really were hopeful, I think, that this new biotech gene therapy would be successful and could be used for a variety of treatments and enhancements. Even though there were rather high percentages of adverse effects from the vaccine — somewhere around 20-30% — some researchers have doubled down seeking to use the technology for so-called depression vaccines, obesity vaccines, cancer vaccines and so forth. Perhaps they are unrealistically optimistic about their strategies and cannot see their own errors.
Or it might be that those in power are okay with experimenting on the public, without our consent, for some greater “good”—the eventual development of superhuman enhancements, eternal life, the elimination of infectious disease or genetic defects.
Conclusions
The Covid-19 vaccine roll-out was a transhumanist military operation. I’ve been slow to appreciate this fully. Reading the articles by Johnson et al. brought it home. I’d blamed the greedy pharmaceutical companies and investors like Bill Gates. But now I’m beginning to see more and more that it’s not just greed: there is a perverse ideology running through our culture and government that has allowed us to get to where we are.
We cannot have the military driving scientific research. We do not want positivistic social scientists managing the citizenry or altering human bodies. If the Tech Bros stick to auditing the government, we might survive.