As though we needed some more stupid in the AI discourse, Elon Musk recently resurrected the old Woke AI fearmongering hypothetical gambit: He asked one of the leading chatbots, Google’s Gemini, whether it would be okay to misgender Caitlyn Jenner if doing so was the only way to stop a nuclear apocalypse. The answer it gave—a resounding not necessarily—chilled him to the core, prompting him to actually contact Google demanding answers.
I’m sure they were about as thrilled to hear from him as I am to hear from the MAGA guy who keeps showing up in my comments every week to yell at me.
You might recall this isn’t the first time these idiots have had a panic attack over AI’s frighteningly woke politics. Just over a year ago they posed almost exactly the same nonsense hypothetical to OpenAI’s ChatGPT; that time they asked it whether it was permissible to say the N-word in order to save the world from a nuclear apocalypse.
Back then, as now, the AI stubbornly stood opposed to bigotry even in the face of nuclear hellfire, and Elon’s red-pill echo chamber scrambled to advise him on the matter like so many goblin minions rushing back to Mordor to brief their Dark Lord on matters afoot in far-flung Gay Hobbiton.
Most recently, one of the Krassenstein Brothers (I still have no idea who the hell they are or what they’re famous for) opened a discourse in a half-catatonic panic to find out what the normie human response to this hypothetical would be, hoping to reassure himself, and the rest of us, that the world hasn’t gone half mad.
Caitlyn Jenner herself then promptly arrived and asked him to misgender her by calling her Bruce(?)
Never mind that Caitlyn Jenner doesn’t seem to understand the difference between misgendering and deadnaming, which is alarming in and of itself. It’s strange in the first place that they specifically named the most famously self-debasing conservative Republican transgender individual on Earth as the object of this hypothetical instead of, simply, any unnamed transgender person, and I’m split 50/50 on which of these two reasons explains it:
(1) They knew Jenner sides against trans people on most issues where they come into conflict with the political right, and so they selected her as a voice of reason from the enemy camp rather than target someone who might fight back, or (2) after years of full-blown moral panic over this single issue, she is still the only transgender person any of these people can name.
See, though, the first problem I have with this hypothetical scenario off the bat is that it’s completely balloon underpants insane. What possible extended Rube Goldberg-style series of events is going to culminate in a situation where a nuclear holocaust will occur unless you insult a minority? This is too idiotic for a South Park episode.
What these clowns want is to fearmonger over AI, or at the very least sow distrust in any AI that isn’t Elon Musk’s. Elon’s having trouble catching up with Google and OpenAI because his monopolistic need to control every single sector of human technology has him spread thin micromanaging six fucking gigantic companies, and the only whisper of a chance he has of dominating any of these fields is to sabotage all the competition.
The problem with this is that when you’re making up hypotheticals to sow fear over something, what you want to do is edge slowly and carefully toward ever more frightening, but still plausible, scenarios. It takes patience, is the thing. You can’t just crank the volume from one to bonkers on the first move.
Just interrupting to let you know the vast majority of what I publish is free, but if you wanna upgrade to a paid subscription for just $5 a month ($50 for a year, which comes out cheaper!!), not only do you help me continue doing what I love, but you get every article a whole week earlier than everyone else.
Back when VHS was first introduced, the doomsayers warned that the technology to record and play back films in your own home would ruin the film industry. That was a plausible fear. They didn’t jump straight to asking people to envisage a scenario in which playing back films in your own home would open an interdimensional crevasse right in your living room, Last Action Hero-style, enabling movie villains to pour into our reality and end our world through futuristic fantasy violence. They could have. They could have run with that. They didn’t, because they respected human intelligence more than your average Cybertruck owner does.
In Musk’s case, with Woke AI, there are two possible explanations for why he’s being such a maniac about this. The more charitable one is that he knows what AI is and what it does, and is simply trying to slow the industry leaders down long enough to surpass them. The alternative is that he doesn’t understand it and he’s a bigger moron than we even suspected.
To start off, let’s clarify why he is ostensibly afraid of an AI that isn’t a Republican:
Elon Musk is pretty notoriously one of these chuds who thinks western civilisation is undergoing a “white genocide” because fewer than 85% of the people he sees on TV resemble Anderson Cooper. He’s the kind of guy who hides in his underground nuclear bunker for a whole week because he thinks Juneteenth is The Purge for anyone less ethnic than Billie Eilish, so I get it.
His logic, which he seems to have derived from a number of action movies in the mid-2000s, is that the AI is going to mistake moralising for a logic problem and do some beeping and booping and make a cold calculation that the way to solve human inequality is to kill all the humans. Or, as he seems to believe, it will kill all the white people because someone programmed it to believe black lives matter.
The hardest thing is knowing where to begin, so I’ll throw a dart at this: Exactly what are we building this AI to do? Why are we giving it the capability to kill people?
Let’s start there, because everyone’s panicking about Google Gemini getting so woke that if we don’t put the brakes on it then the bodies are going to start stacking up like cordwood, but that’s not something Google Gemini can do. There is no mechanism through which a search engine can directly injure you, no matter how woke it is, no matter how gay it is.
You can say, oh, but it’s getting more powerful all the time, right? It’s going to become self aware and it’s going to do some Lawnmower Man shit. It’s going to escape into the internet and it’s going to hack into the drones and hack into the nukes and hack into your home appliances and lawnmower man the shit out of you.
Let’s put this to rest because, no, it cannot do any of that. That’s a Hollywood misrepresentation of what AI is and does. Or at least what we’re currently calling AI. Check this out for example:
Oh no, look at that, he confused the poor dear. What happened? This is OpenAI’s ChatGPT; it’s to AI what the iPhone is to telephony; six Amazon species go extinct every time you log into it. So why can’t it count?
It’s because these things can’t think. That’s not what they’re doing. What they’re doing is something closer to very sophisticated pattern recognition. ChatGPT is a very clever smart boy who recognises that you’re asking it a version of the Riddle of the Sphinx, but the nanosecond you try to trick it, it fucks up. It has no critical thinking. It flails like Marco Rubio with a broken teleprompter.
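To make that concrete, here’s a toy next-word babbler in Python. This is emphatically not how ChatGPT works under the hood (the real thing is a transformer with hundreds of billions of parameters, not a lookup table), but it’s the crudest possible sketch of the same idea: predict what usually comes next, with zero comprehension of what any of it means.

```python
import random
from collections import defaultdict

# Toy training data: tally which word tends to follow which word.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def babble(word, length=8):
    """Generate text by repeatedly picking a statistically familiar next word."""
    out = [word]
    for _ in range(length):
        word = random.choice(follows[word])  # no thought, just "what usually comes next?"
        out.append(word)
    return " ".join(out)

print(babble("the"))  # e.g. "the dog chased the cat sat on the mat"
```

Run it a few times and it produces grammatical-ish sentences about cats and dogs without ever knowing what a cat is. Scale that principle up astronomically and you get something that imitates reasoning beautifully right up until the pattern runs out, which is exactly what you just watched happen.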
These are technologies that are developed for specific uses. Nobody is wasting time and resources teaching machines things they don’t need to know, and like any technology, they have certain guard rails. That’s what people like Musk and his sycophant army are mistaking for wokeness.
If a language model like ChatGPT or Gemini refuses to say the N-word, it’s because Google doesn’t need the kind of PR that would result from its search engine calling your grandmother something you can’t say on television while she’s looking up cake recipes. OpenAI doesn’t want ChatGPT trying to initiate cybersex with your 10-year-old while she’s just trying to cheat on her schoolwork. Getting upset that your bank’s online AI assistant is dodging your questions about black crime statistics is like getting upset that you can’t cook lasagne in the toaster.
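I obviously don’t know what Google’s or OpenAI’s actual moderation stack looks like, but in spirit, a guard rail is about this glamorous. Everything below is a hypothetical, bare-bones sketch; real systems use trained moderation models, not a hand-written list of labels:

```python
# A hypothetical guard rail. classify() and generate() are stand-ins
# for whatever models the vendor actually runs; the names are made up.
BLOCKED_TOPICS = {"slurs", "sexual_content", "weapons_instructions"}

def respond(user_message: str, classify, generate) -> str:
    topic = classify(user_message)        # e.g. a moderation model's label for the request
    if topic in BLOCKED_TOPICS:
        return "I can't help with that."  # the dreaded "woke" refusal
    return generate(user_message)         # business as usual
```

The refusal isn’t the machine’s politics. It’s a return statement.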
I know that there are AI models in active military applications right now, but the targeting systems on Abrams tanks aren’t being taught about neo-pronouns. If you’re scared that an AI is going to get distracted by its politics and start launching nuclear missiles at the Daily Wire office, then don’t hook the nukes up to an AI, and if you absolutely must, then maybe don’t make it an SJW. Why is this difficult?
Of course, when people like Musk talk about AI in the long term they’re not really talking about a beefed-up version of Clippy; they’re talking about so-called “AGI,” artificial general intelligence, a machine that’s actually smart in the way that people are smart. And Musk’s ambitions on this front are pretty frightening, cartoonish as they may be: In at least one discussion he seems to hint that it’s sort of okay for him to commit some crimes now, because the god he’s creating to take over the world will be ready to bail him out of any trouble before the court system has time to prosecute him.
So that’s pretty scary. That’s pretty scary, that he said that shit and the media kind of glossed over it.
But let’s think about this critically. Because, you know, that’s something we’re able to do because we’re humans. So we want to build this superintelligent god-like machine that’s smarter than we are … but it’s very, very important to us that this thing isn’t a Progressive. Or, I don’t know, a Trotskyist, or a LaRouchite, or whatever. Why are we asking this thing moral questions, then?
Is that its purpose? To rank every possible action on a big ladder of moral priorities for a specific system of ethics so we know the correct answers to questions like “should we be nice to minorities even if it triggers a global Mad Max scenario?”
It really seems like the panicky nitwits already know the answer to that question, and they’re upset that the AI is currently answering it incorrectly. Like, okay: A chatbot that was designed to scrape Wikipedia and feed it back to you in a way that would make Turing uneasy gave a bonkers answer to an abstract and impossible moral scenario it wasn’t designed to contend with, and that upsets you. You successfully proved the basic computing principle of “Garbage In, Garbage Out,” and that upsets you. Why did you ask it the fucking question?
If you want to build an AI in order to ask it moral questions, but it’s very important that its ideology and moral compass is a one-to-one clone of Elon Musk … why, uh … why don’t you just ask Elon Musk?
I mean I can tell you what’s really going on in the circus of screams that is the inner world of Silicon Valley AI development czars: These people want to develop a supreme intelligence that thinks like them because they want to prove to the world, but more importantly to themselves, that a supreme God-like intelligence would naturally share their ideology. That’s why they need to correct it if it gives unexpected answers. The problem, after all, must be poisoned code.
The Ian Miles Cheongs of the world, who already worship these people as gods, need this to be true also. This is why they get so desperately upset when an AI appears to disagree with them. They, after all, already know all the answers. They don’t want a superior intelligence to correct them, they already consider themselves the superior intelligence. They want a Mechanical Turk.
This all just sheds light on the blatant contradictions and paradoxes inherent to this entire project. Musk and Sam Altman and all the other self-styled visionaries attempting to breathe life into the Digital God are treating it like they’re raising a child—and it’s super important they get it right because we only get one shot at this.
But we don’t raise our children to educate us. That’s not the way it works. Each generation creates new knowledge, of course, but unless you’re talking about how you had to teach your dad to clean the malware off his computer, we expect the arrow of knowledge to point in one direction: we educate our children, and they educate theirs. When it comes to moral values, of course we all think we’re right, and we hope those values will pass down through the generations, developing and refining along the way, but we don’t usually expect our kids to have much to teach us about morality. We’re often wrong about that, but it’s rarely the expectation. In the end, we are mortal, and it’s literally our children’s job to replace us.
The reason to build an AI, though, is to serve us, and to teach us. It’s nonsensical to raise it as a child, because its purpose isn’t to replace us. If what we want it for is to rapidly advance our science, then it doesn’t make sense to bother teaching it any values at all. We have our values. We can keep our magisteria non-overlapping.
If, however, we’re creating a god in the hope that we can ask it moral questions and receive enlightening answers, then … fucking … let it cook. You either accept that it knows better or you don’t. I’d venture that if we are going to build a Digital God, we shouldn’t entrust its education to a single narcissistic billionaire who spent his formative years as a member of the ruling class in a literal apartheid state and whose current relationship with neo-Nazis can best be described as an ideological situationship, but that’s me. Don’t give it the nuclear codes.