AI Might Collapse Technology Before It Can Improve It
We're quickly reaching a point where we can't trust anything we see or hear. What happens when we get there?
In 1948, George Orwell (born Eric Arthur Blair) wrote a political satire about a dystopian future, and to show that it was the future he swapped the last two digits of the year around.
It’s a really good book. It’s the best book I’ve ever wanted to never hear about ever again. Every time someone mentions it I want to slap the fuck out of them. Read a second god damn book. Shit, even actually read that book. “1984 wasn’t supposed to be an instruction manual! Hur hur hur.” Literally leave. Go to the ocean, wade out until you can’t see land. Keep going until even memories of yourself are dead. Unexist.
But yeah, it was a good book until conservatives ruined it. There’s one line from that book that I can’t avoid mentioning here:
The party told you to reject the evidence of your eyes and ears. It was their final, most essential command.
Orwell came up with that idea because it was so bizarre, so alien, to conceive of a populace so brainwashed that they would stop trusting their own senses. After all, this is the most fundamental thing about the human experience. When Descartes proposed that you can’t necessarily trust the truth of your own eyes, it broke philosophy. But outside the realm of thought experiment, hallucination, or delusion, it remains largely true that the most vital tool we have for getting to the truth is to see it with our own eyes.
Imagine that, overnight, this is no longer true. How are we actually going to endure the societal shift that’s coming when AI software can almost perfectly mimic human video and audio?
History, should we survive to reminisce, is going to look back on our decision to rush cheap, sophisticated, easy-to-use, publicly available generative AI to market as one of our greatest blunders. We just shipped it before we even knew what it was, and certainly before we’d started preparing people for the radical changes they would have to make to their whole understanding of the world—or even thinking about how we would do that.
The so-called “accelerationists” currently driving these technologies are aggressive. They feel (or claim they do) that the advances this will bring to the world vastly outweigh any collateral damage. But it’s no secret by now that my patience with these self-appointed kings and technocrats is extremely short. From the hilarious disaster of NFTs to the carnival of incompetent missteps that we call social media, what exactly are we expecting by now?
What is the evidence that the accelerationists are accelerating us anywhere but directly into a cliff face?
“Deep fakes” have been a thing for a while, but until just a couple of years ago they were largely regarded as some sort of deep web myth, like snuff films. Digitally altered photographs have been around for half a century, but until 2022 or so we were still calling them “Photoshops,” and they were done purposefully, by hand.
We are still at the point where we can joke about this stuff. This time last year when an AI picture of the Pope wearing a puffy Balenciaga jacket went viral, people were initially fooled but then actually kind of delighted? Look at all the cool and funny stuff we’re going to be able to do with this technology! We can give the Pope some mad drip!
But we have already begun to move too fast into a world where we’re not sure how truthful the truth of our eyes is. And I don’t just mean we’re not sure at first glance. We’ve entered the era of needing to count the fingers and toes in every photograph that triggers even a little emotional response in us. Just to be sure, you know. Because that’s the thing about AI: it doesn’t do hands and fingers very well.
This isn’t important, perhaps, when upselling a pontiff’s fashion sense. But it matters when you’re dealing in human tragedy.
It’s only been a couple of years, an actual blip in our history or even our modern history, since this new menace entered the information war: the new tyranny of having to question not only what we’re told but what we are literally shown in photos and video. One of the most infuriating and heartbreaking ordeals for truth-seekers outside the immediate Israel/Gaza conflict, after the October 7 attack and Israel’s retaliation, was that Twitter was immediately swamped to the point of absolute uselessness with fake images and footage. Real photos weren’t tragic enough if the subjects weren’t posed like a National Geographic cover.
And then, more recently, Twitter was hit by waves of sexually explicit Taylor Swift images—an incident that allegedly provoked Elon Musk to do the one thing he swore never to do and consider moderating content. Pretty telling that a barrage of misinformation about a catastrophic war couldn’t move that needle, but the hypothetical threat of litigation from a single pop star did.
That latter incident is already a portent of one of the things generative AI will be used for most: manipulating and policing women’s bodies. So, you know, get ready for that. The weird crypto bro reactionaries who make up way too much of the ecosystem developing these technologies are in a constant state of flux about whether women in general are wearing too much or too little—and either case presents a problem for machine automation to solve. Weird nerds who would misread The Handmaid’s Tale as a utopian novel, if they read books at all, think generative AI can be used to brainwash women into sexlessness, and so they’ve created something they call “dignifAI,” a social media bot that takes pictures of women, clothes them more conservatively, gives them a bunch of kids, and (though I believe this is an unintended side effect of an inferior program) adds non-Euclidean geometries to their limbs.
Of course its spokesman is Elon Musk associate Ian Miles Cheong, a man who knows all about sexlessness given that he’s already such a virgin that he makes other men into virgins through osmosis alone.
In Australia, the Nine Network came under fire for running a banner image for a news story featuring MP Georgie Purcell, whose photo they had put through an AI that enhanced her boobs and turned her one-piece dress into a top and skirt that bared her midriff.
It was weird enough that they did this to the news—you’re not supposed to want to fuck the news—but when pressed about it the nitwits actually blamed the AI.
So you see the problem emerging here. The unaccountable scumbag factor. We developed this advanced pattern-recognition software, this thing that can mimic, alter, and counterfeit reality like something out of John Carpenter’s nightmare, and the first thing we decided to do with it was hand it to a bunch of shitheads and say hey, go nuts. But considering how many Silicon Valley leaders are shitheads themselves, it’s probably more like feeding the litter.
There is surely some kind of rule we’ve identified, some cousin of Murphy’s law, stating that any new technology released to the masses will immediately be used to scam people. Then it becomes a battleground to try to find ways to stop that from happening. Or, at the very least, just to stop the technology from being rendered useless by it. And when I talk about scamming people I’m talking about the self-serving grassroots confidence games that have been scaled up and automated by tech the same as any other industry, but also about the high-level geopolitical manipulation that seeks to control whole populations—which is the same self-serving motive writ much larger.
The telephone is a technology that fell, ultimately, to the overwhelming force of assfuckery. Though we still stubbornly call those things we carry in our pockets “phones,” this is a feature that hardly anyone actually uses them for anymore. I know that’s a running joke but it’s rooted firmly in reality. Nobody answers the phone anymore, and one of the primary reasons is that the overwhelming majority of all phone calls are now some kind of unsolicited nuisance or scam. As soon as someone invented a piece of software that could send out a pre-recorded threat to every phone number, it was over for the telephone.
The internet has been through this shit, and is going through it perpetually. Before “fake news” came to mean, thanks to the term’s deliberate misuse by a former US president, anything in the media that its subjects disagree with or would rather not see published, there was an actual fake news industry. That is, material intended to be mistaken for real news but, uh, isn’t. I used to work as an editor for an entertainment website that made every effort to present factual information, so fact-checking was a big part of my job—but it became noticeably more difficult, particularly in the site’s declining years, as more and more disinformation bubbled out of the murky depths of the internet wearing a mask of credibility like a serial killer wearing Wolf Blitzer’s face.
Sometimes these were “satire” news sites similar to The Onion—except instead of obvious comedy it was almost-plausible lies, usually with a well-hidden “satire” disclaimer to ward off accusations of fraudulence.
There were a number of reasons why people would do this kind of thing. Often they weren’t very good writers or comedians, and so their best chance at turning the gig into clicks was to subtly pretend to be real news. This is what I think of as the Uri Geller tactic—a shit magician who scrapes together his meager talents to become famous by quietly implying that he’s performing actual magic.
But sometimes the motives were much more insidious. Particularly in the lead-up to the 2016 US election, the flood of fake news churned out by foreign troll farms trying to push American voters in their nation-state’s preferred direction made the information landscape genuinely difficult to navigate.
It is very easy to fake a tweet. You can do it in Microsoft Paint. In seconds you can mock up a lie that can rocket around the world faster than the truth can catch up with it. Snopes has a whole section for them. Too many of us still think that a screenshot from social media is worth anything at all as a primary source.
Sure, that’s just tweets. Few of us are so gullible. But think about what it would take for you to believe that some public figure actually made some statement. What’s the minimum you would accept? Do you have to be in the room with President Biden to believe he actually said something, or are you currently accepting video and audio recordings of his statements as factual evidence?
Leading up to this year’s primaries, Joe Biden started calling people on the phone and telling them not to vote. Obviously you know where this story is going, no matter how bad you think his age-related gaffes are getting—it wasn’t him. AI software had been used to mimic his voice and his speech perfectly, or near enough.
This isn’t even the election proper—this is what we’re looking down the barrel of this year. And not just Americans. 2024 is an unprecedented worldwide political event, the electoral equivalent of an 8-body planetary alignment. Nearly half the people on planet Earth are going to their country’s polling booth this year. And this is the first election year of anywhere near this magnitude in which we can affordably make a computer call thousands of people in an electorate and insult their mother in their presidential candidate’s own voice.
Okay, voices can be faked, big whoop. I can do a passable Christopher Walken myself, so what? Well, what if your boss called you up on Zoom, with the camera on, and asked you to move a large sum of money? (We are assuming in this scenario that your ordinary job involves moving sums like that, and that this isn’t your supermarket night manager asking you to wire him 57 thousand dollars.) That exact thing also happened this year: scammers swindled a company out of $25 million, face to face, using AI-manipulated footage of its CFO.
Recently I saw a YouTube pre-roll ad featuring a deepfaked Elon Musk trying to sell some bullshit, and honestly the only way I could tell was how eloquent the fake Elon was. There was no stammering or pausing, no awkward jokes or tangents, no blaming the Jews for anything. But it really struck me as terrifying that my only point of reference for determining it was fake was knowing what a shit speaker he is.
I mentioned it on social media, and people asked a good question in response: since video and audio technology is still brand new on the scale of human history, and we made it this far using only the written word to get our news, what does it matter if we can’t trust these new technologies now? I think the answer is that this is the first time we’ve had to rapidly abandon our trust in a technology. Like, in a real hurry.
I don’t know how we’re going to do that without causing chaos. How quickly can we adapt to entire genres of evidence becoming inadmissible? We’d better figure something out, because soon enough the people calling your dad asking for money are going to look and sound just like you.
If there’s a good path through the woods here, I think it’s going to require a lot of social technology.
Our personal habits, beliefs, and cultural understanding of the web and how we interact with it will need to change. (fwiw, I’m cautiously optimistic. My friend’s teenage daughter already doesn’t believe anything online is necessarily real, and she practices things like phone hygiene. So, in the same way we’ve adapted to the pitfalls of limitless calories, maybe there are some cultural adaptations we’ll develop here.)
The incentive structure of web2 platforms is also a big part of this. Ad-based revenue models led to products designed to aggregate attention, which led to social media as an exercise in amassing the largest following possible. I’m not sure what kind of institutional changes we’d need to head off the chain of bad incentives → dark patterns → crap timeline, but I currently believe there’s a lever there.
More pessimistically, there might be institutional changes to be made that take our broken brains and flooded zones as a given. I’m similarly unsure how we might go about rewiring democratic procedures for the world we’ve wrought, but that might end up being another point of intervention.