A moment of uncanny valley befuddlement
Read to the end for Gupitaro
The AI Audio Deepfakes Aren’t As Funny Anymore, I Guess
I essentially organize internet users, regardless of politics, into two camps. One group is the people who just don’t think much about the garbage they’re bombarded with both online and — because real life is now downstream of the internet — offline. Whether that’s due to a lack of education, interest, or free time, there are just a whole lot of people using the web around the world who don’t have the luxury of reflecting on why a piece of content is making them feel a certain way. They see stuff, they react, they share it, they move on.
The other group is the hall monitors. The folks who agonize over what’s online every day. I’m one of those people. Though most of us who “work online” love to martyr ourselves and endlessly complain about how “broken our brains are” or whatever, make no mistake, it’s a privilege.
And, for many years, I’ve heard the argument that increased “media literacy” would shrink that first group, which is much, much bigger than the second. And, perhaps it can and has, to some small degree. But I’m also not convinced that you can teach that sort of thing away at a rate that makes any real difference. I also think that many of the liberal-leaning academics, in particular, who bang on about this don’t like to acknowledge that oftentimes better media literacy just creates better propagandists. Which, thanks to social media, we all are now.
Making things more confusing, the internet isn’t a completely serious place. People from all kinds of backgrounds and cultures use it to goof off all day just as often as they use it to do things that are important. And they don’t really think or care about what their posts mean in aggregate. Nor should they, I think. We’re all just having fun online. And a lot of journalists, especially, like to moan about people sharing hoaxes and other goofy stuff during breaking news events, but, I mean, journalists are the ones trying to run a serious editorial operation on a cartoon bird site.
All of this leads me to my fairly complicated feelings about the fake AI Joe Biden clip. It was shared by a group of right-wing influencers and it shows Biden declaring he’s reinstating the draft. I can’t say I fell for it exactly when I first saw it, but it did confuse me quite a bit at first. And it was that moment of uncanny valley befuddlement that freaked me out a little bit.
If you go through the quote tweets on the Biden draft deepfake, I’d say 99% of the users sharing it are very aware it’s fake or, more crucially, think that it’s something that just hasn’t happened yet. Which is, in my experience, how most people engage with misinformation and disinformation.
The narrative for the last decade was that the internet is radicalizing you. I’ve read — and written — countless stories arguing that idea. And maybe it was true at a certain point or still is true at the very top of the funnel. But I think most people are online enough now to actively seek out content — real or fake — that makes them feel good (even if “feeling good” for them means feeling very bad and angry), entertains them, and reaffirms their worldview.
Fact-checkers, especially the now-very-tired millennials that spent the 2010s in the content mines waging digital warfare against cartoon frogs and anime nazis for $60,000-a-year salaries and all the free seltzer they could drink, still think of “misinformation” and “disinformation” as the content equivalent of toxic runoff polluting a river. There’s an unspoken assumption that a journalist can hold up a glass of glowing green water and embarrass a company or political institution enough to make them clean it up. There’s an assumption there’s some kind of clean, healthy river underneath, if only we could get to it. But I wonder more and more if that was ever true. If it was, I’m sure it’s not anymore.
Nowadays, if someone tells you that Democrats eat children and shows you an obvious deepfake from a QAnon Telegram group of Joe Biden talking about harvesting adrenochrome, that’s not them falling for misinformation or disinformation. That’s just them telling you what they believe. And you pulling up a digital forensics report that shows that the video is fake isn’t going to make them stop believing that. Sure, that video might be fake, but what it’s depicting is assuredly real… to them.
So, as you can see, I’ve really walked myself into a corner here because I actually really like the Biden deepfake video memes going around. Well, up until this one. I’ve shared a ton of them in this newsletter. I even came across another this week that feels microtargeted to my exact sense of humor. And for the most part, these videos are basically just slicker versions of JibJab clips and ultimately harmless. I also think deepfaking public figures is somewhat more responsible than deepfaking private citizens if only because it’s easier to refute.
But I also spent most of the 2010s covering how algorithmic hoaxes exactly like this caused political havoc around the globe, which means I can pretty confidently predict where we’re heading. WhatsApp’d AI deepfakes causing riots, ChatGPT-driven QAnon-style conspiracy cults, competing fascist large language models (read further down for more on that), and a whole new dimension of automated astroturfing — it’s all coming, if not already here. And I doubt Americans will ever get decent regulation for dealing with all of these challenges. We haven’t even figured out how to regulate the old problems yet, let alone fully understand the new ones. Though, maybe Europeans will get some protections soon for this kind of thing and those will trickle down to us in the States, USB-C and GDPR style. I know China already has them.
Of course, maybe worrying about all of this is just an exercise in futility. At the top, I described myself as one of the hall monitors, one of the people who agonize over what’s online every day. What I didn’t say is that those people, for all their resources and influence and privilege, don’t ever really decide what happens online. Not really. It’s that first group, the folks just living their lives and using new tools in ways that make sense to them, that really decides where we end up. Deepfakes and AI audio and video will find ways of fitting into our lives — or won’t — and that’ll be that. It won’t be all good, but it probably won’t be all bad either. And then we’ll do it all again in 15 years when the next thing arrives.
Think About Subscribing To Garbage Day
It’s $5 a month or $45 a year and it helps keep this newsletter chugging along. Paying readers get a bonus weekend edition and Discord access. What a deal! Hit the green button below to find out more.
I’ll Be At ETHDenver This Week!
I’ll be at ETHDenver until next Monday. Shoot me a message if you’re in town! In fact, I’m on my way there right now and trying to send this out on maybe the worst inflight Wi-Fi I’ve ever used. I also had a flight get delayed five hours and briefly had to assist with an inflight medical emergency. It’s been a wild day! So I apologize for any jankiness. Anyways, I would love to meet up and talk about the best ways to keep your doggos and rumble kongs safe while playing Dookey Dash.
Twitter Users React To TikTok Videos The Same Way Fox News Viewers React To The News
I’ve had a few readers message me something similar to this idea and I think I’m finally ready to take it out for a spin.
Now, to be clear, Twitter is not the first social network or internet community to enter a period of deep decline. It happens all the time and, also, it tends to play out exactly like this. The community gets angrier, more bitter, and, oftentimes, becomes fixated on another website. 4chan, Reddit, and Tumblr have all gone through this in their own way. And it’s not always terminal. Reddit and Tumblr seem to have come through the other side.
That said, the difference with Twitter has always been the irl value of its users. It’s why Elon Musk wanted to buy it in the first place. And that has meant the site’s slow decline has felt a little more cable news-y. And now that every other tweet is viral garbage or weird ads, it really feels like Fox News on there.
But its users are still from across a broad political spectrum, so the whole app can’t just radicalize like Fox has. Instead, Twitter has sort of become one big collective crank, fixated mostly on TikTok users, but really on anyone or anything anywhere.
I suppose the best way to describe this would be “the boomerfication of a social network”. Which is when an influx of evergreen content and collective crankery scares off young people from joining the site.
Anyways, I do think we need to actually be meaner to the meat and butter guy. Of all the TikTok weirdos that have broken through their containment unit and taken over my Twitter timeline for days at a time, I think he’s my least favorite.
A Quick Update On My Twitter Algorithm Experiment
My own thread about my Twitter algorithm experiment didn’t go super viral but it got some attention and many people reported seeing it in their For You pages. Even better, I had several folks tell me they tried my little hack out for themselves and said it worked like a charm. So I definitely think we’re close to cracking the algorithm.
But there’s one thing that people might not have really understood. The algorithm seems to be counting replies, or at least over-promoting tweets with a lot of replies. This may explain why big accounts from journalists and right-wingers aren’t getting a lot of pickup anymore. Because people typically retweet those accounts. Whereas leftists or unhinged stan armies, you know, spend all day fighting with each other, so they’re probably fine.
In terms of getting enough replies for it to matter, I think you really need to puff out your chest and say something loud enough that people come tell you you’re wrong. Or, better yet, say something about someone else’s tweet that sends thousands of users to go dunk on them.
I’m not advocating for this! I’m just saying that that appears to be how the site works now.
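If it helps to picture the hypothesis, here’s a toy Python sketch of reply-weighted ranking. Every number, weight, and tweet here is invented for illustration — this is just the shape of the idea, not Twitter’s actual code.

```python
# Toy model of the speculated For You ranking: the guess is that the feed
# weights replies much more heavily than retweets or likes.
# All weights are made up; nothing here reflects Twitter's real system.

def rank_tweets(tweets, reply_weight=3.0, retweet_weight=1.0, like_weight=0.5):
    """Return tweets sorted by an invented engagement score, highest first."""
    def score(t):
        return (reply_weight * t["replies"]
                + retweet_weight * t["retweets"]
                + like_weight * t["likes"])
    return sorted(tweets, key=score, reverse=True)

tweets = [
    # A widely retweeted post that gets few replies.
    {"id": "journalist", "replies": 40, "retweets": 900, "likes": 2000},
    # Reply bait: a tweet people pile into the comments of.
    {"id": "stan_army_fight", "replies": 1200, "retweets": 50, "likes": 300},
]

# Under a reply-heavy weighting, the reply-bait tweet outranks the
# widely retweeted one, which matches what I've been seeing.
ranked = rank_tweets(tweets)
```

The point of the sketch is just that once replies dominate the score, the feed rewards starting fights, not being shared.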
The Dilbert Guy Goes Full Mask Off
Here’s a good rundown of everything that’s happened so far in the Scott Adams cancelation news cycle if you’ve totally missed this. I don’t have a ton to add here except that I’m interested to see if the right wing adopts Dilbert as some kind of social cause. Though, I haven’t seen any real rumblings of that yet.
I will say that during a phone interview years ago, Adams tried to hypnotize me. Which was weird. He literally started off the interview with, “a fair warning: I’m a trained hypnotist.”
Meta Isn’t Launching A Chatbot, It’s Launching “Personas”
On Monday, I wrote about Meta’s forthcoming consumer AI product, which is supposed to roll out soon. Both Axios and The Verge have noticed that Zuckerberg is referring to what are presumably chatbots as “personas”. Which is funny and very Meta. Further down the announcement, Zuckerberg talks about how Meta’s AI offerings will power “multi-modal experiences” and “futuristic experiences,” which is, once again, also very Meta.
So far, when talking strictly about chatbots, there seem to be a few consistent things people use them for: productivity and convenience, creative experimentation, and to have sex with them. I assume Meta won’t let you have sex with their personas, so that leaves the other two. And, even though Meta has tried to enter the world of office suites, like with their VR-powered Horizon Workrooms app, I don’t see people opening one of their apps to work unless they, you know, work on one of their apps. Which really leaves us with “creative experimentation,” but I have serious doubts about that, as well. Most features Meta rolls out to users become commodified and run into the ground almost immediately.
But I also think that Meta is making a larger mistake by assuming AI can easily slot into their existing ecosystem. And they’re not the only ones thinking this. I’ve seen lots of people assume that users are going to want to share AI text or AI-generated images on their social platforms. But the absolutely microscopic blip of interest around AI avatars, which swept through social platforms and vanished almost entirely overnight a few months back, should tell you that that’s probably not the way to go.
Viral media is/was all about identity. You share a headline or a quiz result or a funny video because somewhere in there it reflects something about you. This is especially true for Meta’s platforms like Facebook and Instagram, which are still, ostensibly, places we share updates about our lives. But I’m not so sure the way we interface with AI will be as tied to our identity. At least, our public identities, I suppose.
After Failing To Do Anything Meaningful With Twitter, Musk Plans To Fail At Doing Anything Meaningful with AI Next
It’s almost too tedious to even include, but we can all agree this isn’t going to ever happen, right? At least, Musk is never going to do it. That said, as I wrote over the weekend, Gab CEO Andrew Torba also says he’s trying to build a right-wing Christian AI. Well, in the immortal words of Panic! At The Disco (the original lineup), build God, then we’ll talk.
New AI Twitch Stream Dropped
It’s called UnlimitedStream and it’s the Steamed Hams Simpsons short but recreated over and over and over again by an AI. It’s running on GPT-3, Tortoise text-to-speech, Unreal Engine, the Natural Language Toolkit, and Python. Here’s a good and very NSFW clip from the stream.
This Is Probably The Best Look At How Generative AI Works I’ve Ever Seen
First, let’s get this out of the way now. This Corridor Crew video is controversial and it also doesn’t look like “anime” and I don’t think it changed “animation forever,” as the title claims. But, if you’re looking for a really clear demonstration of how generative-AI tools like Stable Diffusion work, this is probably the best I’ve seen.
Also, unrelated to the AI stuff in the video, I thought the section around the 15-minute mark where they stage a virtual environment in Unreal Engine and shoot it like a physical location was just very neat.
They said that the whole project took a team of around five people about two months to complete and, yeah, for AI art and for a YouTube short, it looks pretty good tbh. As in, it looks like passable crap that can be iterated at scale endlessly. But the reason the video is so controversial is because the “anime” effect they used was created by scanning frames of the anime Vampire Hunter D (which looks much better than what Corridor Crew came up with). Stable Diffusion then tries its best to render each frame of the live-action footage they shot in the style it learned from the Vampire Hunter D stills.
Here’s my question. If you’re going to spend months making a video like this, pay a team to make it, and even hire a guy to do the music, couldn’t you achieve the same effect with an original artist? Like unless I’m missing something, couldn’t you hire a human artist to draw keyframes and use those to style your rotoscoped footage? Also, there’s a whole issue where the AI can’t render a beard because no one in the source anime had a beard. So they had to go and make new images to teach the AI what a beard looks like. Literally all of this would probably have been faster with a human artist working with them.
Some Stray Links
“Instagram users are being served gory videos of killing and torture”
P.S. here’s Gupitaro.
***Any typos in this email are on purpose actually***