Everyone stop being ridiculous for like five minutes

Read to the end for Frank Reynolds in "The Last Of Us"

Some Complicated Thoughts On AI And Moderation

For the last several weeks, over in the Garbage Day weekend edition, I’ve been slowly working through my thinking about how we should moderate generative AI. I’ve been trying to silo my writing about AI over there because I’ve heard from a few readers that “it’s a big drag” and that reading about AI makes them “want to die” (I’m paraphrasing).

But seeing as how YouTube, this week, released a fairly big update regarding synthetic content, or content produced by generative AI, I suppose it’s time to dive back in. Also, I managed to put a lot of what I’m about to write out here in a YouTube video, so if you’d prefer to watch my hideous visage say all of this, rather than read it, here you go.

Alright, let’s spin through where we’re at right now in the world of generative AI, shall we?

Almost every big AI model is full of stolen data and organizations like Andreessen Horowitz aren’t even pretending to care anymore. There’s a bevy of lawsuits, led by authors like George R. R. Martin, that might change that, but I’m not holding my breath. My hunch is that we’re just speedrunning the Napster era and will end up with an AI-protected class of creators and institutions and a wilderness of AI-powered piracy beyond that. In fact, seeing as how I can’t currently generate a picture of Mickey Mouse dabbing in Midjourney, we’re probably already there. There’s also President Biden’s new AI executive order, which contains a lot of nice ideas about how to make AI tools safer, but I’m waiting to see what actually materializes from it.

Meanwhile, major online platforms are struggling to figure out what, if anything, they should do about the spread of synthetic content. Most recently, YouTube launched a new policy requiring that creators “disclose when they've created altered or synthetic content that is realistic, including using AI tools.” Which is just as confusing as every other platform’s stance on AI content. Last month, Facebook launched a generative-AI tool for advertisers and then, this month, banned political advertisers from using AI-generated content. Amazon has a generative-AI ad tool and seems to want everyone to use it. TikTok has only specifically banned deepfakes of “non-public figures” and unsanctioned AI-generated endorsements featuring public figures. And Spotify says they aren’t even considering banning AI-generated music.

Generative AI is also a privacy nightmare. Which is getting increasingly hard to ignore considering the early integrations with AI we’re currently seeing are happening inside of apps like Photoshop or Microsoft Office. Beyond that, basic plans for ChatGPT aren’t encrypted and user inputs have already leaked once. According to a September terms of service update, it appears Adobe has some sort of ability to view — and block or throttle — inputs for its generative-AI fill tool. And while Microsoft has the most complete AI privacy policy, it’s based on a system it claims ensures that “data will not unintentionally leak between users and groups.” So you can decide on how much you trust that.

There are also the regular waves of stories about generative AI being used by bad actors. Last month, after the release of DALL-E 3, there were a whole bunch of stories about how the AI was pretty good at creating pictures of Kirby doing 9/11. And soon after, there was another news cycle about 4chan users figuring out how to make racist Pixar movie posters. Except, all of those stories could have been written — and actually were — about Photoshop on Reddit 15 years ago. That’s because all of this, everything from Biden’s AI executive order to the new mealy-mouthed platform policies to the endless stories asking us to pretend to be scandalized over pictures of Kirby flying United 93, is based on the same incorrect assumption that generative AI is unique in any way.

There’s an old journalism joke that reporters cover every new election according to the rules of the previous one. But I think the tech press does the same thing. Which explains why most of the stories you read about AI right now use the same whack-a-mole content cop strategy most news outlets and research groups spent the 2010s using to cover platforms like Facebook or Twitter. Now they’re breathlessly writing up every instance of an AI producing A Forbidden Image. And what’s worse is that this attitude helps tech companies continue to undermine labor and consolidate lobbying power, allows politicians to keep dragging their feet on writing real legislation for the internet, and provides fantastic cover for online platforms that still don’t know how to moderate themselves. I have yet to see anything produced by generative AI you couldn’t do with Photoshop or After Effects or, like, Wikipedia. And if everyone stopped being ridiculous for five minutes, we’d all realize that this tech hasn’t introduced a single new problem. We still just have the same old ones we refuse to deal with!

And so, my big hot AI take here is that there’s actually nothing new to moderate. I mean, my god, OpenAI is literally using the same Africa-based third-party moderation contractors that Meta and Google use. It’s all just the same stuff with a new Sci-Fi coat of paint. And I think we can all agree that these companies are never going to solve any of the problems they create themselves, especially now that they have a shiny new fad to rally around. So, yeah, I think it’s time to regulate AI companies, but it’s also probably time to regulate everything else too.

The Last Main Character

—by Adam Bumas

This was shared in the Garbage Day Discord by user mgoldstein and most folks in the chat were pretty shocked that it only happened a year ago. The “chili neighbor” discourse, if you don’t remember it, went like this: A Twitter user named Chinchillazilla made chili for the college students who lived next door to her. Once she posted about it, it morphed into a bizarre litmus test for people’s opinions on everything from food to government to childcare. Here’s a good Washington Post piece about it. (It was so out of pocket that Ryan didn’t even cover it at the time.)

The chili neighbor debacle started 18 days after Elon Musk became CEO of Twitter and, at the time, was used as proof that the site was still churning out main characters, regardless of the new ownership. So what’s changed since then?

I would argue that the main difference is that, very simply, things on Twitter, now X, don’t feel quite as important anymore. Even though most people and organizations still have a presence on the site, there’s so much misinformation and spam that popular posts don’t become major news stories in the same way anymore. A year ago, every tweet you were seeing on your All Of The News App felt like it had the chance of doing big enough numbers to become the number one story in America.

So is there any cultural legacy for all these discourses? Do they even add up to anything in retrospect? Well, I tend to agree with writer Rebecca Jennings, who wrote about the chili neighbor a month after it happened, saying “our thirst for drama is really a thirst for punishment”. Twitter collapsing hasn’t changed that, though. In fact, posting is actually treated as more and more meaningful with time, and posting something bad becomes a correspondingly worse sin.

And you can see it now in the most extreme responses to the conflict in Gaza. People are treating their takes about it with the same importance as the actual war, because it’s coming to them the same way.

Want to check out the Garbage Day Discord? Hit the green button below to find out more!

If You Know What All Of These Words Mean, Congrats, It’s Time For You To Go Spend Some Time Outside!

I Think Meta Wants Threads To Be The Gmail Of ActivityPub

Back in September, Meta published a big long blog post about building Threads. And about halfway down, there’s a section titled, “The future is decentralized,” which has a really interesting nugget that was recently shared by journalist Taylor Lorenz:

Our goal with Threads is to make social content as interoperable as email. We are working on the ability for Threads to integrate with ActivityPub, the open, decentralized social networking protocol. Once that happens people will be able to enjoy the best features of Threads across platforms.

It’s a big promise tbh, especially coming from a company like Meta. It’s hard to name a company that has done more for the, uh, un-interoperability of the web than Meta. But I actually thought the best take on why Meta would be so interested in this came from my buddy Alex Petros, a great developer I’ve worked with a bunch.

He wrote on Threads, “Most people don't realize this because it happens behind the scenes of Gmail, but to send email now you have to pay this shadowy cabal of ‘verified’ email middlemen to get past the recipient's spam filter. Even if you trust Meta's good intentions, the scenario they're describing is analogous.”

As Petros sees it, if Meta can make itself the Gmail of ActivityPub, they suddenly have a brand new arena they get to be in charge of. Of course, all of this is based on the still extremely fuzzy idea that people actually have a want or need for a feed-based microblogging service the way they do email (I don’t think they do).
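To make Petros’s analogy a little more concrete, here’s a rough sketch of the paperwork email senders already have to file just to reliably land in a Gmail inbox. The domain and values below are made up, but the SPF, DKIM, and DMARC record formats are the real ones you have to publish — usually by way of a paid sending service — before the big inbox gatekeepers will accept your mail.

```python
# Hypothetical DNS TXT records for an imaginary newsletter domain.
# The values are illustrative; the formats are the standard SPF / DKIM / DMARC ones.
EMAIL_AUTH_RECORDS = {
    # SPF: which servers are allowed to send mail for this domain — in practice,
    # a third-party sending service you pay for the privilege.
    "example-newsletter.com": "v=spf1 include:_spf.example-esp.com ~all",

    # DKIM: a public key so receivers can verify your provider signed the message.
    "esp1._domainkey.example-newsletter.com": "v=DKIM1; k=rsa; p=MIIBIjANBgkq...",

    # DMARC: tells receivers what to do with mail that fails those checks.
    "_dmarc.example-newsletter.com": "v=DMARC1; p=quarantine; rua=mailto:reports@example-newsletter.com",
}

for host, record in EMAIL_AUTH_RECORDS.items():
    print(f"{host}  TXT  {record}")
```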

Horse AI

An X user named @m1das_ow2 recently shared screenshots showing how a Discord user named Reecepedia has spent the last eight months training ChatGPT to be a horse. The AI can only neigh and whinny and write basic actions that horses can do, and if it doesn’t, Reecepedia yells at it.
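I have no idea how Reecepedia actually set this up, but if you wanted to approximate it yourself, a system prompt gets you most of the way there. A minimal sketch, assuming the official OpenAI Python SDK and a placeholder model name:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# The constraint lives entirely in the system prompt — no actual training required.
HORSE_PROMPT = (
    "You are a horse. You may only respond with neighs, whinnies, snorts, "
    "and short descriptions of physical actions a horse can actually do, "
    "like *stomps hoof* or *flicks tail*. Never use human language otherwise."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model works here
    messages=[
        {"role": "system", "content": HORSE_PROMPT},
        {"role": "user", "content": "What do you think about the news today?"},
    ],
)

print(response.choices[0].message.content)  # ideally: "Neeeigh. *flicks tail*"
```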

I take back everything I said at the top. We are simply not ready for the dangers this technology poses to society.

The Mystery Of Missing Tumblr Traffic

A slight correction to my Monday post about distributed content. I wrote that I have never received any traffic to Garbage Day from Tumblr. Well, according to this callout post written about Garbage Day on Tumblr this week, that’s not true. I also got some nice messages from Tumblr users saying they discovered me through the site.

Users started doing some digging and it seems like after Tumblr was acquired by Automattic, Tumblr traffic switched from showing up as “tumblr.com” to showing up as “href.li” on most analytics dashboards. Substack doesn’t really provide a totally complete traffic dashboard, so whatever traffic I’m getting from “href.li” must be getting grouped together with something else.
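For anyone wondering what’s mechanically happening there: Tumblr appears to route outbound clicks through href.li, a bare-bones redirect service, so the destination site’s analytics see “href.li” as the referrer instead of “tumblr.com.” A toy sketch of what that wrapping looks like, assuming the format is just the target URL tacked on after a question mark:

```python
def wrap_outbound_link(target_url: str) -> str:
    """Wrap an external link the way Tumblr seems to, so the click bounces
    through href.li and the original tumblr.com referrer gets obscured."""
    return "https://href.li/?" + target_url

print(wrap_outbound_link("https://www.garbageday.email/"))
# -> https://href.li/?https://www.garbageday.email/
```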

I, personally, understand the desire to obscure tracking, but I also have to imagine there’s a better way to do that than by making your site’s still pretty massive user base effectively invisible on the internet!

But anyways, I have reversed course on my decision to give up on Tumblr as a place to promote Garbage Day. See you on the dashboard.

An Incredible Nine Minutes Of Tetris

This clip is from the 2018 Classic Tetris World Championship. Joseph Saelee was 16 at the time and his opponent was seven-time champion Jonas Neubauer. You can watch the whole match and the post-game interviews here.

This Is The Most Impressive Fursuit I’ve Ever Seen

A 19-year-old furry from China named Vauk has been building an extremely advanced wolf costume. It has motion-detecting eyes, a night vision mode, a quad-core processor inside of it, and an internal cooling system. Oh, it also blinks when he blinks.

Even cooler, Vauk is patenting and then open-sourcing most of the tech used in the suit.

A Good Post

Some Stray Links

***Any typos in this email are on purpose actually***
