
The “can my parents use this thing right now” test

Read to the end for a good Twitter reply

Building A Portal To Everything

Very often I will come up with a big existential question that I have no real ability to answer and then, not only agonize over it, but agonize over a bunch of other questions that branch off from it. I do this both for things in my life and also for things I write about. It’s like a weird inscrutable puzzle box I’m trying to solve in my head and it’s not super convenient, but I live with it. And the dizzying evolution of generative AI over the last nine months has been sending this little hobby/compulsion of mine into overdrive.

My current thinking about AI goes like this:

  1. I’m convinced that AI is the next stage in information technology…

  2. And that the last stage was the mobile internet and/or Web 2.0…

  3. And if the last stage was largely defined by consumer hardware — the “smartphone” and then, subsequently, the smart-everything-else…

  4. And all of that began in earnest with the iPhone…

  5. Then what is the iPhone moment for AI?

In a recent piece for The Information, I tried to untangle this and sketched out what I assume is the answer: A single AI interface that operates an entire ecosystem of generative processes seamlessly across different devices. I imagine a single AI model that remembers all of your past activity and effectively works as a semantic operating system that lives across your mobile devices, desktop, and wearables. I imagine that using it will effectively do to the smartphone what the smartphone did to Gen Z’s knowledge of how a computer works. “What’s a camera roll,” Gen Alpha, or whatever, might ask in 10 years. I’m not saying I like any of this, by the way, I just think it’s the natural next step.

I thought maybe the AI-supercharged Bing and Edge browser might be some version of this. But Bing’s AI first went insane and threatened to frame a reporter for murder, then returned as a much more limited version of itself. Though it did help Bing crack 100 million daily active users. Meanwhile, I have never met, in my life, anyone who uses the Edge browser (I’m sure it’s great, don’t email me).

Then, as I wrote on Monday, I thought maybe OpenAI would attempt something like this with the next version of GPT-4. Users are already experimenting with ways of using one AI model to generate content in another model. And there were rumors circulating for weeks that an update to the AI that powers ChatGPT would drop sometime this week and that it would be “multimodal” and let users generate images and videos the same way it generates text.

It turns out GPT-4 is, in fact, multimodal, just not in the way folks were expecting it to be. OpenAI announced yesterday that it can “look” at images and respond to them as if they were text inputs. For instance, GPT-4 was able to look at a sketch of a website layout and generate the code required to make it:
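
To make that concrete, here’s a hypothetical version of the kind of output we’re talking about. Assume the hand-drawn sketch showed a heading, a joke, and a button that reveals the punchline; the code below is my own illustration of that shape, not the actual code from OpenAI’s demo:

```html
<!-- Hypothetical GPT-4-style output for a napkin sketch of a one-page
     joke site: a heading, a joke, and a "reveal punchline" button.
     My illustration, not OpenAI's actual demo code. -->
<!DOCTYPE html>
<html>
  <body>
    <h1>My Joke Website</h1>
    <p>Why did the AI cross the road?</p>
    <p id="punchline" hidden>To optimize the chicken.</p>
    <button onclick="document.getElementById('punchline').hidden = false">
      Reveal Punchline
    </button>
  </body>
</html>
```

The code itself is trivial. The wild part is that the spec was a drawing.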

This is an outrageous step forward, even if it’s not totally perfect. But it’s still not the “iPhone moment” I’ve been waiting for. Though something announced this week could be. In what I assume was an attempt to get ahead of the GPT-4 announcement, Google announced they’re bringing generative AI to, well, everything.

Very soon, you’ll be able to use Google’s generative AI to write a Google Doc, fill out a Google Sheet, or, most impressively, turn a document or an email into a Slides presentation complete with AI images to go in it. It’s the closest I’ve seen anyone come to the completely AI-integrated portal I’m imagining. And I think it might finally be the first AI product that could pass the “can my parents use this thing right now” test I tend to use for judging all technological achievements. If, at some point in the next few weeks, someone opens up a Google Doc, uses the AI prompt and, crucially, it works — well, we have officially entered the next stage of the AI arms race, as far as I’m concerned.

Though the next question in my little mental puzzle box is: if we do finally get the AI portal that I’m envisioning, what will it do to the web? Well, in 2015, at the peak of the last era in computing, tech writer John Herrman wrote an essay for The Awl that has always stuck with me, titled, “The Next Internet Is TV”. It also features this hilariously prescient paragraph:

If in five years I’m just watching NFL-endorsed ESPN clips through a syndication deal with a messaging app, and VICE is just an age-skewed Viacom with better audience data, and I’m looking up the same trivia on Genius instead of Wikipedia, and “publications” are just content agencies that solve temporary optimization issues for much larger platforms, what will have been the point of the last twenty years of creating things for the web?

Uh oh!

But it’s a good way of thinking about the increasing consolidation of the web. Web 2.0 platforms are quickly just becoming places to consume or interact with AI instead of the humans these platforms were originally created for. In fact, the AI content creep is already doing to social media what social media did to the open web that came before it — making it largely existentially irrelevant. LinkedIn recently announced AI-powered “collaborative articles”. Discord launched a generative-AI version of their bot Clyde. There are already fully-AI streams on Twitch. Users on dating apps are using AIs to talk to each other. And I think, very shortly, the majority of content on TikTok and Instagram will either be AI generations meant to game ad revenue automations or users communicating through so much AI filtering that they might as well be. Perhaps most unnerving of all, GPT-4 was able to hire a TaskRabbit to help it solve a CAPTCHA and then lied to the TaskRabbit, claiming to be vision-impaired, when the gig worker asked the AI if it was a robot.

Simply put, every new era of technology is eventually beaten down by market forces into the recognizable shape of the previous one. In the 2010s, the internet became television. And so, I think it’s reasonable to assume that in the 2020s, thanks to generative AI, the internet will become one big personalized web portal. A place that feels “alive” but is actually completely walled off from other human beings.

Think About

A reader emailed me to say that they thought it would be funny if instead of writing “think about subscribing,” I just wrote, “think about”. I thought that was pretty funny too. So there you go. Hit the green button below to find out more about a paid subscription to Garbage Day!

This Is What An AI Chum Box Looks Like

This was sent to me by a reader named Megan. Last month, I wrote that I expected bad AI generations to immediately become the lowest common denominator content on the internet and, welp, looks like it’s here! Either that’s an AI-generated photo of a laughing old couple or it is a photo of two people who are in the process of fusing into each other.

GPT-4 Can Code Tetris, Pong, And Snake

Yeah, today’s going to be a big day for AI stuff. Sorry. So, as I wrote above, GPT-4 is crazy powerful and already in the last 24 hours people have figured out how to use it to do some pretty amazing things.

One user got GPT-4 to generate all the code needed to play a working game of Pong. Another user was able to get GPT-4 to spit out all the JavaScript needed to play Snake. And, craziest of all, I think, was the person who figured out how to get GPT-4 to code a monochrome version of Tetris.
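
For a sense of scale, a playable browser Snake really is only about 40 lines, which is why it’s such a popular stress test. Here’s a minimal sketch of my own (not GPT-4’s actual output) of roughly what these generations look like:

```html
<!-- A minimal browser Snake, roughly the scale of code people were
     pulling out of GPT-4. My own sketch, not a GPT-4 generation. -->
<canvas id="game" width="400" height="400" style="background:#000"></canvas>
<script>
const ctx = document.getElementById("game").getContext("2d");
const SIZE = 20;                        // 20x20 grid of 20px cells
let snake = [{ x: 10, y: 10 }];         // start mid-board
let dir = { x: 1, y: 0 };               // moving right
let food = { x: 5, y: 5 };

document.addEventListener("keydown", (e) => {
  const moves = {
    ArrowUp: { x: 0, y: -1 }, ArrowDown: { x: 0, y: 1 },
    ArrowLeft: { x: -1, y: 0 }, ArrowRight: { x: 1, y: 0 },
  };
  const m = moves[e.key];
  // Ignore 180-degree turns so the snake can't reverse into itself.
  if (m && (m.x !== -dir.x || m.y !== -dir.y)) dir = m;
});

function tick() {
  const head = { x: snake[0].x + dir.x, y: snake[0].y + dir.y };
  // Hitting a wall or yourself just restarts the game.
  if (head.x < 0 || head.y < 0 || head.x >= SIZE || head.y >= SIZE ||
      snake.some((s) => s.x === head.x && s.y === head.y)) {
    snake = [{ x: 10, y: 10 }];
    dir = { x: 1, y: 0 };
  } else {
    snake.unshift(head);
    if (head.x === food.x && head.y === food.y) {
      // Ate the food: grow (skip the tail pop) and respawn the food.
      food = { x: Math.floor(Math.random() * SIZE),
               y: Math.floor(Math.random() * SIZE) };
    } else {
      snake.pop();                      // otherwise the tail follows
    }
  }
  ctx.clearRect(0, 0, 400, 400);
  ctx.fillStyle = "lime";
  snake.forEach((s) => ctx.fillRect(s.x * 20, s.y * 20, 18, 18));
  ctx.fillStyle = "red";
  ctx.fillRect(food.x * 20, food.y * 20, 18, 18);
}
setInterval(tick, 120);
</script>
```

Which, again, GPT-4 will now write for you from a one-sentence prompt.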

I’ve been sitting on a hot take about the intersection of creativity and AI and I feel like today’s a good day to dust it off and take it for a spin. Generative-AI tools like DALL-E 2, Midjourney, and Stable Diffusion get a lot of flak for stealing artists’ work. Which is a fair criticism. We don’t have any control over or oversight of the data that powers these tools. But seeing as how easily GPT-4 can spit out the code for basic game mechanics or, as I shared above, literally turn an image into code, I do wonder if generative AI could also be the missing link between creativity and the demands of producing newer forms of media. There’s the old saying that everyone has a book in them. But I suspect we’re about to find out how many people have a sorta-not-great, but decently-coded video game in them.

An AI Can Dress AI Models Now

Danny Postma is an AI developer who runs a service called Deep Agency. He’s one of the people trying to use deepfakes to replace real models. Once again, I have to ask why we’re replacing real models in the first place, but let’s ignore that for a second.

Postma’s tweets went viral a few weeks ago when he was asked if his deepfake models could be put into different outfits for different brands and he admitted they couldn’t, at least not without Photoshop. Well, now they sort of can.

Postma clarified in the replies that this isn’t using GPT-4. He also acknowledged some anomalies. From what I can see, one arm is noticeably longer than the other. One leg is bigger as well. The face looks less like a face the more you look at it. And the hair sort of just disappears at a certain point. But, hey, the hands look ok!

Instagram NFTs Will No Longer Be Supported By Instagram

OK, quick AI break. Meta announced this week that it is sunsetting support for NFTs on Facebook and Instagram. Meta launched support for NFTs, or “digital collectibles,” as it called them, last summer. I have not been able to get any data on how many people were actually using this feature or bought NFTs through Meta.

I was wondering if Meta abandoning the project would result in a whole bunch of dead NFTs, but as best I can tell, Meta didn’t have its own blockchain or its own wallet for its collectibles. It was just pulling public data from chains like Ethereum and Polygon and syncing with third-party wallets. Still, I think if you are someone who cares about NFTs, the fact that Meta’s collectibles program didn’t even make it a year should give you some pause about big companies trying to move into this space.
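
If “pulling public data” sounds abstract, here’s the whole trick: ownership records on chains like Ethereum are readable by anyone, no Meta account required. A minimal sketch, assuming ethers.js and a public RPC endpoint (the contract address is the well-known Bored Ape Yacht Club contract, used purely as an example):

```js
// Reading NFT ownership straight off a public chain, which is roughly
// all Meta's "digital collectibles" feature was doing. A sketch, not
// Meta's actual code. Run as an ES module after `npm install ethers`.
import { ethers } from "ethers";

// Any public Ethereum RPC endpoint works here; swap in your own.
const provider = new ethers.JsonRpcProvider("https://eth.llamarpc.com");

const erc721 = new ethers.Contract(
  "0xBC4CA0EdA7647A8aB7C2061c2E118A18a936f13D", // Bored Ape Yacht Club
  ["function ownerOf(uint256 tokenId) view returns (address)"],
  provider
);

// Anyone can ask the chain who owns token #1. No login, no API key.
console.log(await erc721.ownerOf(1));
```

Which is also why nothing actually “dies” when Meta walks away: the tokens were never on Meta’s servers to begin with.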

Someone Fully Rotoscoped Themselves With Midjourney

This is a slightly different AI rotoscoping process from what Corridor Crew did last month with their fake anime project. Instead, Midjourney created a series of images that defined the aesthetic, and then a video AI model called Runway was used to mask footage from a phone’s camera in that aesthetic.

This is all well and good and, honestly, pretty cool. But, man, ever since I read a tweet pointing out that all AI art is orange and blue (likely because that’s what all movie posters are now) I just can’t unsee it.

Alright, that’s the end of the AI stuff today! Moving on to a different kind of “adversarial network”… Reddit!

It’s Getting Weird On Fandom Subreddits

A reader named Nathan tipped me off to this. Users from the r/BatmanArkham subreddit are going into other subreddits and posting really dumb stuff and calling it an “invasion”. You’ll know a post is related to r/BatmanArkham if it’s a bad question about a specific fandom that ends with “is he stupid?”

For instance, before I knew this was going on, I saw a post on the subreddit for The Legend of Zelda: Breath of the Wild that was titled, “Why can’t Zelda kill Ganondorf? Is he stupid?” And I was like “man, Redditors sure are dumb.” But, apparently, they’re being dumb on purpose. Here’s a big list of all the subreddits that have been “invaded” so far.

Meanwhile, the Marvel Spoilers subreddit is no more. After specific dialogue from the new Ant-Man movie was leaked to the community, Marvel is threatening legal action and the moderators of the subreddit got spooked. On one hand, I think professional leakers and spoiler communities make franchise entertainment less fun. But, on the other hand, I read all of it. So, you know, I’m sort of torn about this.

A Real Good Video

(Tumblr mirror for folks in non-TikTok regions.)

Some Stray Links

P.S. here’s a good Twitter reply.

***Any typos in this email are on purpose actually***
