
Generative AI is just productivity software

Read to the end for a good Shrek fit

What Will AI Actually “Replace”?

One of the questions we still don’t have a good handle on when it comes to generative AI is probably the most important: What will this technology actually “replace”?

It’s uncomfortable to consider. All new technology replaces something — or at least makes something less common and more specialized. The best example I can point to is probably the production of vinyl records. It went from the industry standard, to being completely irrelevant, and then circled back around again as a specialized product that former scene kids from the Midwest with covered-up Harry Potter tattoos can give each other as gifts. The knee-jerk assumption has been that generative AI will replace, at the very least, artists and writers. And we assume that because this current round of AI can make images and text.

Also, many lightly-bearded men who pay for Twitter and identify as things like “entrepreneur,” “founder,” “product manager,” and “design director” have been very loudly cheering on the AI-assisted death of genuine creative industries. I assume it’s because the people who work in those industries are the ones most likely to tell them their ideas are dumb or ugly. Weirdly, though, the thing I’ve seen generative AI do best is coding, but no one seems as excited about replacing that industry as they are about replacing Instagram influencers.

But discourse aside, we now have a slightly clearer look at where we might be heading thanks to a study out this week conducted by OpenAI and the University of Pennsylvania. You may have seen it flying around with salacious headlines saying that up to 20% of the US workforce could be replaced with an AI, but the actual findings are a bit more nuanced.

The top five professions that appear to be most at risk, according to the study, are: translators, survey researchers, creative writers, animal scientists, and people who work in public relations. The professions least at risk are the ones that require some kind of human physicality: people who work in agriculture, athletes, mechanics, and cooks and restaurant staff.

Though, here’s the thing: most of the headlines about this study I’ve seen (and, also, a bunch of the paragraphs I just wrote) are not accurately describing what the study was measuring! The professions listed above were ranked by their “exposure” to AI, which the study defines as “a measure of whether access to a GPT or GPT-powered system would reduce the time required for a human to perform a specific [detailed work activity] or complete a task by at least 50 percent.”
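To make that definition a little more concrete, here’s a toy sketch of how a score like that could work. This is my illustration, not the study’s actual methodology (which uses human and model annotations of real occupational task lists); the task names and fields below are made up.

```python
# Toy sketch of an "exposure" score, not the paper's actual method:
# for each (hypothetical) task, an annotation says whether a GPT-powered
# tool would cut the time to do it by at least 50%. An occupation's
# exposure is just the share of its tasks where that's true.

def exposure(tasks: list[dict]) -> float:
    """tasks: [{"name": str, "time_saved_frac": float}, ...] (made-up fields)."""
    if not tasks:
        return 0.0
    exposed = sum(1 for t in tasks if t["time_saved_frac"] >= 0.5)
    return exposed / len(tasks)

# Example with an invented "translator" task list
translator_tasks = [
    {"name": "draft translation", "time_saved_frac": 0.7},
    {"name": "terminology research", "time_saved_frac": 0.6},
    {"name": "client calls", "time_saved_frac": 0.1},
]
print(exposure(translator_tasks))  # ~0.67 -> "highly exposed," not "replaced"
```

The point being: a high score means a lot of the job’s tasks get faster, not that the job disappears.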

So while I love the hustle of a good scary headline, I’d say it would have been more accurate to write, “AI will cut down people’s workloads in many industries by as much as 50%”. Which is a bit less hysterical. And also a bad headline.

But I’m also beginning to think that generative AI shouldn’t even be considered a creative tool at all, and I think we may have really gotten off on the wrong foot by talking about it like one. I’m willing to go so far as to say that generative AI, as it currently exists, is just productivity software. Case in point: Adobe’s Firefly.

Adobe launched the beta for their generative-AI tool Firefly this week. I haven’t used it yet, but based on demos I’ve seen, it’s basically the text prompt interface that every other GPT tool has, but with the addition of dropdown menus that allow you to customize things like style, aesthetic, aspect ratio, and lighting. And then these AI assets can be put into other Adobe apps.

And, as The Verge reported, Adobe claims Firefly was trained on a data set built from content that’s in the public domain, licensed specifically to train an AI, or pulled from Adobe’s stock image library. A term I learned recently for this kind of data set is “vegan data,” and this might not be totally vegan, but I’d say it, at least, sounds vegetarian. lol sorry.

Adobe is smart to advertise this because, while they have a lot of people who use their suite of apps for fun, they’re also still the standard for most professional-level graphic design work. And seeing as how humans can’t copyright purely AI-generated content, the only way forward for this technology is to build it into existing creative tools so that humans can shape it into something useful.

Which I’m continually surprised isn’t being mentioned more by the “It’s so over” crowd. If you want to legally own anything you made with a GPT program, you have to alter it enough to qualify for copyright. Another thing AI evangelists are ignoring is an important conclusion from the AI and labor study: “The influence spans all wage levels, with higher-income jobs potentially facing greater exposure.”

Which really changes how this whole space looks, as far as I’m concerned. You know, have as much fun as you want with generative AI, but any real professional use will still require human input and verified data sets, and, even then, it will largely be impacting managers rather than individual creators or laborers. So gloat all you want about never needing to hire a copywriter or an illustrator again, but I’m actually a lot more confident that their specific skill sets will be harder, and possibly impossible, to replace than those of the glorified Slack mods who are paid six-figure salaries to manage them.

Want More Garbage? Subscribe To Garbage Day!

You get a weekend issue and Discord access, and everyone talks a lot about how those two things are better than the actual newsletter. Which makes me slightly nervous, but I’m choosing not to dwell on it too much. Hit the button below to find out more!

LOL Wait, Shoot, OK, Um, Sorry, Maybe I’m Wrong

Literally right before I was about to publish this, I read that the Writers Guild of America is currently weighing a proposal to allow the use of generative AI in writers’ rooms “as long as it does not affect writers’ credits or residuals,” Variety reports.

But, according to the Variety report, this would allow a studio to, say, ask a writer to touch up an AI-written script. I actually think this is a really bad and very scary idea. So I’m at a bit of an impasse.

Man, this stuff is moving so fast.

Dentist Near Me

Giving Up On Measuring Discourse

Discourse on American Twitter over the last week has been especially nasty, it seems. I mean, it’s gotten so bad that people are fighting about tipping again. But it seems to be a site-wide issue. A writer at Screen Crush got dogpiled for asking why there aren’t more movies for children at the theaters right now. A comic artist had to issue a public apology for launching an extremely harmless comic. Writer Ashley Reese is currently engulfed in like three different Twitter storms. And even I, your friendly neighborhood garbage man, stepped in it this week. I waded into the “is technology bad for children” discourse that’s been raging for weeks. Someone identifying as an introvert just sent me a mean Bitmoji. So I want to say, I apologize. Everyone on Twitter is right, 11-year-olds should be allowed to have as much screen time as they want. It seems to have worked out well for all of the adults yelling at me about it on Twitter. There are two larger things here, though, that I want to touch on.

I think the accuracy of online conversation as a reflective mirror for public sentiment is something that Americans really have to reckon with. I remember during one of my first trips to Japan, I was working on a story and was curious what the Twitter reaction was to a particular topic, and another journalist I was working with was dumbfounded as to why I would care. In Japan, the users of the country’s equivalent to 4chan, 2channel, were early Twitter adopters. So, by the time I was there in 2016, the site was so overrun with netto-uyoku, or right-wing ultranationalist trolls, that it was considered unusable for gauging any sort of real public sentiment. Me asking what Twitter thought was like someone asking me, “what does Reddit think?” Who cares what those weird losers are saying. I would get similar reactions to similar questions while working in countries like Mexico and India, where Twitter trending topics were mass-manipulated by each country’s own right-wing nationalists and content marketing companies (that were oftentimes working for the right-wing nationalists). American Twitter is slightly different — I think our misery extends outward across the full political spectrum in weird and confusing ways — but I do wonder at what point we give up.

Which brings me to my other point. Twitter, because it was smaller than other social networks, because it was chronological, because it was text-based, because it was home to verified news sources, because it was public and searchable, used to feel like a digital space that could be mapped. In fact, I know researchers who do just that. But we don’t tend to think that way about, say, Instagram. There are macro-level conversations on Instagram, but they don’t really make sense because the algorithmic sorting gets in the way. And so, I’m just curious, once again, when we give up trying to discern any greater meaning from the static on Twitter.

The $100 AI Experiment Made A Discord

On Monday, I wrote about how brand designer Jackson Greathouse Fall asked GPT-4 to build as profitable a business as possible with $100. The AI recommended building a site that reviews eco-friendly products.

The project is still going and Greathouse Fall tweeted an update that it’s now made about $130 in revenue. Not bad! More interestingly, it has grown into a Discord that is launching AI-based projects such as an adventure game, a site that reviews travel products, and a pet supply store.

I don’t find these ideas super exciting, but I’ve also found that I just don’t think the stuff AI spits out is very interesting, in general. But the idea of a Discord banding together to take the advice of an AI is wild and sort of makes my brain buzz like a hot skillet. I’m both deeply uncomfortable with this idea and super fascinated by it.

TikTok’s CEO Makes A Relatable Hoodie Video

I’m not sure why I have never considered this, but I’ve never seen the CEO of TikTok before. His name is Shou Zi Chew, he’s from Singapore, and he only joined ByteDance in 2021. His Wikipedia page, excluding the references, is only 83 words long. Which is wild.

He’s currently in Washington and pushed a video addressing the platform across the whole app this morning. The top comment is, “You know something went wrong when the boss has to show up.”

As I do with everything, I’ve been waffling between taking a possible TikTok ban in the US too seriously and not seriously enough, but the fact that Chew has strapped on a hoodie and made an Adam Mosseri-style hostage vlog makes me think this is actually getting pretty real.

How To Hack Truth Social

A group of friends realized that Truth Social’s trending topics are determined by an absolutely microscopic number of users, sometimes as few as 10. Which means they’re actually very easy to manipulate. So they decided to try and get #Desantis2024 to trend on the app.

If you click into the thread you can see a breakdown of how they did it, but the TL;DR is they used TikTok. Because even a really small amount of engagement on TikTok is enough to completely overwhelm Truth Social. They said they were able to get #Desantis2024 to the number one spot on the app before it went “down” for a while and no one was able to post.
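The thread has the actual specifics, but here’s a toy model of why this works at all. Truth Social’s real trending logic isn’t public, so this is purely an illustration: if “trending” is basically a count of recent posts per hashtag and only a handful of accounts are posting organically, a small brigade recruited somewhere much bigger wins instantly. The tag names and numbers below are invented.

```python
import random
from collections import Counter

# Toy model (not Truth Social's actual algorithm, which isn't public):
# a tiny organic user base vs. a small coordinated push.
random.seed(0)

organic_tags = ["#maga", "#news", "#monday"]
organic_posts = [random.choice(organic_tags) for _ in range(30)]  # ~10 users, a few posts each

coordinated_posts = ["#Desantis2024"] * 40  # a few dozen posts from a brigade recruited elsewhere

trending = Counter(organic_posts + coordinated_posts).most_common(3)
print(trending)  # "#Desantis2024" tops the list with only 40 posts
```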

An Incredible Vibe

(Twitter mirror for folks in non-TikTok regions.)

Some Stray Links

P.S. here’s a good Shrek fit.

***Any typos in this email are on purpose actually***
