Hollywood's cheap AI fix won't work

Read to the end for some good kebab shop content

AI Is Probably Too Boring For Movies

The new horror film Late Night with the Devil hit theaters late last month amid a lot of really good buzz. It has a 96% on Rotten Tomatoes and has broken box office records for its distributor, IFC Films. It seemed poised to become the indie movie success story of the first half of 2024. But that buzz curdled quite a bit once word started to circulate that generative AI had been used in the film.

And Letterboxd filled up with negative reviews. “Listen. There’s AI all over this in the cutaways and ‘we’ll be right back’ network messages,” reads one top review for the film. “Complacency in accepting AI now is complacency for AI in the future — a very bleak future,” reads another.

But the way AI was used in Late Night with the Devil is interesting because it highlights an often-undiscussed element of the tech: So far, it’s been deployed in pretty boring ways.

When you hear that a horror film used AI, you might assume that it was used in place of practical effects or traditional CGI, but that’s not actually true. Late Night with the Devil is full of your classic puppetry, blood, slime, and levitating objects. Instead, the AI images only appear on screen briefly.

Late Night with the Devil is set on Halloween night in 1977 and follows a Johnny Carson-like late night host who ends up summoning a demon on live television. The movie is constructed like an episode of a talk show, and it cuts to ad breaks at different points throughout. It’s during these cutaways that the AI-generated images are used as interstitial cards. They’re basically just retro-looking pictures of skeletons with the fictional talk show’s name written on them. 

(Even more curious, they appear to be a relatively new addition to the movie. According to viewers who saw its premiere at South by Southwest 2023, the AI-generated images weren’t in that earlier version of the film. Directors Cameron and Colin Cairnes told Variety, “We experimented with AI for three still images which we edited further and ultimately appear as very brief interstitials in the film.”) 

(One of the AI title cards)

Last year’s WGA and SAG strikes were inspired by a bevy of challenges facing America’s entertainment industry, with AI cited as a chief concern. And both WGA and SAG walked away with basic protections against AI encroaching on their livelihoods. But that hasn’t stopped the AI creep.

One TV editor I spoke with, who asked not to be named, told me that at their last job they were asked by an executive to use Adobe’s new AI editing tool, which would have essentially replaced their job. They say that it was pitched to them as a way to automate a part of their work so they had time to “do more other stuff.”

“The excitement from the senior executive who brought it up was troubling,” they said.

And this is the main way viewers are encountering generative AI in movies and TV shows right now. Cheap filler used to speed up production or quickly fill in gaps.

All the way back in 2021, Marvel was making AI-generated replicas of extras to use in the background of scenes of WandaVision. And last year, Marvel’s Secret Invasion used a custom Stable Diffusion model trained on original artwork to generate an eerie opening credit sequence (which, if you ask me, was one of the only interesting things about the whole show). Earlier this year, HBO Max’s True Detective: Night Country was caught using AI-generated posters in one scene. And last month, R. Lance Hill, the screenwriter of the original Road House, sued Amazon Studios, accusing them of copyright infringement and claiming that the studio used AI audio to do automated dialogue replacement during the SAG strike last summer. None of which are particularly exciting examples of this supposedly revolutionary technology. In fact, it all kind of sucks.

But it’s not just Hollywood executives dreaming of cheaper productions driving the AI entertainment boom. AI companies have quickly realized that Hollywood is the perfect place to shop around their newest models. Bloomberg reported recently that OpenAI is now actively pitching studios on their new AI video generator, Sora. But nothing I’ve seen so far has convinced me that Sora has any real utility beyond, once again, low-grade crap.

Which is certainly true in the case of Late Night with the Devil. The AI interstitials aren’t so awful that they break the whole movie — they’re only on screen for a few seconds at a time — but they do stick out like a sore thumb in a movie that is so clearly handmade, full of practical effects. Which raises the ultimate question when it comes to using AI: Is the quick fix worth it? And it’s likely many movie studios are about to discover it’s not.

This essay was co-published with the fine folks at Fast Company. You can read a longer version of it over on their site by clicking here.

The following is a paid ad. If you’re interested in advertising, email me at [email protected] and let’s talk. Thanks!

Sad about the enshittification of the web? Wish there was something you could do to help bring about a better future?

For the past four years, we’ve been working within the transformative fandom community to understand the barriers in the way of people building online spaces for themselves. This experience led us to create a series of both small and large projects that chart a path towards a future where online subcultures take more control of their own experience on the web.

Last year, we took a bold step towards this future by fundraising for The Fujoshi Guide to Web Development. Today, we’re asking the internet to do what it does best: give us some of its seed (money) to help us formalize all we’ve been working towards under a real company with a real mission!

Support FujoCoded on BackerKit today and help us cover our start-up costs, pay our lawyers, and launch our store so we can provide accessible web development education, open source software, and supportive communities to often marginalized online subcultures.

Think About Supporting Garbage Day!

It’s $5 a month or $45 a year and you get Discord access and the coveted weekend issue. Hit the button below to find out more.

Good Post

The New York Times Is Roblox

(SEC.gov/ValueAct)

The above chart was created by ValueAct Capital, a hedge fund that invests in The New York Times, and it ended up in a recent SEC report. Semafor described it as “remarkable,” and it’s been making waves on X and Threads all weekend.

If you can’t tell what you’re looking at, it basically shows that The Times’s news app has remained more or less flat since January 2020, spiking around the election and the January 6th insurrection in 2021. Meanwhile, The Times’s games app now appears to be more popular than, well, everything else the “newspaper” does.

I am very excited about this chart because, as I wrote last month, The New York Times is a tech platform now, but, specifically, they’re a gaming platform. Which I always suspected would be the Next Big Thing in digital media, and I’ve been desperate for an example of how it would work.

You can track the stages of internet development by the evolution of the web portal. The biggest publishers tend to operate downstream of those portals and mimic them. In the read-only age of AOL and Yahoo, you had static news sites. In the search and social age of Facebook and Google, you had aggregation and viral media. And the new age coming into focus right now is almost certainly led by interactive entertainment platforms. Entire ecosystems built around videos and games. And, like it or not, the next Pop Crave will be inside of Fortnite or, possibly, own their own version of it.

The Kate Middleton Truthers Think The Bench Video Was AI

(X.com/@FourTomatoz)

I don’t want to dwell on this too much because I have written more about the royal family in the last few weeks than I’ve ever wanted to. But #WhereIsKate truthers are completely convinced that the video the Palace released of her revealing she has cancer was AI-generated. They’re particularly obsessed with her teeth. First, as I alluded to in today’s top essay, AI-generated video and even lip syncing aren’t good enough to do what these people think they can do. And, also, even if they were, regardless of all the historic wealth hoarded by the British monarchy, they’re so bad at technology they can’t even use Photoshop correctly, as we saw recently.

The “evidence” conspiracy theorists are pointing to as proof that the video is totally fake is a disclaimer from Getty Images on the video which reads, “This Handout clip was provided by a third-party organization and may not adhere to Getty Images’ editorial policy.” Sounds sketchy? Sure, but what that is saying is that Getty did not film it. It could have been edited, lit cinematically, scripted, etc. But that doesn’t mean the video was generated by AI. Because, once again, AI can’t do that (yet).

Threads Is Still Not Very Good, Maybe Getting Worse

(Threads/@owswills)

I came across this fascinating Threads post last week. A user named @witty_wang22 took a photo from a 2013 Huff Post article and passed it off as something that happened to them. Other users quickly noticed this photo was a repost and the replies, of which there are thousands, are split down the middle between people saying some version of “why are you lying” and other people saying, “wow, humanity is amazing,” or whatever.

It’s a great example of what Max Read recently called “the gas-leak social network,” but it’s also a terrific picture of what Meta has been building behind the walls of its closed-off ecosystem for the last 20 years. This is the core Meta experience across all their apps. It’s also a perfect display of where Threads is at as a social network right now. A Carnival cruise ship’s worth of typical Meta posters — context-allergic boomers that sometimes literally pray to chain letters and memes — and an upper layer of bewildered journalists who spent the last 15 years on Twitter ignoring these people.

This Is The Powerful Chinese Cyber Weapon Congress Is So Afraid Of

The new big trend on TikTok is lip-syncing to the song “This Life’s Fate,” by Chinese singer Chuan Zi. As Chinese internet culture blog What’s On Weibo explains, Americans have been calling it the “Samsung” song because it features the Mandarin word, “cāngsāng” (沧桑), which westerners think sounds like the phone brand. It means “ups and downs.”

This isn’t the first time a piece of Chinese internet culture has turned into a meme in the States recently. Back in December, a clip of a CGI beaver performing a dramatic monologue in Cantonese went viral on English-speaking TikTok after getting uploaded to Chinese video-sharing platform Bilibili. The original audio from the beaver video was from a Hong Kong action movie called A Better Tomorrow.

China’s soft power is getting too effective! We’ve got to pull the plug and save America’s youth from being exposed to dangerous pop songs and cartoon beavers.

I Think This Is An April Fools Thing, But It Should Be Real

***Any typos in this email are on purpose actually***
