Dumb and shameful until it's not
Read to the end for how I learned that Sierra Mist has been discontinued
Alright, Yea, I’m Calling It: Web 3.0 Is Here And It’s A.I.
The term “Web 2.0” was first coined in 1999 by technologist Darcy DiNucci, who wrote, “the first glimmerings of Web 2.0 are beginning to appear, and we are just starting to see how that embryo might develop. The Web will be understood not as screenfuls of text and graphics but as a transport mechanism, the ether through which interactivity happens.”
DiNucci’s definition of “Web 2.0” was a “world of myriad, ubiquitous internet-connected tools,” where you could interact with others via TV sets, cell phones, and video game consoles just as easily as you could a computer. The question that has plagued both crypto bros and Mark Zuckerberg as of late is what exactly the next era of the internet would look like after the age of Web 2.0, which increasingly feels like it’s coming to some kind of end. And over the last three years, two schools of thought have developed on what is coming next. There is “Web3,” the blockchain-backed cyberlibertarian free-for-all, where internet access is predicated on using crypto wallets to buy and sell digital assets. And there is the “metaverse,” a virtual shopping mall/social network hybrid you experience in virtual reality. I suppose you could also include here Elon Musk’s much fuzzier idea of “X, the everything app that takes us to space.”
The problem is none of those ideas feel particularly interesting, nor do they really have anything in common with the original distinction between Web 1.0 and Web 2.0. The concept of Web 2.0 was in direct contrast to Web 1.0, which was the static and fixed experience of using the web solely inside a browser window at a computer terminal. And in DiNucci’s original definition, “the defining trait of Web 2.0 will be that it won’t have any characteristics at all.” That was the point. Web 2.0 would allow the internet to be experienced everywhere and anywhere with so much ease that we never even think about it anymore. So, to follow that logic, Web 3.0 must become even more ubiquitous and fluid. It won’t require buying Ethereum to play a video game, nor will it require putting on a bulky headset to use Facebook in VR.
So what does an even more ubiquitous and interactive web look like? Well, this morning, Microsoft announced a multibillion-dollar investment in OpenAI, so I suspect we’ll find out soon enough. “These innovations have captured imaginations and introduced large-scale A.I. as a powerful, general-purpose technology platform that we believe will create transformative impact at the magnitude of the personal computer, the internet, mobile devices and the cloud,” the company said in a statement. If you’re looking for a start for Web 3.0, I’d say this is as official a beginning as you’re going to get.
In my Discord server a few weeks ago, I was chatting with a couple readers who were a little frustrated that I wasn’t condemning generative-A.I. technology more thoroughly. That I was keeping a somewhat open, or at least ambivalent, mind about it. And I told them that the minute Stable Diffusion was released last August, I thought we had crossed a threshold from which there was no return.
If you don’t know the difference between something like DALL-E 2 and Stable Diffusion, DALL-E 2 is owned and operated by OpenAI, the A.I. company that Microsoft just reinvested in. DALL-E 2 only works with an internet connection, is somewhat moderated, and has all kinds of guardrails. For instance, it can’t generate a swastika, nor can it generate Mickey Mouse. Stable Diffusion, on the other hand, is open source and can run on anything that has 8GB of VRAM and a decent processor. If you wanted to, you could train an instance of Stable Diffusion solely on your own original artwork or photography and turn it into a (probably pretty bad) virtual clone of yourself. It’s not hard to imagine companies like Disney beginning to get curious about creating custom A.I. datasets that could, say, generate parts or all of an episode of a show like The Simpsons.
This stuff is already moving very fast, but the fact it’s becoming open source just as quickly, to me, means we’re not going to wake up one day and find out it all just disappeared. There are some interesting cultural similarities between A.I. and the crypto boom from a few years ago — namely, that they both have found novel uses for Discord and are loved by some of the most embarrassing people to walk the earth — but I also think crypto has blinded people to what an actual change in computing can feel like when you’re in it. I don’t think we’re going to see another large-scale attempt at taking cryptocurrency mainstream for quite a while. The use case for A.I., however, is much more clear. Which is why I find it as exciting as I do dangerous.
The way I see it, where we are in the evolution of A.I. is basically that awkward middle ground between the launch of Facebook in 2004 and the first iPhone in 2007. There are a lot of people excited about this stuff and a similar number of people who are terrified of what it could do to us. And a whole bunch more who have never used any of these tools and have no idea where to begin, but once it’s easy enough, won’t even think twice. Because it’ll be fun or good for business or, probably more likely, because it’ll eventually come by default in our devices and popular services.
But the real reason I’m comfortable saying that the “first glimmerings,” to use DiNucci’s original phrasing, of Web 3.0 are already here is because I truly don’t have a good sense of what’s coming next. The existential questions about how A.I. will interact with our lives are too huge to consider.
Now, you might say, “Ryan, A.I. is completely overhyped. Generative-A.I. art tools can’t even figure out how many fingers people have. There are all kinds of legal and ethical problems around this technology. It’s exploitative. It’s wildly insecure. We don’t even fully understand what it will do to our brains yet. And there is a new dumb company every day hawking worthless A.I. fixes to problems no one actually needs to solve.” Well, fun fact: That was true of Web 2.0 too! In 2013, I used an app called Foursquare to check in to a dive bar in Greenpoint every weekend via the geotargeting on my phone so I could get free tater tots. Everything on the internet is dumb and shameful until it’s not.
Do You Subscribe to Garbage Day? Why Not?
It’s only $5 a month, you get a bunch of bonus stuff, and it goes to a good cause (the continued existence of this newsletter). Hit the green button below to find out more!
Journalists Aren’t Having A Good Time On Mastodon
The move to Mastodon has not been particularly smooth for journalists. Many of them went to the platform expecting a Twitter-like experience, assuming they could hash out upsetting news stories live on the timeline. And some quickly reported having admins put their content behind trigger warnings or even just delete it outright without warning. This has also been an issue for activists who decamped to Mastodon and found it pretty inhospitable to specific kinds of political content.
Another problem is that certain journalists, especially those on the center-right, assumed their “just asking questions” approach to journalism, particularly around transgender identity, would be tolerated. In November, there was a whole bunch of drama on the Mastodon instance journa.host, which started after one journalist shared a link to a New York Times story about the supposed danger of puberty-blocking drugs. And now it’s happening again, with the mastodon.art instance announcing that it is defederating from newsie.social because newsie.social refuses to kick users off its server that share transphobic content.
My favorite take on this is from privacy engineer Aram Zucker-Scharff, who wrote, “This is why there shouldn't be journalists-only instances. Journalism is neither a real community or an identity. It's a job and if you insist on making it your identity the only central point most seem to land on is view from nowhere bullshit that breaks your critical thinking.” I would go further and say that, at least in America, “journalist” is an utterly meaningless distinction because, thanks to the First Amendment, we are all capable of doing journalism if we feel like it. But I digress.
The main issue here is that, at least when it comes to online platforms, journalists both believe that they are an identity and act like they’re a community, but also refuse to see themselves as an online subculture with its own specific and subjective point of view. Which they absolutely are. For instance, journalists need just as much custom moderation as furries, if not more. The difference is furries know they’re furries.
And the funny thing about Mastodon’s federation — the ability to link and unlink servers from each other — is that if you refuse to both acknowledge your own community’s quirks and don’t think about how those quirks intersect with the needs of other communities, you’re going to get cut off from the network real fast.
Excited To See How An A.I. Does In Court
Oh, shit, hmm. Maybe I was too quick to say we were in the midst of an A.I.-led revolution in computing. Ah jeez.
This tweet is from Joshua Browder, the CEO of DoNotPay, the “world's first robot lawyer.” Further down the thread, Browder says the case the A.I. lawyer is going to be used in is a speeding case. Also, they subpoenaed the officer who conducted the stop (why?) and the subpoena was written by the A.I.
To be clear: I think this is a dumb idea and — I’m not a lawyer — I assume it’s illegal in some way. And if it’s not explicitly, it almost assuredly will be soon. That said, I do think this is probably the kind of work best suited for this generation of generative-A.I. — talking to other algorithmic processes. I mean, what is the legal system if not an algorithm that determines guilt?
Vlogging Your Layoff On TikTok
There are a handful of TikTok users who do aesthetically pleasing “day in the life” videos about their jobs at big tech companies. And this has been causing some issues. I, personally, can’t imagine why big American tech firms would be uncomfortable with their employees filming videos inside their offices and uploading them to a platform owned by a Chinese A.I. company.
A TikTok user named @nicolesdailyvlog, however, has continued to post “day in the life” videos as she was laid off from Google. The video is embedded above and, yeah, it’s as weird and awkward as you might imagine.
A lot of these videos, as I wrote a few weeks ago, are derogatorily referred to as “adult daycare” videos. In fact, one of the top comments under a new video from @nicolesdailyvlog reads, “Based on her last video she basically ‘worked’ in an adult day care in an overpriced airplane hanger and had juice boxes. I’m shocked she got laid off.”
Now, first, people love to dunk on these videos because they’re primarily made by young women who are just simply talking about their day and the internet is a giant machine that turns harassment against women into advertising revenue. But, also, most of these videos can’t actually show what these people do because of security reasons and, also, it’s sort of boring visually, so most adult daycare videos are just people eating at the company canteen and making various smoothies and lattes. But, on top of that, most of the content like this, which you can find under the #techgirlie hashtag, is a fascinating example of the limits of relatability and aspirational content. The end result is that the users doing this are sort of neither. Their lives look joyless and sad, but also deeply privileged and out of touch. And now many of those same users are continuing to vlog through their layoffs, which is causing even more aesthetic dissonance.
M&Ms Has Left The Culture War
I had to check this a few times to make sure it was real and that I understood what it was actually saying. I guess M&Ms has decided that the only thing that can heal our fractured nation is Maya Rudolph, which, honestly, I’m not sure I disagree with. She’s very charismatic.
But I do think it’s funny that they say in the press release that they didn’t think “anyone would even notice” when they are the only reason anyone knows anything about the lore behind their “spokescandies.” Though, that said, I hadn’t seen the purple one before, but apparently she was launched in September to celebrate “inclusivity” and is the first female peanut M&M.
The Fish That Plays Pokémon Accidentally Leaked Its Owner’s Credit Card Info On Stream
Mutekimaru is a Japanese YouTuber who created a system that lets his fish play Pokémon. Mutekimaru mapped out corners of the fish tank to buttons in the game and a camera tracks the movement of the betta fish in the tank. They beat Pokémon Sapphire in 3000 hours.
Well, last week, the fish were playing the newest Pokémon release, Pokémon Scarlet. The problem is the game is very buggy, and during the playthrough it glitched out and crashed — but the fish continued controlling the Nintendo Switch’s buttons. The fish opened up the Nintendo Store, bought a game, and, for a brief moment, flashed their owner’s credit card number on the screen. Whoops! You can watch a subtitled version of what happened, including translated tweets, in the embed above. It’s very, very funny.
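If you’re curious how a setup like this works, here’s a minimal sketch of the tank-to-controller mapping idea: divide the camera frame into a grid of regions, give each region a button, and whatever region the fish is in decides which button gets pressed. The grid layout, frame size, and function names here are all my own illustrative assumptions, not Mutekimaru’s actual rig.

```python
# Hypothetical sketch of a Mutekimaru-style fish controller.
# A camera watches the tank; the frame is split into a grid of regions,
# and each region is mapped to a Switch button. The grid and button
# labels below are made up for illustration.

# 3x2 grid of tank regions -> button labels (row-major order)
BUTTON_GRID = [
    ["A", "B", "X"],
    ["Y", "UP", "DOWN"],
]

FRAME_WIDTH = 640   # camera frame width in pixels (assumed)
FRAME_HEIGHT = 480  # camera frame height in pixels (assumed)

def button_for_position(x, y):
    """Return the button mapped to the grid region containing the
    fish's (x, y) pixel position in the camera frame."""
    cols = len(BUTTON_GRID[0])
    rows = len(BUTTON_GRID)
    # Clamp to the last row/column so edge pixels don't overflow the grid
    col = min(int(x / FRAME_WIDTH * cols), cols - 1)
    row = min(int(y / FRAME_HEIGHT * rows), rows - 1)
    return BUTTON_GRID[row][col]
```

A real version would also need fish tracking (e.g. color-based detection of the betta against the tank background) and something to actually send the button press to the console, which is exactly where a crashed game stops mattering to the fish.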
A Good Point About The Last Of Us
Some Stray Links
People are using AI for therapy, whether the tech is ready for it or not (I wrote this!)
P.S. here’s how I learned that Sierra Mist has been discontinued.
***Any typos in this email are on purpose actually***
A friend of mine thinks the M&M PR is just the ramp up for a Super Bowl commercial and I’m inclined to agree.