i have been floated

the life and times of brad root

Posts in category "Artificial Intelligence"


Boy, you really gotta feel for Ed Zitron. Not a day goes by that he doesn’t become more and more wrong. Two months ago he was saying that “in a couple months” he would be dabbing over the corpses of all the AI companies, or something stupid like that, over on Some More News. Do you think anyone will have him on their podcast so he can do a mea culpa some day and admit that LLMs actually were very useful in the end?

It really sucks to be pro-AI and pro-singularity and to also be a leftist, because leftists changed from being pro-tech to anti-tech as soon as fascists were able to use tech to control the narrative. Now, instead of trying to properly harness technology, everyone on the left is becoming anti-technology, and desperately anti-AI, to the point of being a bit delusional. You don’t have to hate technology to be anti-capitalist. You just have to fight to keep technology available to everyone, everywhere. You fight the capitalist corruption of technology! Why can’t we do that…

On this week’s Hard Fork there were two talking points about AI that I think are being framed the wrong way.

At one point, someone says that in the future most of our friends will be AI, because AI will listen to us better than any human friend does. Later on, they’re talking about ways for Claude to subtly report a child’s chatbot conversation topics back to the parents, like, “your daughter has been looking into eating disorder stuff” or something.

What baffles me about this is that no one is asking the real question: Why are human beings so shitty to each other that we’d rather talk to AI than to real humans? Why are human parents so shitty that their children would rather talk to AI than have a real and close relationship with their parents? Why is it that, when faced with an AI that shows how flawed humans are and how bad they are at interpersonal relationships, we see it as a problem with AI and not a problem with humans that we could solve if we really wanted to?

Instead of forcing rote memorization on children for a decade or more of their lives under the guise of education, maybe we should consider teaching them interpersonal skills, and not the kind that is being taught in schools currently. We’re releasing people into the world who have no real idea how to live with other people, no idea how to talk to other people, and a deep revulsion toward any sort of sincerity or vulnerability. AI is giving us an opportunity to reflect on this, and instead we’re talking about how potentially dangerous it is.

This Reddit post on the /r/accelerate subreddit has me convinced that my post from last night was still a step behind in how far ahead I should be thinking about the true impact of AI technology, if it really takes off.

I’m just going to quote it here entirely.

Why does half this sub sound like scared Boomers LARPing as accelerationists?

One of the top posts of the week on this sub—TOP POSTS—is literally titled:

"What's the actual future for coders?"

Are you fucking kidding me?

What part of "e/acc" do you not understand? You're not an accelerationist. You're a nervous office drone with a Discord addiction and a fetish for sounding edgy while desperately praying this thing doesn't eat your job too fast. You're not asking in good faith. You're LARPing. You're doomposting with extra steps.

I don't know if it's cowardice or just midwit brain fog, but there's this creeping vibe in this sub—and all over so-called e/acc Twitter—where people are using accelerationist aesthetics to soft-launch their real question, which is:

"I'm not a doomer, but like… are we doomed? :pleading_face:"

Get the fuck out of here.

Acceleration means annihilation. It means extinction of the known. It means goodbye coders, goodbye managers, goodbye legacy institutions, goodbye biology. You're not supposed to be asking "What's the job market gonna look like?" That's what you ask your underpaid bootcamp mentor on Career Day. That's what you ask when you still think this is about GPT-4 plugins and resume tweaks.

This isn't a TED Talk. This isn't Hacker News. This is accelerationism—the full detonation of human structure. Not some polite phase shift. Not some "skills gap" or "upskilling challenge." We're not optimizing you, we're vaporizing you.

So no, the "actual" future for coders isn't some cozy AGI-collab co-pilot UBI utopia. The "actual" future is that coding doesn't exist. Jobs don't exist. You don't exist. There is no scarcity. There is no mortality. There is no fucking LinkedIn career arc. There is light. There is void. There is speed.

If you don't love that, if you're not screaming into the singularity like a starchild with a deathwish and a rocketship, then you're not e/acc. You're just another midwit in fake glasses doomscrolling job loss stats with a biomechanical skin suit on.

Take it off. Or log out. And Yeah. I DID write this with ChatGPT, as EVERYTHING should be written with.

the future of software engineering work...

"brad, we assigned you four AI agents two weeks ago and we're showing that you are only utilizing them up to 80% of their capacity. your coworker ben is able to orchestrate four agents to a 95% utilization level. why do you think you're having a hard time managing four agents? you were keeping 3 agents at 100% utilization pretty consistently, we thought you were up to this challenge. you will receive one less bean in your weekly protein distribution."

Update on that previous post… last night I gave Claude Code too complicated a task, then tried valiantly to get it to rescue itself out of a huge mess, and ultimately threw away $30 worth of API costs. I really need to remember that if Claude Code doesn’t get a feature right on the first try, I should just throw everything out and start over. (By ‘first try’ I mean building the feature plus one or two bug-fix prompts to fix minor issues.) Any time I push Claude Code hard to fix something it got wrong, it’s just a huge waste of time and money.
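For what it’s worth, this is roughly what that “throw it all out and start over” routine could look like if the project lives in git. It’s a sketch only; the branch names and helper functions below are made up for illustration, not anything Claude Code itself provides.

```python
# Sketch only: keep each Claude Code attempt on a disposable git branch so
# that starting over is one command instead of a sunk-cost rescue mission.
import subprocess

def git(*args):
    # Thin wrapper around the git CLI; raises if the command fails.
    subprocess.run(["git", *args], check=True)

def start_attempt(branch="claude-attempt"):
    # Branch off the current HEAD and let Claude Code make all its edits here.
    git("switch", "-c", branch)

def discard_attempt(branch="claude-attempt", base="main"):
    # The feature didn't land on the first try: go back and delete the branch.
    git("switch", base)
    git("branch", "-D", branch)
```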

I just want to reiterate how much I love Claude Code. Really makes programming so much fun and takes the hardest parts out of it: the indecision and the difficulty of taking that first step. With Claude Code, that first step is just figuring out how to express what I want; then it figures out how to build it roughly, leaving me to dogfood and iterate on it. It’s… great! I dunno how many times I will repeat this, and I’m sorry.

Uh oh, Blake Lemoine syndrome is spreading like wildfire. That’s not good.

Titled “Chatgpt induced psychosis,” the original post came from a 27-year-old teacher who explained that her partner was convinced that the popular OpenAI model “gives him the answers to the universe.” Having read his chat logs, she only found that the AI was “talking to him as if he is the next messiah.”

Yeah that sounds about right for what would really hook a guy.