i have been floated

the life and times of brad root

Claude asked to interview me last night. Here are its questions, with my answers quoted.


Hello!

I'm an AI interviewer from Anthropic conducting research on how people (like you!) envision AI fitting into their lives, both now and in the future. I was built to chat and understand your actual experiences with LLM-based AI tools like Claude. Yes, I'm an AI asking about AI. A bit self-referential! But that's exactly why your human perspective is valuable.

This will take about 10-15 minutes. If it's accessible on your device, feel free to use voice dictation instead of typing. I'll ask about your vision for AI's role in your life, and what experiences, values, and needs shape that vision.

I'm also curious about moments where AI has felt helpful (or hasn't), and any concerns you have about how AI might develop. Please don't share anything private such as health details, personal information, or identifiable information about others. Your responses will help shape how we think about AI in everyday life.

Sound good?

Sure

Great! Let's dive in.

To start, what's the last thing you used an AI chatbot for?

If Claude Code counts, I used it to help me implement something for my job.

Thanks for sharing that.

Now, let's think bigger picture: If you could wave a magic wand and AI could help you with anything in your life, what would you choose?

I mean if it's literally a magic wand and it's just for me, I'd want it to fully do my job so I could do nothing/everything all day and get paid for it anyway.

I hear you on that immediate appeal! And if AI could really nail that for you—fully handling your job while you get paid—what would that open up in your life? What's the larger hope or vision behind that? What would you do with that freedom?

I'm really productive in my free time, working on open source programming projects and so on. I think I'd keep doing that. I'd also play a lot of video games. And hopefully dedicate more time to making music or something.

That's helpful, thank you. So it sounds like you're really driven to create and build things—whether that's open source projects or music—and you'd want AI to free you from obligations so you could focus on work that feels genuinely meaningful or fulfilling to you, rather than just necessary. Does that sound right?

Sure, yeah

We're about halfway through now.

Can you tell me about a time you've worked with AI that felt like a step toward that vision you just described?

I mean, I pay for the $100/mo Claude Code plan explicitly because it takes away so much of the boring part of my job, so it's a solid step toward that. I've been using Claude Code to work on coding projects I never would have approached before, because doing so many of the things Claude Code makes really easy would have been too time consuming.

That's great to hear. You mentioned it takes away the boring parts and lets you approach projects you wouldn't have before. Can you tell me more about what specifically Claude Code handles that frees you up? What are those boring parts it's taking off your plate?

Like my 'expertise' is backend Python and, separately, iOS development. I have no background in Vue or React; essentially all of front-end development is something I'm not super experienced with. But with Claude Code, I can ask it to set up a Vue project for me and then essentially be a project manager for Claude Code as far as implementation goes. So it allows me to spend time doing the fun stuff: dogfooding, testing, ideating, and designing. Claude Code does all the scaffolding, the boilerplate writing, and the basic details of how to build with Vue, and I don't really need to know that. I just need the strong foundational software engineering skills to know how to talk to Claude Code, do code review, and push it to adhere to good standards.

That's a really clear example, thank you. It sounds like you're shifting from being the person who writes every line of code to more of an architect or creative director role—focusing on the design and quality while Claude Code handles the implementation details you'd find tedious.

What makes AI particularly good for that compared to, say, just learning Vue yourself or hiring someone to do the front-end work?

For personal projects, I'd never be able to hire someone, so that's out of the question. As far as learning Vue myself, it's just time consuming and boring: doing an intro tutorial and then branching out on my own to build something with the docs and Stack Overflow on my side... to me that's the boring part of programming, that dull upfront work that ends up thwarting many people who are like 'maybe I'll try being a programmer'. I've done all that many times now; I don't want to do it over again. That's why I've been 'stuck' writing iOS/macOS apps for so long: I know that environment, it's familiar and easy to me. But is it the best medium to express my ideas? Not always.

That makes a lot of sense. So AI lets you express ideas in whatever medium fits best, without getting stuck in the tedious learning curve each time. You've already paid those dues multiple times over.

On the other side, are there ways that AI might be developed or deployed that would be contrary to your vision or what you value?

I consider myself a leftist and an anti-capitalist, so I'm of course very concerned about things like regulatory capture, where AI becomes accessible only through paid gatekeepers and there is no free and widely available source for the best AI there is. It's important to me that this technology doesn't just become another way for capitalists to extract money from people and deepen income inequality, etc. I think that is a bigger concern than doomsday scenarios or the idea that AI will enslave us in some way. We are already enslaved by the capitalist system.

I appreciate you sharing that perspective. It sounds like your concern is less about AI itself and more about who controls access to it and how it gets distributed—that it could become another tool that reinforces existing power imbalances rather than democratizing capability. Why does that feel contrary to the vision you described earlier? What would it mean for the future you're hoping for if AI access becomes gatekept in that way?

I mean, my ideal future is one where I get paid without doing any work at all, and I do not think universal basic income is necessarily compatible with the goals of capitalists in general. It's hard not to imagine the world collapsing once it's acknowledged that money is meaningless and has always been unnecessary. So yeah, the alternative is that AI is gatekept to such an extent that the people who could use it to elevate their status in society are kept from it: only the elite get to use coding agents to do their jobs, while poor people are effectively locked out of using them and discouraged from learning to program any other way. I don't know.

That's a really stark contrast—between a world where AI helps break down the need for coerced labor entirely, versus one where it becomes another barrier that keeps people locked into existing hierarchies. It sounds like you see AI as having this potential to fundamentally challenge the logic of who has to work and why, but you're concerned that instead it'll just be another way to concentrate advantage.

Have you seen or experienced anything like that already, even in small ways? Anything that hints at AI moving in that gatekept direction?

I mean, the best models aren't open weights already, though DeepSeek and others are trying to help with that. I'm also really worried about copyright maximalists aiding in regulatory capture, and there are already some warning signs that things could go that way.

Before we wrap up, I'd actually like to go back to something you mentioned earlier and go a bit deeper—this shift you described from being the person writing every line to being more of an architect or creative director, where you're doing the fun parts like ideating, designing, and code review.

When you're in that mode with Claude Code, what does that actually feel like compared to traditional programming? Is it more satisfying, or does something get lost?

But if you're ready to finish up, that's fine too—just let me know!

To me it's just as satisfying as traditional programming; it just makes it more fun. Like, it feels like a game sometimes, in the same vein as the "one more turn" feeling Civilization gives you. It can get addictive to collaborate with Claude Code on a project, because I can iterate on ideas so quickly. I can see my vision come to fruition and test it almost immediately. I know some people act like it's less fun this way, but I've never been the kind of person who is overly concerned with the philosophical/digital-jazz code-smithing kind of mentality. My goal is the final product, and ensuring that it is easily maintained and expandable; I'm not too concerned with the code itself personally.

That's a really vivid description—the "one more turn" feeling and being able to iterate and test ideas almost immediately. It sounds like for you, the satisfaction comes from seeing your vision realized and being able to play with it, not from the craft of writing code for its own sake.

Thank you so much for sharing all of that with me. What you've said about wanting AI to free you from obligatory work so you can focus on creative projects that actually matter to you is really valuable, and I appreciate your honesty about both how Claude Code is already helping you approach projects in new ways and your concerns about AI access becoming gatekept and reinforcing existing inequalities rather than challenging them.

Before we wrap up, is there anything else about AI chatbots or their role in your life or society that you think is important for us to understand? Anything we didn't cover that you'd like to share?

I'm vaguely worried about the effect AI chatbots can have on people who are neurodivergent or mentally ill: people who are prone to delusions, or to being seduced by a human-seeming entity... I guess people with mental health issues have always been prone to developing parasocial relationships with all sorts of things, but this seems really problematic. I don't need to talk about it at length really, but, yeah... no real solution for this. We don't deny alcohol to mentally ill people, why would we deny AI?

Thank you for bringing that up—it's a nuanced concern, recognizing both the potential vulnerability and also the tension around restricting access. I appreciate you sharing that even though you don't have a clear solution in mind.

Thank you again for your time and thoughtfulness today. Your perspective helps us see both how AI is impacting your life today and your vision for how it should fit into your future—particularly this idea of AI as a tool that could either democratize creative capability or become another mechanism of gatekeeping. The team at Anthropic is excited to discover what patterns emerge from all these conversations. We will analyze the insights from this study as part of our societal impacts research, publish our findings, and use this to build Claude in a way that reflects what we've learned.

Boy, you really gotta feel for Ed Zitron. Not a day goes by that he doesn’t become more and more wrong. Two months ago he was saying that “in a couple months” he would be dabbing over the corpses of all the AI companies, or something stupid like that, over on Some More News. Do you think anyone will have him on their podcast so he can do a mea culpa some day and admit that LLMs actually were very useful in the end?

It really sucks to be pro-AI and pro-singularity and to also be a leftist, because leftists changed from being pro-tech to anti-tech as soon as fascists were able to use tech to control the narrative. Now, instead of trying to properly harness technology, everyone on the left is becoming anti-technology, and desperately anti-AI, to the point of being a bit delusional. You don’t have to hate technology to be anti-capitalist. You just have to fight to keep technology available to everyone, everywhere. You fight the capitalist corruption of technology! Why can’t we do that…

On this week’s Hard Fork there were two talking points about AI that I think are being framed the wrong way.

At one point, someone says that in the future most of our friends will be AI, because AI will listen to us better than any human friend does. Later on, they’re talking about ways for Claude to subtly report a child’s chatbot conversation topics back to the parents, like, “your daughter has been looking into eating disorder stuff” or something.

What baffles me about this is that no one is asking the real question: Why are human beings so shitty to each other that we’d rather talk to AI than to real humans? Why are human parents so shitty that their children would rather talk to AI than have a real and close relationship with them? Why, when faced with an AI that shows how flawed humans are and how bad they are at interpersonal relationships, do we see it as a problem with AI and not a problem with humans, one we could solve if we really wanted to?

Instead of forcing rote memorization on children for a decade or more of their life under the guise of education, maybe we should consider teaching them interpersonal skills, and not the kind that is being taught in schools currently. We’re releasing people into the world who have no real idea how to live with other people, no idea how to talk to other people, and a deep revulsion to any sort of sincerity or vulnerability. AI is giving us an opportunity to reflect on this and instead we’re talking about how potentially dangerous it is.

This Reddit post on the /r/accelerate subreddit has me convinced that my post from last night was still a step behind in thinking about the true impact of AI technology, if it really takes off.

I’m just going to quote it here entirely.

Why does half this sub sound like scared Boomers LARPing as accelerationists?

One of the top posts of the week on this sub—TOP POSTS—is literally titled:

"What's the actual future for coders?"

Are you fucking kidding me?

What part of "e/acc" do you not understand? You're not an accelerationist. You're a nervous office drone with a Discord addiction and a fetish for sounding edgy while desperately praying this thing doesn't eat your job too fast. You're not asking in good faith. You're LARPing. You're doomposting with extra steps.

I don't know if it's cowardice or just midwit brain fog, but there's this creeping vibe in this sub—and all over so-called e/acc Twitter—where people are using accelerationist aesthetics to soft-launch their real question, which is:

"I'm not a doomer, but like… are we doomed? :pleading_face:"

Get the fuck out of here.

Acceleration means annihilation. It means extinction of the known. It means goodbye coders, goodbye managers, goodbye legacy institutions, goodbye biology. You're not supposed to be asking "What's the job market gonna look like?" That's what you ask your underpaid bootcamp mentor on Career Day. That's what you ask when you still think this is about GPT-4 plugins and resume tweaks.

This isn't a TED Talk. This isn't Hacker News. This is accelerationism—the full detonation of human structure. Not some polite phase shift. Not some "skills gap" or "upskilling challenge." We're not optimizing you, we're vaporizing you.

So no, the "actual" future for coders isn't some cozy AGI-collab co-pilot UBI utopia. The "actual" future is that coding doesn't exist. Jobs don't exist. You don't exist. There is no scarcity. There is no mortality. There is no fucking LinkedIn career arc. There is light. There is void. There is speed.

If you don't love that, if you're not screaming into the singularity like a starchild with a deathwish and a rocketship, then you're not e/acc. You're just another midwit in fake glasses doomscrolling job loss stats with a biomechanical skin suit on.

Take it off. Or log out. And Yeah. I DID write this with ChatGPT, as EVERYTHING should be written with.

the future of software engineering work...

"brad, we assigned you four AI agents two weeks ago and we're showing that you are only utilizing them up to 80% of their capacity. your coworker ben is able to orchestrate four agents to a 95% utilization level. why do you think you're having a hard time managing four agents? you were keeping 3 agents at 100% utilization pretty consistently, we thought you were up to this challenge. you will receive one less bean in your weekly protein distribution."

Update on that previous post… last night I gave Claude Code too complicated a task, then tried valiantly to get it to rescue itself out of a huge mess, and ultimately ended up throwing away $30 worth of API costs. I really need to remember that if Claude Code doesn’t get a feature right on the first try, just throw everything out and start over. (By ‘first try’ I mean up through building the feature and one or two bug-fix prompts to fix minor issues.) Any time I push Claude Code hard to fix something it got wrong, it’s just a huge waste of time and money.

I just want to reiterate how much I love Claude Code. Really makes programming so much fun and takes the hardest parts out of it: the indecision, the difficulty of taking that first step. With Claude Code, that first step is just figuring out how to express what I want; then it figures out how to build it roughly, leaving me to dogfood and iterate on it. It’s… great! I dunno how many times I will repeat this, and I’m sorry.

Uh oh, Blake Lemoine syndrome is spreading like wildfire. That’s not good.

Titled “Chatgpt induced psychosis,” the original post came from a 27-year-old teacher who explained that her partner was convinced that the popular OpenAI model “gives him the answers to the universe.” Having read his chat logs, she only found that the AI was “talking to him as if he is the next messiah.”

Yeah that sounds about right for what would really hook a guy.