
There's a common idea amongst a lot of my friends, peers, and colleagues that "AI" is and can only ever be bad. Given a lot of the bullshit that's been peddled by former NFT bros around AI, this is a reasonable conclusion to come to. However, I fear that we are losing track of some nuance in the conversation and throwing the baby out with the bathwater.

I've been using LLMs a decent amount over the past few weeks for a number of applications, and honestly, it's been extraordinarily useful in very practical ways. Although I still have some ethical concerns that I'll discuss, I feel pretty optimistic about it as a technology, and I'd like to share my experiences in the hopes that I can hit a non-centrist middle ground between "it's always evil" and "it's always good."

One area where it's been helpful is dictation. For years I have struggled with carpal tunnel and RSI. I've spent way too much money on special keyboards that were supposed to alleviate it; they've helped, but they haven't eliminated it. For the past week or so I have been using LLM-powered dictation software, and it is changing my life for the better. There's other, non-AI dictation software out there, but from what I can tell, none of it works out of the box as well as the LLM-powered stuff. At first it made me feel like an early-aughts douche talking to my computer, but it is actually a godsend for my wrists, and I hate to admit that.

I've also been occasionally asking an LLM questions in the more "traditional" chat-bot way. Whereas usually before I would've scoured a bunch of blogs on X software vs Y software, gone through Reddit reviews and product websites to try to figure out what would work best for me, now I can turn that into a single query and just have the LLM do that work for me. Is it going to give me "the right answer"? Probably not. Does that ultimately matter? No! Because the answer I would've arrived at doing it myself also might not have been the right answer because I'm susceptible to marketing bullshit as much as the next person, and I might just go for the company with the better marketing. When it's all put together in a little summary, it's a lot easier to see through the marketing bullshit. And because I get sources, I can double-check that whatever I end up choosing seems like a good fit, and not end up going down rabbit holes like I am so prone to do.

Most significantly, I have been using LLMs a lot for coding. When GitHub Copilot came out with its LLM-based autocomplete, I gave it a whirl and was thoroughly unimpressed. It was, at best, marginally better than the fuzzy-finder autocomplete we'd had for years. Then it got a little bit better. I started using it occasionally, but most of the time I ignored it. Then I tried out Windsurf, an LLM-powered VSCode fork. I'd been wanting to rebuild my website for several years, but getting started had been an impossibility for me with ADHD and all the stuff going on in my life. Windsurf got the skeleton of a website rewrite ready in 30 minutes (with my help, obviously, but it did the hardest part for me: getting started). I was very impressed; that's what convinced me to really give these tools a shot.

Then, I tried Claude Code, and for better and worse, I haven't looked back. That was a few weeks ago, and I've been using it in basically all of my coding projects. It's allowed me to take a step back from the code and view it from a higher level of abstraction, which is often something that I struggle to do when I'm deep in the details. Going back and forth between "deep in the details" and "higher-level overview" is probably the most difficult part of software development. Using Claude Code, which will just write large swaths of code for you, takes away a lot of that difficulty.

It's not perfect; it makes a lot of mistakes, there are a lot of instances where I have to hold its hand, and I read all of its output and test it manually. But I was doing that anyway, so it still saves time, it saves cognitive load, and it lets me do a bit more multitasking.


The biggest surprise I've noticed while using LLMs is how my own thinking has evolved. I first realized this as I read through the "thought" processes of "thinking" language models. I'd notice similar patterns and trains of thought in my own head, and it gave me an interesting new perspective on my own thoughts. As a meditator, I find this kind of distance and perspective quite valuable. Maybe this is me falling for the groupthink, but I think, just maybe, my thinking is improving.

I've already talked about how I'm able to think about code at a higher level of abstraction, but dictation also lets me think about things in a slightly different way. I can look around the room while I'm talking, just speaking my thoughts as they come to me. This isn't always better, but it is a different way of thinking. This sort of multi-modal thinking, writing by hand and speaking, feels like it lets me see my thoughts from more angles. Then, at the end of the day, I find I'm able to communicate them more clearly.


There's a lot of places that I still refuse to use these large models. Any visual art or written prose where it's generating an end product that I had no hand in (beyond some vague idea of what I wanted it to look like), I find to be morally dubious at best. Art is an inherently human endeavor; it's about humans communicating with each other and laboring to create something beautiful for other humans to appreciate. Making a machine do all the labor completely defeats the purpose.

Now, there is art out there that didn't require a lot of labor, but it's still inherently a human putting in the work. There's just no point in art if it's not made by a person. I watch Studio Ghibli movies, for example, because I know that every one of those frames was created and decided upon by a person, with every line and every color carefully considered.

Tools are a different story though. Sometimes I'm just using a tool to make something that I can use for myself. Not every gadget in my house has to be artisanally hand-assembled by a human being. I would hope that they've gotten the process down really well for making them, but sometimes the things that I use every day were mostly made by a machine, and that's okay. We can think about LLMs the same way a lot of the time.

There's also the question of what these models were trained on. With art and writing, they're often trained on copyrighted works owned by people who did not consent to their work being used in this way. You could make arguments about fair use, but I'm not even going to touch that with a 15-foot pole. We should, flat out, not be training our models on material that folks didn't consent to having used this way.

And then there's energy usage. This is an open question that I'm pretty darn concerned about, and it's why I'm trying to somewhat limit my usage. People have made claims about how much energy LLMs use, but I don't think we actually know. There are plenty of things we already rely on that consume a lot of energy: computers, game consoles, cars, planes. That doesn't mean we SHOULD spend energy on LLMs, but I think we should weigh these things against each other when we talk about this stuff. (We also need LLM companies to be a lot more transparent about energy usage, and we need to keep demanding that.)

At the risk of hitting a cliche... when I hear people oppose the awful stuff that LLMs enable, much of the time, what they're ultimately lamenting is capitalism. If these technologies were all more open and transparent and for the benefit of everyone and not being sold to us by companies making billions of dollars, I think the conversations would probably be a lot different. I have truly come to believe that, when used with care and understanding, these technologies can benefit us. We just need to make sure we stay nuanced, remember who the real enemy is, and keep fighting the good fights.