Personal AI usage disclosure
Since people have strong opinions about this, and in an effort to keep things honest, let me write it down once here so I can link to it from the relevant README files and be done with it:
✨ I use LLMs when working on my projects. ✨
If you have a problem with that, consider yourself informed. Keep reading if you would like to know more details and some of my opinions on the matter.
My history of using LLMs
I was relatively slow to start using LLMs: they didn't have good integrations with CLI text editors (in particular Helix) or command-line usage, so they did not fit my workflow (I live in tmux, running fish shell and Helix). Sometime in very late 2024 I discovered Aider, and that's when I started actually using LLMs for programming. A few months later I switched to Claude Code, and I have been using it as my primary LLM ever since. LLMs improved significantly as coding assistants during 2025, and somewhere in the second half of that year I became genuinely satisfied with their robustness and the quality of what they produced.
How I use them
Just to get it out of the way: I do not vibe-code. By vibe-coding I mean largely unsupervised usage, without paying much attention to the code the AI produces. I believe vibe-coding can maybe work for very simple CRUD web apps and other simple software, but at least at the moment LLMs are nowhere near producing truly high-quality code without supervision. It definitely seems possible to build complex LLM-based agent loops and systems that cross-check, verify, and so on, to push how far vibe-coding can be employed, but as things stand right now I find that approach complex, wasteful, and not very interesting.
The way I use LLMs is largely to treat them as a very sophisticated auto-completion and refactoring engine. I give them small, well-scoped tasks, and I like to point them at existing code with similar patterns and design, to ensure they produce exactly what I have in mind. Even with such methodical usage it's quite common for the LLM to produce substandard code, overcomplicate things, or even break adjacent functionality. It has been getting significantly better over time, but I fully recognize that coding agents are a bit spooky in how unpredictable they can be. That's why I review the code the LLM produces and spend a significant amount of time post-processing, refactoring, and cleaning it up.
All in all, despite all their drawbacks and shortcomings, LLMs are still the most significant productivity improvement I've ever experienced as a developer. I build better, more ambitious things faster, and the quality is significantly improved. For example, my projects now have way more tests, because it's faster to create and maintain them, and, frankly, because the flakiness of LLMs requires it. My code has more and better comments, examples, and auxiliary material like that. Just the other day, I asked the LLM to scan some security-sensitive code for all the attack vectors it could think of and to write a unit test for each of them, and it indeed found an important case that I had missed. I screwed up; I had written that code by hand. AI saved my bacon this time, repaying me for the dozen times it did something really dumb that I had to tell it to correct.
Political concerns
I know: the electricity and water usage, the ingestion of GPL-licensed code, the jobs that will be replaced, the social repercussions, the economic inequality. Yeah, yeah, I hear ya.
I just don't think burying your head in the sand and refusing to use LLMs, or even any code that was modified by an LLM, is going to help anyone. The march of technological progress is all but inevitable due to competitive pressures.
Every technology brings serious downsides and costs alongside the benefits it delivers. Such is the nature of civilizational progress. I guess the best we can do is use political will and wisdom to increase the benefits and mitigate the costs. And we will likely keep failing at that for a long time. But we did, at least, eventually stop adding lead to car fuel.
Of all the ways I can imagine AI-like technology having developed, we got quite lucky. The LLM companies have little moat and have been subject to heavy market competition from the get-go, and even determined individuals can run their own LLMs locally. That's much better than a hypothetical scenario where a single big-corp discovery grants one company a patent on exclusive, secret AI tech, leading to an insurmountable market advantage and a universal monopoly over the whole economy.
Software collaboration dynamic concerns
So anyone can be a coder now and produce terrible, buggy code at the pace of 40 junior developers. And they are all very happy to open a PR to your project.
And now that everyone is LLMing, how do you keep up, ensure that quality doesn't suffer, and make sure people don't waste each other's time with slop?
There are no easy answers here. We will manage, and we will see.
I think the dynamics of SWE collaboration will change drastically. I predict teams will get smaller, with more projects developed by individuals leveraging their amplified productivity. But only time will tell.
And LLM-based solutions are desperately needed to help with vetting, reviewing, testing, and ensuring code quality. There's definitely a lot of potential here, and a lot of work to be done.