Code Faster, Understand Less
There's still no such thing as a free lunch.
There are a lot of things I could say about AI and how it has changed, or will change, the world and the way people work. But I want to focus on how it has changed, and will change, writing code and maintaining software systems and applications, which is what I get paid to do all day at my job.
I believe that AI tools can positively change the way that software engineers work, but I don't believe that there's about to be some amazing explosion of productivity that turns everyone into a 10x engineer. In fact, I believe that there is a potential for people using AI to do more harm than good when it comes to writing clean, reusable, understandable, maintainable code.
Code ownership
In a team of software engineers (whether that's two or twenty), there's generally going to be some notion of "code ownership". If Alice writes a function to implement a core application feature, Bob will probably come to Alice, perhaps months later, asking about decisions she made while writing it or for other useful context. Just think about it: when you run `git blame` on a complex piece of functionality in your team's codebase, you'll probably go straight to whoever's name comes up with your questions and concerns.
Obviously, people move teams, leave jobs, or forget why they wrote something (or even that they wrote it at all - can you check the `git blame` on that file for me?). But for the most part, ownership of code is something that a lot of developers intuitively understand.
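The lookup itself is quick, for what it's worth - the file path and function name in this sketch are just placeholders:

```sh
# Who last touched each line of this file?
git blame src/payments.ts

# Limit the blame to the lines you actually care about
git blame -L 40,80 src/payments.ts

# Or trace the history of a single function by name
git log -L :processRefund:src/payments.ts
```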
The problem with LLMs and AI tools writing any amount of code is that you did not come up with it. My argument here is that you're probably significantly less likely to understand something that someone told you versus something that you told someone else. I feel like that's hardly even something I need to argue.
"Okay," you might say, "but once the AI writes my code, I take a look at it and make sure I understand it". I have two responses to this claim:
- If you do really understand it, then was the cognitive load of reading code that someone else (the AI) wrote actually less than you writing it yourself? Sometimes, I think the answer to this question could be yes. But I think that most of the time, the answer is probably no.
- If you don't understand it, then we have a problem. Now there is new code being committed to your team's repository that potentially no one understands. Hopefully you can use AI to debug the garbage you just committed when someone else finds a new defect in a few weeks!
> "vibe coding, where 2 engineers can now create the tech debt of at least 50 engineers" - March 20, 2025
One way or another, I think that using AI in most cases where I'd need to write code in my job or in personal projects will lead to a lower level of understanding of how the project works.
However, I also concede that in the "real world", it's not possible to always optimize for a deep understanding of how stuff works. Ultimately, I think you should try to cultivate a deep understanding of the projects you work on, but I also understand that there are cognitive tradeoffs you have to make all the time as a software engineer.
Our team needs to adopt a new tool and I'm onboarding our projects onto it. Should I spend 30 minutes reading documentation so I have a deep understanding of how this tool works, or should I breeze through the instructions as quickly as possible so I can focus on other things?
I'm a big fan of Cal Newport, and his book Deep Work argues that the ability to focus on cognitively demanding tasks is one of the most important skills you can cultivate as a "knowledge worker". However, I recognize (as does he) that you can't do 8 hours of deep work per day at an office job (even if there were no meetings). So software engineering, and office work generally, is all about deciding which tasks you need to focus on deeply and which tasks you can spend a little less energy on. Maybe you decide to write code for an hour after standup and then do less cognitively demanding tasks later in the afternoon.
My point here is that there is no way that AI is writing your code, saving you the cognitive load of creating software yourself, while also providing the understanding in your brain of how the code works. There is no free lunch.
However, I recognize that there are plenty of software engineers who need to meet deadlines and now face expectations from their employers that they use AI tools to "be more efficient". So I understand that sometimes there is pressure to commit code you're not confident about - which was the case even before AI code generation tools launched. The entire trend of vibe coding clearly exists for people who care more about shipping a product as fast as possible than about building something maintainable, which I understand is reasonable in specific circumstances.
I think that the ultimate fork in the road here is how deeply developers care about understanding the things they build. Obviously, you can't always have the lowest-level understanding of how everything in your code works, or you won't be able to actually build things. There's a balance to be struck. But I'm afraid that AI-generated code encourages developers to "turn their brain off" a little too much.
Code autocomplete - get out of my brain!
The worst part of AI coding tools is the IDE-embedded ones that start screaming at you as soon as you type `export const` at the top of a new file. How could you possibly know what I want? Line autocomplete genuinely feels like pair programming with the most annoying person on earth. As soon as you type a new letter, they tell you exactly what you need to type next - only it's worse than just hearing it. You see it right there in your IDE, so now you can't even remember what the hell you were going to write in the first place.
The good parts
By now, you probably think that I hate AI. But I don't. I think it can be a really useful tool for learning. Here are the ways that I use AI for coding in my job and personal projects.
Asking for help manually
I mentioned above that I hate the autocomplete features. But manually prompting the AI for very small snippets as refreshers on certain concepts (like `sed` or `grep`), and then having the model explain why it formatted the snippet the way it did, is genuinely a better use of my time than reading `man sed` output and scouring StackOverflow for why my one-character typo is making me lose my mind. (In this way specifically, I've come to view LLMs as smart but imperfect linters of a sort?)
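To make that concrete, here's the kind of snippet I mean - the file names and patterns below are made up, and the value isn't the commands themselves, it's having the model explain why, say, the address goes before the `s///` command:

```sh
# Replace "foo" with "bar", but only on lines matching "config"
# (GNU sed shown; BSD/macOS sed wants `sed -i '' ...` for in-place edits)
sed -i '/config/s/foo/bar/g' settings.conf

# Recursively find call sites with line numbers, skipping node_modules
grep -rn --exclude-dir=node_modules "processRefund" src/
```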
Learning
The simpler (and less niche) the technology, the better it works for learning with AI, but generally I've found "conversations" with LLMs to be incredibly useful. Having a record of my thoughts and ChatGPT's responses while I learn about dependency version resolution in npm is significantly more useful than 8 Google searches, 14 open tabs, and no definitive record of my train of thought (or the information I found).
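The npm side of one of those conversations might look like this - package names here are just examples, and `npm explain` requires npm 7 or newer:

```sh
# Show where a package landed in the resolved dependency tree
npm ls semver

# Explain every dependency chain that pulled it in (npm v7+)
npm explain semver

# Check which published versions a range like ^4.17.0 actually matches
npm view "lodash@^4.17.0" version
```

The commands are trivial; the useful part is asking the model to walk through what the output actually means.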
Final Thoughts
Looking at the post I just wrote, I understand it may look like I have a very negative opinion of using AI in software engineering. I don't. I am excited to use AI tools and see how they evolve; I'm just bearish on their value proposition. I think they will change the way people write code, but not in some world-altering way.
If my thoughts here seem unorganized, it's because they are. My opinion on AI continues to change every time I use it, and every time it gets "better", which is quite frequently. I think that AI tools have the capacity to create massive amounts of technical debt (cool, more job security!) and that they generally have the potential to do more harm than good when wielded without reservation, but I do think they have value, and that engineers should learn to use these tools in a responsible way. Ultimately, I believe engineers and teams should continually have discussions with each other, with technical leadership, and with anyone concerned about developer productivity to understand what's best for each team.