AI is fine. Not knowing what it wrote is not.
I am not anti-AI. I use it every day, and the work I ship is better and faster for it. What I am against is the thing I see happening to a specific cohort of newer developers — the ones who started coding one or two years ago, in the middle of the AI boom, and who never built the muscle that comes before the AI does anything useful.
That muscle is reading code. Debugging code. Sitting with a bug for an uncomfortable hour and walking out of it actually understanding why it happened. If you skipped that part, AI does not make you faster. It makes you a printer for code you cannot read, and a printer that confidently prints the wrong thing is a worse tool than a slow human who can tell when something is off.
I have been writing code for eleven years. The reason I can use AI well now is that I spent the first nine without it. I know what a fishy stack trace looks like. I know what “this function is doing too much” feels like before I can articulate why. When the model hands me forty lines, I am not accepting them — I am reviewing them, the same way I would review a teammate’s PR, and I reject things constantly.
The cohort I worry about cannot do that review. They paste, they run, it works, they move on. When it breaks — and it always breaks eventually — they paste the error back into the chat and hope. There is no mental model under the hood. There is no instinct for where bugs live. There is just a slowly growing pile of code nobody, including the person who shipped it, actually understands.
This is not a moral failing. It is a training-data problem in their own heads. They are missing the years of exposure that turn raw output into a thing you can evaluate, and the only fix is the boring one: write code without the model sometimes, on purpose, and sit with what is hard about it. Not because AI is bad. Because being able to read what it gives you is the entire job now.