Most companies with a software engineering team are heavily pushing their engineers to use AI, to the extent that token usage is part of performance reviews. Most engineers have no option other than to toe the line and use AI everywhere.
I've been speaking to engineers across industries and company sizes, and the sentiment is consistent: people seem to have offloaded their thinking work entirely to LLMs.
You report a bug and receive an AI-powered analysis of a component that has nothing to do with it. Meanwhile, a human reviewing the same report would have gone straight to the root cause. I am not against AI tooling, but it should be a supplement, not a replacement.
There's a similar trend on pull requests. People used to write well-crafted commit and PR descriptions, and they were a source of knowledge. Nowadays, it's a one-liner appended with a coderabbit summary that reads like a wall of text. I find these summaries incredibly hard to read, and they often include incorrect assumptions. Even when people try to put in the effort, their teams have mandated tooling where AI appends its own summary, and we're back to square one.
My point with this rant is that not everything needs AI. My personal objections aside, AI is not at a stage where everything can be automated, and we need to stop pretending that it is. I'm fortunate that my team hasn't been plagued by these problems, but they affect us all, often quietly behind the scenes. And no, it's not just a case of the wrong tooling or bad prompting.