After working on a team that uses LLMs in agentic mode for almost a year, I'd say this is probably accurate.
Most of the work at this point for a big chunk of the team is trying to figure out prompts that will make it do what they want, without producing any user-facing results at all. The rest of us will use it to generate small bits of code, such as one-off scripts to accomplish a specific task - the only area where it's actually useful.
The shine wears off quickly after the fourth or fifth time it "finishes" a feature by mocking data because so many publicly facing repos it trained on have mock data in them so it thinks that's useful.
With shit like this going on, right-wingers who pretend it's the left censoring speech can fuck themselves sideways with a pineapple.