Bad developers just do whatever. It doesn't matter if they wrote the code themselves or if a tool wrote it for them. They aren't going to be more or less detail-oriented whether it's an LLM, a doxygen plugin, or their own fingers that produced the code.
Which is the problem with claims like that. It's nonsense, and anyone who has ACTUALLY worked with early-career staff can tell you... those kids aren't writing much better code than ChatGPT, and there is a reason so many of them have embraced it.
But it also fundamentally changes the conversation. It stops being "We should heavily limit the use of generative AI in coding because it prevents people from developing the skills they need to evaluate code" and becomes "We need generative AI to be better."
It was the exact same thing with "AI can't draw hands". Everyone and their mother insisted on that. Most people never thought about why basically all cartoons use four-fingered hands and so forth. So, when the "Studio Ghibli filter" came out? It took off like wildfire because "Now AI can do hands!" and there was no thought toward the actual implications of generative AI.
Probably?
This is the kind of thing that a LOT of companies outsource. Mostly to their detriment.