Practically no LLM is any good at logic. Try playing ASCII tic tac toe against one. Every GPT model lost against my four-year-old niece, and I wouldn't trust her to write production code 🤣
Once a single model (it doesn't have to be an LLM) can beat Stockfish at chess, AlphaGo at Go, and my niece at tic tac toe, and can one-shot a Rust program that compiles and works (on the surface; a scratch pad is allowed), then we can start thinking about replacing engineers.
Just look at the dotnet runtime repository, where Microsoft employees are currently trying to work with Copilot. It opens PRs with errors like forgetting to add new files to the project, writes code that doesn't compile, fixes symptoms instead of underlying problems, and so on. Go see for yourself.
I'm not saying that AI (especially AGI) can't replace humans. It definitely can and will; it's just a matter of time. But state-of-the-art LLMs are basically extremely good "search engines" or interactive versions of Stack Overflow, not good enough for real "thinking tasks".
Play ASCII tic tac toe against 4o a few times yourself. A model that can't even draw the board consistently shouldn't be writing production code.
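If you'd rather automate the experiment than paste boards into a chat window by hand, something like this works. A minimal sketch, assuming the OpenAI Python SDK (>= 1.0) with an API key in the environment; the model name, prompt wording, and the random "opponent" are my own choices, not anything official:

```python
# Sketch: pit a chat model against a random-mover at ASCII tic tac toe.
import random
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def render(board):
    """Render the 3x3 board as ASCII, as you'd paste it into a chat."""
    rows = [" | ".join(board[i:i+3]) for i in (0, 3, 6)]
    return "\n---------\n".join(rows)

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def ask_model(board):
    """Ask the model for a move; return a legal cell index 0-8, or None."""
    prompt = (
        "We are playing tic tac toe. You are X, I am O. Cells are numbered "
        "0-8, left to right, top to bottom. Current board ('.' = empty):\n\n"
        f"{render(board)}\n\nReply with ONLY the number of the cell you take."
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    text = resp.choices[0].message.content.strip()
    digits = [ch for ch in text if ch.isdigit()]
    if digits:
        move = int(digits[0])
        if 0 <= move <= 8 and board[move] == ".":
            return move
    return None  # illegal move or unparseable answer

board = ["."] * 9
for turn in range(9):
    if turn % 2 == 0:  # model plays X
        move = ask_model(board)
        if move is None:
            print("Model made an illegal move. Game over.")
            break
        board[move] = "X"
    else:  # "niece" plays O: pick any empty cell at random
        board[random.choice([i for i, c in enumerate(board) if c == "."])] = "O"
    print(render(board), "\n")
    if winner(board):
        print(winner(board), "wins")
        break
```

Run it a few times and count how often the model makes an illegal move or loses to pure random play; that's the whole point of the test.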