3 points | by andsoitis 9 hours ago
But joking aside, I can't imagine this is true, because IMO current LLMs' coding abilities are very limited. Using one as a tool definitely makes me more productive, but I use it mainly for boilerplate and short examples (cases where I previously had to read some library documentation).
Whenever the problem requires actual thinking, it fails horribly because it cannot reason. So unless that's also true for Google devs, I can't see where that 25% number comes from.
Over time you learn to work with the style of LLMs and to architect things in an even more obvious, predictable way. That seems to reduce bugs by a lot and makes adding features easy, since the AI is trained on established codebases and anticipates what I might need later, rather than just producing tiny spike solutions I have to rewrite.