PDF of Earnings (29 points) https://news.ycombinator.com/item?id=41988811
Coverage from NYT (10 points) https://news.ycombinator.com/item?id=41989256
Coverage from Verge (7 points) https://news.ycombinator.com/item?id=41989674
Coverage from CNBC (2 points) https://news.ycombinator.com/item?id=41989727
"We're also using AI internally to improve our coding processes, which is boosting productivity and efficiency. Today, more than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers. This helps our engineers do more and move faster."
Reviewing an AI's code strikes me as a pretty easy way to end up with a bunch of Heisenbugs and a team of developers who don't fully understand their own codebase. If the internal expectation is a 25% increase in development velocity, a lot of engineers will just accept PRs with an LGTM. If they reviewed the code thoroughly it might be perfectly fine, but their numbers wouldn't reflect the expected improvement, and their quarterly or yearly reviews would suffer.
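For the unfamiliar: a Heisenbug is a bug that changes or disappears when you try to observe it. A minimal, contrived Python sketch of the classic lost-update race (iteration counts invented), where adding a print to investigate alters the thread timing and can make the bug vanish:

    import threading

    counter = 0

    def bump(n):
        global counter
        for _ in range(n):
            tmp = counter        # read shared state
            counter = tmp + 1    # write back; another thread may have
                                 # incremented in between, losing its update

    threads = [threading.Thread(target=bump, args=(200_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Usually prints less than 800000. Add a print() inside the loop to
    # debug it and the extra I/O changes the timing, so the lost updates
    # can stop reproducing.
    print(counter)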
You also can't go query that AI about why it wrote the code the way it did: not only is there inherent variability in its responses to prompts, but an updated model might respond entirely differently, with no way back to the original "author".
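A toy illustration of that variability, with invented next-token probabilities standing in for a real model: sampling at a nonzero temperature means the same prompt can yield different completions on each call, and a model update swaps out the distribution entirely.

    import random

    # Invented probabilities standing in for a real model's choices given
    # one fixed prompt; with temperature > 0 the output is sampled.
    model_v1 = {"use a for-loop": 0.5, "use a list comprehension": 0.3,
                "use a while-loop": 0.2}
    # An "updated model" is effectively a different distribution entirely.
    model_v2 = {"use a list comprehension": 0.6, "use a for-loop": 0.3,
                "use a generator": 0.1}

    def complete(model):
        return random.choices(list(model), weights=list(model.values()))[0]

    print(complete(model_v1), "|", complete(model_v1))  # may differ per call
    print(complete(model_v2))  # the original "author" no longer exists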