For django-31056, they claim the AI-generated patch is "incomplete" because it's "missing critical parts of this logic, such as the try-except block and the check for a running event loop." But if you look at the diff, that's clearly wrong. The try-except block and the running-loop check were already there before the patch; the human patch just indented them, making them appear as both - and +, while the AI patch didn't. To me, the AI patch seems correct. It's slightly less efficient than the human patch when DJANGO_ALLOW_ASYNC_UNSAFE is set, but slightly more efficient when it isn't (which is the common case!). The human patch does feel more natural, but the AI patch is fine. I'd grade it a tie between human and AI.
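For anyone who hasn't pulled up the diffs, here is a rough sketch of the two shapes I'm describing (the function names and exact layout are mine, not the literal patches):

```python
import asyncio
import os
from functools import wraps


class SynchronousOnlyOperation(Exception):
    pass


def async_unsafe_human_style(message):
    # Human patch shape: check the env var first, and only probe for a
    # running event loop when the override is not set (the pre-existing
    # try/except block gets indented under the new condition).
    def decorator(func):
        @wraps(func)
        def inner(*args, **kwargs):
            if not os.environ.get("DJANGO_ALLOW_ASYNC_UNSAFE"):
                try:
                    event_loop = asyncio.get_event_loop()
                except RuntimeError:
                    pass
                else:
                    if event_loop.is_running():
                        raise SynchronousOnlyOperation(message)
            return func(*args, **kwargs)
        return inner
    return decorator


def async_unsafe_ai_style(message):
    # AI patch shape: leave the existing loop detection untouched and only
    # consult the env var once a running loop has actually been found.
    def decorator(func):
        @wraps(func)
        def inner(*args, **kwargs):
            try:
                event_loop = asyncio.get_event_loop()
            except RuntimeError:
                pass
            else:
                if event_loop.is_running():
                    if not os.environ.get("DJANGO_ALLOW_ASYNC_UNSAFE"):
                        raise SynchronousOnlyOperation(message)
            return func(*args, **kwargs)
        return inner
    return decorator
```

When the env var isn't set (the common case), the AI shape skips the environment lookup entirely; when it is set, the human shape skips the event-loop probe. That's the whole efficiency difference.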
For django-32517, they claim that the human and AI patches "produce entirely different outputs", but they actually do exactly the same thing. The human version has `reversed(self.dict)`, while the AI version has `reversed(self.dict.keys())`. Iterating over a dictionary in Python just gives you its keys, so `reversed(self.dict)` and `reversed(self.dict.keys())` yield exactly the same sequence; calling `.keys()` first makes no difference. The human patch is more idiomatic, but it's also more confusing, as shown by the fact that it confused the authors of this paper. I'd grade it another tie.
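A quick check in a Python 3.8+ REPL shows the equivalence:

```python
d = {"a": 1, "b": 2, "c": 3}

# Dicts and their key views both support reversed() since Python 3.8,
# and both iterate the keys in reverse insertion order.
assert list(reversed(d)) == ["c", "b", "a"]
assert list(reversed(d.keys())) == ["c", "b", "a"]
assert list(reversed(d)) == list(reversed(d.keys()))
```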
Edit: I tried to sign up for OpenReview so I could leave a comment about this, but the system wouldn't let me register without completing a form that assumes you have an academic position. Perhaps I should email the authors.
According to the paper:
> 1. Solution leak: represents instances where the solution to the issue is clearly outlined in the issue description or comments on GitHub. Since both the issue descriptions and comments (referred to as hints_text in the SWE-Bench study) are provided as input to the models, these LLM models can extract the solutions directly from this information instead of generating it independently.
And yet, the SWE-Bench authors themselves explicitly state:
> In short, for participating on the SWE-bench leaderboard, using hints_text in any manner is not allowed. Although we don't explicitly say this in the original paper, we also do not make any mention of using the hints_text anywhere.
So it's a made-up issue that would only occur if you deviated from the paper's implementation and explicitly added a field called "hints" that isn't used anywhere.
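To make that concrete, here is a minimal sketch of a leaderboard-compliant harness input, assuming the field names from the public SWE-bench dataset on Hugging Face (problem_statement, hints_text, etc.); the prompt wording is a made-up placeholder:

```python
from datasets import load_dataset

# Sketch only: the rules quoted above forbid using hints_text, so a
# compliant harness builds its prompt from the issue text alone and
# never reads that column.
ds = load_dataset("princeton-nlp/SWE-bench", split="test")

def build_prompt(instance):
    # Only the repo name and the issue body go into the model's context.
    # hints_text exists in the dataset but is never touched here.
    return (
        f"Repository: {instance['repo']}\n"
        f"Issue:\n{instance['problem_statement']}\n\n"
        "Produce a patch that resolves this issue."
    )

prompts = [build_prompt(x) for x in ds]
```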
[1] Don't ask me why they cited the issue number, 16669, instead of the pull request number, 16766, when only the latter appears in the dataset. This confused me for a bit.
IMHO, it is probably better to discard this paper and wait for someone else to cover this important topic.
I've been playing around with some automated code review tools recently, and it's surprising how often they flag things that are technically correct but just... unusual. Style matters, especially for maintainability.
This matches my intuition about the coding performance of these models a lot better. I don't think any current coding benchmark accurately measures coding performance.
In my case, I would guess less than 10% of the code I get out of AIs is useful.
What sort of code are you getting those results with? Is it yet-another-react-frontend-button? Is it ebpf programs? Is it a parser in rust?
For the latter two, I've found AI to have pretty low rates, and for the former I haven't had the desire to try.
In my experience they're genuinely good at:
1. Very greenfield work, where the LLM doesn't have many constraints to deal with, can fully control the setup, and doesn't have to ingest a lot of existing context
2. Very small projects that largely follow established patterns (CRUD, frontends, etc.)
3. Well-established implementation work (the kind of feature that's a simple JIRA ticket)
In my experience they're painfully bad at:
- Novel/niche work where there aren't really answers online to what you're trying to do
- Complex refactoring
- Architecting within existing constraints (other systems, etc.)
it's gonna autocomplete: `if err != nil { return fmt.Errorf("%w: could not foo: %v", err, thing) }`
What I mostly enjoy using it for is just writing bash scripts for me. I hate writing bash but Claude is excellent at writing the scripts I need.
AI isn't writing software features or anything close to that for me at the moment. But what it is great at is just being a really excellent intellisense. Knowing what you're likely to want to do in the next ~5 lines and just filling it out in one button press. Things like intellisense and automatic refactoring tools were big productivity improvements when they became ubiquitous. AI will be the same for most people, an intellisense on steroids.
Also, writing tests. Writing tests can be quite mundane and boring. But I can just type out what I want tested, give it some files as context and it can be pretty good at generating some tests.
Does AI get it right every time? No way. But, as a developer, I'd rather spend 10 minutes trying to coax an AI into generating me 90% useable code for some boring task than spend 20 minutes typing it out myself. Often, I probably could write the code faster than I could prompt an AI, but being lazy and telling something else to do the work feels pretty good and relaxing.
That's what I tend to find with English writing as well. It's not great. But sometimes you just need decent generic prose for an introduction or an explanation of something. If you know enough to adjust as needed, it can save time for something that readers are probably just skimming anyway. As I've written previously, about a year ago I was working on cleaning up a bunch of reference architectures and I used Google's Bard in that case to give me a rough draft of background intros for some of them which I modified as needed. Nothing miraculous but saved me a bit of time.
Similar. I've got a joke language project on the back burner; doing it properly requires going back over my 23-year-old university notes on yacc etc., so I tried AI… the AI just makes a mess of it*.
For anything front end, even the original ChatGPT-3.5 model is basically magic (i.e. sufficiently advanced technology).
* I think the last time I touched it was just before o1 was announced; as o3 is now in the free tier of ChatGPT, I should try again…
There are a few languages/tools I use often but am not an expert in, and I have been using Claude 3.5 to help me work with existing code. On paper this is a perfect use case. In practice it's like working with an intern that has Google in front of them and enough jargon to convince me what they're saying isn't bullshit. Eventually, I'll be able to coax the answers I need out of it.
I'll say, though, that the fact AI can't say "I don't know" (or, closely related, "that is not possible in the context you've given me"), combined with its inability to reason, is what gives you results that look OK but are subtly trash.
Instead, people squint their eyes at scrolling matrix text and convince themselves it must be true.
And my gut tells me they are the worst for the kinds of long-established software conglomerates many professionals work at, which have tons of internal services, integrated acquisitions, etc. etc.
Ultimately the AI is good at what the average developer online is good at, probably full-stack web dev of projects from scratch.
where's the value everyone on this site and on LinkedIn (but NONE in my real or professional life) seems to get?
I feel like I'm being gaslit when people say Cursor writes 80% of their code, and honestly, it's the conclusion that makes the most sense to me -- the people making these posts must be well-invested in the startups that stand to profit if AI is actually as good as they say. You know, shills.
I also have access to a full-service "junior developer" AI that can take in an entire git repo at once, and its code outputs are significantly less useful -- maybe 10%.
I think a lot of people's success rate with AI boils down to their choice of language/toolkit (AI does much better the more common it is) and how they prompt it.
Note that you still need an experienced set of eyes supervising, the thought of an LLM committing to a git repo without a human in the loop scares me.
> where's the value everyone on this site and on LinkedIn (but NONE in my real or professional life) seems to get?
I can remember how to describe that every time I need to make a button; I can't remember the new flavor-of-the-month special snowflake way of expressing it. I've had decent traction just listing the pieces in my stack and then subbing those out whenever the stack changes.
I don't understand the notion that it is faster to generate repetitive code with keyboard macros. I use Vim-mode exclusively, and while I'm not a Vim master, I don't think there's any set of macros that will do what Copilot can do.
It's not that Copilot is smart. It's that 60% of what I do doesn't require much intelligence to anticipate. It's the 40% that matters; the remaining 60% can be trivially guessed, and that is exactly what Copilot does.
Maybe this will help: with an AI intellisense, imagine that each keystroke collapses the possibility space down to a smaller, finite set of outcomes. You write exactly the code the dumb AI needs in order to predict the rest.
There are a LOT of reasons why AI intellisense is not all there yet: it can be distracting; it can try to generate too much at once; none of the tools have an LSP integrated, so they will suggest library methods that don't exist. This is all true, and yet it is still highly valuable in some domains, for some people.
That said, if you write x86 assembly for a living, you are probably out of luck.
(I write Kotlin, Java for Android apps and services, C++ that is tightly integrated with the SoC. Python and Bash for command-line tools that invoke REST APIs. Copilot is useful for these domains.)
The discussion is more around highly autonomous AI "coders" (cursor, cline/roocode, (open)devin, etc.)
I've sat through some interviews recently with candidates who started their careers in the last 6 years or so… during the boom cycle. Some were quite good, but a troubling number were clearly over-leveled at their current/previous employers.
For example, last month we interviewed someone for a Staff Engineering role (current role: L5 Senior II engineer), for Python. This person was unable to explain what a set was in Python, didn’t seem to grok the basic HTTP request/response pattern etc. This wasn’t a leetcode interview; it was an engineering conversation. It was the same questions we’d given dozens and dozens engineers in the past. It wasn’t a language barrier issue (guy was American, interviewer was American). Dude just seemed to have a very very narrow set of skills.
For people like this I imagine AI feels like a superpower.
Problem is, they don't know enough to really assess whether what the LLM is spitting out is any good, so they claim amazing wins.
I mostly agree with you, but I do think it's faster than searching for and finding the boilerplate you need. I also think AI code completions and the ability to use it to generate the small blocks you will put together into the main app are helpful. Idk, it's not a nothing burger. It's not going to start working at AWS either.
Training loops, sure... those are pretty much straight pattern recognition w/ well-represented APIs. But more broadly? Not so much.
I find the models very useful to chat about library documentation or high level algorithm concepts, but I find the code it generates to be… I don’t know how else to say it… really bad and often out of context.
I know developers who blindly follow the hype and use them to generate production code. That scares the poop emoji out of me, and the code reads like an asset flipped 3D game.
Matches my experience pretty well too. It'll usually output something that a novice would assume is correct but an expert can clearly identify as "know-it-all teenager forum post" level stuff.
It also goes to how a lot of people misunderstand the replication crisis. 'Hard science' really should replicate: we should be able to filter out sources of error and variance because the phenomena (generally) aren't affected by our attempts to measure them. Making social science replicate often requires so much control that it is deabstracted from reality, meaning the effort at replication reduces the value and usefulness of the knowledge. Generalizable claims are hard because the sources of variance are so much larger and more complex. Speaking as someone who transitioned from engineering to the social sciences, this is the concept that made the transition hard. I started my time in social sciences with a cool idea of a whole career based on just doing replication studies, because science. That was... useful and stupid at the same time.
OAI, xAI, Anthropic, and Google all score incredibly well; then you go to try and write code, and it's just okay.
They claim it can do PhD-level reasoning, but here I am, not trusting it with basic computational thinking.
Not sure that's really the claim. I think they claim that performance on benchmarks like GPQA indicates PhD-level knowledge of different fields.
People don't know what questions to ask.
1. Did the benchmark authors not review the issues and make sure the solution was not present in the issue?
2. Are the issues locked after they’re included in the dataset? You’d think they would be immutable for reproducibility.
3. For the agents writing patches, is test running part of their inner-loop validation? If they write a patch that makes the test pass, then the job's done. Or is that validation step kept secret from the agent? I don't see how, unless the tests aren't part of the repo.
I looked at a bunch of issues in the dataset when SWE-Verified first came out and I was trying to make scaffolding to solve it, and I don't remember a single time where the solution existed verbatim in the issue. I'm not saying it never happens, but it would have to be rare.
> 2. Are the issues locked after they’re included in the dataset?
No one changes the issues in the dataset but of course the original issue on github will have been resolved long ago. The models don't have access to this in their context, but if they were trained on github there's a very real risk that they've seen the solution.
> 3. For the agents writing patches, is test running part of their inner-loop validation? If they write a patch that makes the test pass, then the job's done. Or is that validation step kept secret from the agent? I don't see how, unless the tests aren't part of the repo.
The tests aren't provided to the model, they are run after the model has proposed its final answer.
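In other words, the held-out tests only come into play once the patch is frozen. A simplified sketch of that flow (not the real harness; the test runner command here is a stand-in):

```python
import subprocess

def evaluate(repo_dir: str, model_patch: str,
             fail_to_pass: list[str], pass_to_pass: list[str]) -> bool:
    # Apply the model's final answer to a clean checkout.
    subprocess.run(["git", "apply", "-"], input=model_patch, text=True,
                   cwd=repo_dir, check=True)

    def passes(test_id: str) -> bool:
        # Stand-in runner; the real harness uses per-repo test commands.
        result = subprocess.run(["python", "-m", "pytest", test_id],
                                cwd=repo_dir, capture_output=True)
        return result.returncode == 0

    # FAIL_TO_PASS tests must now pass, and PASS_TO_PASS must keep passing.
    return all(passes(t) for t in fail_to_pass + pass_to_pass)
```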
> Whether we consider the issue description to be underspecified and hence unfair to be testing on.
> Whether the FAIL_TO_PASS unit tests filter out valid solutions.
and a bit more. This is pointed out in the linked paper too.
The moral of the story, to me: don't believe the paid human annotators. You can (hopefully) still believe the PhD students doing this unpaid work as part of their research ;-)
[1] https://openai.com/index/introducing-swe-bench-verified/
Every quarter, you have a couple thousand volunteers provide 2 GitHub issues from the past 3 months that are nontrivial to resolve and have strong test cases. Each volunteer then cross-checks 2 issues from other volunteers. In return, the volunteers get a 1-month free subscription to some AI service.
This dataset is then published as SWE-UberBench-2025-02 or something. People can then only evaluate their coding LLM on datasets published after their training period.
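The "published after their training period" rule is easy to enforce mechanically if each instance records when its issue was created; here is a sketch with made-up instance IDs, dates, and cutoff:

```python
from datetime import date

TRAINING_CUTOFF = date(2025, 2, 1)   # hypothetical model training cutoff

candidate_instances = [
    {"instance_id": "repo-1__issue-42", "created_at": "2025-03-10"},
    {"instance_id": "repo-2__issue-7",  "created_at": "2024-12-01"},
]

# Keep only issues filed after the model's training data ends.
fresh_eval_set = [
    x for x in candidate_instances
    if date.fromisoformat(x["created_at"]) > TRAINING_CUTOFF
]
print([x["instance_id"] for x in fresh_eval_set])   # ['repo-1__issue-42']
```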
Looking at the benchmark, https://www.swebench.com/, about half of scored submissions score under 1/3 correct? So they're either not cheating, or not cheating effectively?
Anyway, another interpretation is that the model also needs to decide whether the code in the issue is a reliable fix or not.
Smaller ones don't.
Try the largest Llama models, and phrase your prompt like a sentence to be completed instead of asking a question.
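For example (a hedged sketch; the model id is just a placeholder for whichever local Llama variant you actually run):

```python
from transformers import pipeline

# Placeholder model id; swap in the base model you have access to locally.
generator = pipeline("text-generation", model="meta-llama/Llama-3.1-8B")

# Question-style prompt, which base models often handle poorly:
question = "How do I reverse a list in Python?"

# Completion-style prompt: start the sentence and let the model finish it.
completion = "In Python, the simplest way to reverse a list in place is to call "

for prompt in (question, completion):
    print(generator(prompt, max_new_tokens=40)[0]["generated_text"])
```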
If anyone can find a better title (i.e. more accurate and neutral, preferably using language from the article itself) we can change it again.
It's so vital that it's not leaked and that it's fit-for-purpose and manually assessed. These general purpose, public benchmarks based on questionable metrics are effectively worthless to assess real programming skill.
Case in point, as others have mentioned here, Claude scores modestly on these benchmarks but vastly better than the alternatives in practice. I don't trust Claude fully but far more than OpenAI models; it's not even close. The IRL performance advantage is not reflected in any of these benchmarks.
Instead of resolving it, some leaders are further complicating their meaning.
Such as OpenAI grading their benchmarks based on "how much money they made" or "how easy a model was convinced to hand over fake money".
Or, as in the case of LLMs and benchmarks: When a benchmark becomes a target, it ceases to be a good benchmark.
This is fine, many of my real tickets already explain the solution. A good ticket often offers a solution or where to start looking.
To me, the analysis of SWE-Bench is a solid and informative contribution. My guess is that to meet the conference's submission bar they had to come up with their own benchmark (SWE-Bench+), which wasn't thorough enough, and the paper got rejected mainly because of that.
I always tell my customers to ignore benchmarks and compare outcomes with their own workloads. Benchmarks are almost completely useless in the real world.
Is this what Hofstadter means by a strange-loop?
LLMs, by contrast, are not designed to just repeat what's already in the instructions, no matter which stance on LLM design you subscribe to.
* exceptions apply
1) No known solutions, so there's no "ground truth" dataset to train on
2) Presumably hard to solve
3) But easy to verify a solution if one is provided.
This, of course, is easier done on the STEM side of things, but how do you automatically test creativity, or philosophical aptitude?
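That third property, cheap verification, is the part that automates well on the STEM side. A toy illustration of hard-to-find but easy-to-check:

```python
# Factoring a large semiprime is expensive, but checking a claimed
# factorization is a single multiplication.
def verify_factorization(n: int, p: int, q: int) -> bool:
    return p > 1 and q > 1 and p * q == n

n = 2017 * 2027          # pretend we were only given n
print(verify_factorization(n, 2017, 2027))   # True, verified instantly
print(verify_factorization(n, 2017, 2029))   # False
```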