2 points | by thunderbong | 15 hours ago
An LLM cannot exceed the mastery present in its training data. Realistically it will always be biased below the maximum mastery in that data, simply because so much of the data reflects less-than-mastery. The limits of the LLM will always be found in its training data, which raises the question of whether the maximum human mastery even appears in that training data to begin with.
While it might be reasonable to assume that a sufficiently specialized LLM will be able to output code equal to the then-best human coders whose work is sampled in its training data, it is not particularly reasonable to assume that LLMs (absent some currently sci-fi emergence) will exceed the very best human practitioners, especially those whose work isn't publicly discoverable.
It's even less reasonable to assume LLM output won't, in turn, drive better human output.