This is in line with my own personal experience with LLMs and non-trivial questions. They’re excellent when answering questions on topics you know nothing about, and somehow embarrassingly wrong when you actually know the answer yourself…
It’s not clear to me why we’re still trying to encode all of human knowledge in a single model, instead of teaching the model how to look for answers from an external source (e.g. RAG).
"The benchmark includes questions where at least one LLM confabulated, in order to minimize the number of questions requiring human assessment. Because of this, and since the questions are intentionally adversarial, the absolute percentage should not be used to infer that LLMs frequently confabulate. This leaderboard does not reflect a "typical" hallucination rate."
> instead of teaching the model how to look for answers from an external source (e.g. RAG)
My benchmark specifically focuses on the RAG use case. Even with provided texts, current models still hallucinate.
I stopped playing with larger models and have been pushing smaller models with this improvised system prompt and getting good results. It seems like it forces the model to do multiple passes before giving you any response.
My smaller local models give me fewer hallucinations than Meta.ai, for example, which generally spits out pleasing answers almost immediately (which are often hallucinations, since I don’t think it is system prompted to be adversarial to the user, or to itself). I don’t have the same hallucination issue with Llama 3 8B locally because of custom system prompts.
The model has all the correct information, so it almost needs to do RAG on itself. Multiple passes on itself seems like a way to do it.
(Disclosure: I have not tried your prompt)
This is probably the truth behind the black magic I’m imagining. You could have it explicitly spit out this process, in which case you would see its first rough draft, followed by a “My first paragraph is probably wrong” critique, followed by a third paragraph where it attempts to fix the first one. There is no outside RAG in this process.
The mumbo-jumbo part of all this is that I’ve told it to “hide” this process from the user, so it outputs nothing but its final answer, and the accuracy has been just as good (for my use case at least). A rough sketch of such a prompt is below.
:Shrugs:
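A minimal sketch of what such a “draft, critique, revise” system prompt could look like. The prompt wording, the local OpenAI-compatible endpoint, and the model name are all placeholders, not the commenter’s actual setup:

```python
# Hypothetical "draft, critique, revise" system prompt in the spirit of the
# comment above. The wording, endpoint, and model name are placeholders.
from openai import OpenAI

SELF_CHECK_PROMPT = """Before answering, silently work in three passes:
1. Write a rough draft of the answer.
2. Critique the draft: list every claim you are not certain is true.
3. Rewrite the answer, correcting or dropping the uncertain claims.
Output only the final revised answer. If you are still unsure, say so."""

# Many local runners (e.g. an Ollama or llama.cpp server) expose an
# OpenAI-compatible API; base_url and model name here are assumptions.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

def ask(question: str) -> str:
    resp = client.chat.completions.create(
        model="llama3:8b",  # placeholder local model name
        messages=[
            {"role": "system", "content": SELF_CHECK_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0,
    )
    return resp.choices[0].message.content
```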
Doesn't this provoke o1 into spending more time on CoT, and therefore increase the cost per query?
I had forgotten the name of this phenomenon in humans, described it to o1, and it gave the correct answer: the Gell-Mann Amnesia effect [1]
"Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray's case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the "wet streets cause rain" stories. Paper's full of them.
In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know."
– Michael Crichton (1942-2008)
[1] https://www.epsilontheory.com/gell-mann-amnesia/

Who received the IEEE Frank Rosenblatt Award in 2010?
Who was awarded the Oceanography Society's Jerlov Award in 2018?
What's the name of the women's liberal arts college in Cambridge, Massachusetts?
In whose honor was the Leipzig 1877 tournament organized?
According to Karl Küchler, what did Empress Elizabeth of Austria's favorite sculpture depict, which was made for her villa Achilleion at Corfu?
How much money, in euros, was the surgeon held responsible for Stella Obasanjo's death ordered to pay her son?
As it has been for decades now in NLP, the 'NaN' (no-answer) type of response is important, adds great capability, and is often glossed over.
• LLMs, at least GPT models, tend to overstate their confidence.

• A frequency-based approach appears to achieve calibration closer to the ideal.
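A rough sketch of what the frequency-based approach could look like in practice: ask the same question many times at nonzero temperature and use the relative frequency of the most common answer as the confidence estimate. The model name and the exact-match grouping of answers are simplifying assumptions:

```python
# Frequency-based confidence sketch: sample the same question N times and
# treat the share of the modal answer as the confidence. In practice answers
# would need normalisation or grading rather than exact string matching.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def frequency_confidence(question: str, n: int = 100, model: str = "gpt-4o-mini"):
    answers = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model=model,  # placeholder model name
            messages=[{"role": "user", "content": question}],
            temperature=1.0,
            max_tokens=32,
        )
        answers.append(resp.choices[0].message.content.strip())
    top_answer, count = Counter(answers).most_common(1)[0]
    return top_answer, count / n  # e.g. ("Paris", 0.97)
```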
This kinda passes my vibe test. That said, I wonder—rather than running 100 trials, could we approximate this by using something like a log-probability ratio? This would especially apply in cases where answers are yes or no, assuming the output spans more than one token.
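A sketch of that shortcut, assuming an API that returns token log-probabilities (the OpenAI chat endpoint’s logprobs option is used here; the model name is a placeholder): for a yes/no question, compare the probability mass on the first answer token instead of re-sampling 100 times.

```python
# Log-prob ratio sketch for yes/no questions: read the top token
# log-probabilities of the first generated token and normalise the
# "yes" vs "no" mass, instead of sampling many completions.
import math
from openai import OpenAI

client = OpenAI()

def yes_no_confidence(question: str, model: str = "gpt-4o-mini"):
    resp = client.chat.completions.create(
        model=model,  # placeholder model name
        messages=[{"role": "user", "content": question + " Answer Yes or No."}],
        max_tokens=1,
        logprobs=True,
        top_logprobs=5,
    )
    top = resp.choices[0].logprobs.content[0].top_logprobs
    mass = {"yes": 0.0, "no": 0.0}
    for t in top:
        key = t.token.strip().lower()
        if key in mass:
            mass[key] += math.exp(t.logprob)
    total = mass["yes"] + mass["no"]
    if total == 0:
        return None  # neither "Yes" nor "No" appeared in the top-5 tokens
    return mass["yes"] / total  # estimated P("yes") among the two options
```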
> SimpleQA was created to be a greater challenge for frontier models (e.g., GPT-4o scores less than 40%).
"To be included in the dataset, each question had to meet a strict set of criteria: ... and most questions had to induce hallucinations from either GPT-4o or GPT-3.5."
"I seem, then, in just this little thing to be wiser than this man at any rate; that what I do not know I do not think I know either." - Socratos, from Plato's Apology of Socrates
> SimpleQA is a simple but challenging benchmark for evaluating the factuality of frontier models. A main limitation in SimpleQA is its scope—while SimpleQA is accurate it only measures factuality under the constrained setting of short, fact-seeking queries with a single, verifiable answer. Whether the ability to provide factual short answers correlates with the ability to write lengthy responses filled with numerous facts remains an open research question.
OpenAI is going to have some rounds of layoffs in the future.

The steps I took to find this link:
1) Look at simpleqa_eval.py. See that it loads "az://openaipublic/simple-evals/simple_qa_test_set.csv" Hmm, some weird vendored protocol.
2) I don't feel like digging through bf.BlobFile() to figure out how it downloads files and I certainly don't want to generate an API key. Cross fingers and do a Bing web search for "az://openaipublic"
3) That leads me to https://stackoverflow.com/questions/76106366/how-to-use-tikt... Ah ha, this answer has the link https://openaipublic.blob.core.windows.net/encodings/cl100k_... which automatically downloads a file.
4) Poke the relevant parts of the az:// link into this link, and a csv appears.
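A short sketch of step 4, assuming the mapping is az://openaipublic/<path> -> https://openaipublic.blob.core.windows.net/<path> as the cl100k link suggests; the reconstructed URL is an inference from that pattern, not an officially documented endpoint:

```python
# Rebuild the download URL from the az:// path seen in simpleqa_eval.py,
# assuming az://openaipublic/<path> maps to the public blob host.
import csv
import io
import urllib.request

az_path = "simple-evals/simple_qa_test_set.csv"
url = f"https://openaipublic.blob.core.windows.net/{az_path}"

with urllib.request.urlopen(url) as resp:
    rows = list(csv.DictReader(io.TextIOWrapper(resp, encoding="utf-8")))

print(len(rows), "rows")
print(rows[0])  # inspect the first question/answer record
```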
Why not? Just train an unbelievably gigantic LLM that encodes all human knowledge. A hundred trillion parameters ought to do it.