Hi @Bravo,

this is what our CTO Tony Lewis, who deals with LLMs professionally on a daily basis, thinks about OpenAI’s claim that “GPT-5 is significantly less likely to hallucinate than our previous models”:
https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-472257
View attachment 90845
Your own opinion about GPT-5 is that “it has improved noticeably over its predecessor”, from which we can infer you must also believe hallucinations are now a much rarer issue, since this alleged improvement has been one of OpenAI’s main selling points for their latest model.
In addition, you appear to be extremely confident about having the expertise to weed out the occasional hallucinations in the ChatGPT replies you get. At least those that relate to ChatGPT’s “main points”, which you claimed you would fact-check before sharing here on TSE “to ensure it’s not hallucinating”.
(How about hallucinations relating to minor points, though? Even inaccurate minor points can distort our interpretation of things.)
https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-472178
View attachment 90841
So how come the following hallucination escaped your watchful eye?
Did you not consider ChatGPT’s claim that Senator Cindy Hyde-Smith sits on the Defense Subcommittee to be one of the “main points” of the LLM’s “in-depth explanation” that would require verification before posting?
Let me help you with the fact-checking, then:
https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-471645
View attachment 90842
FACT: No, Cindy Hyde-Smith does not sit on the Defense Subcommittee of the Senate Appropriations Committee.
Exhibit A:
United States Senate Committee on Appropriations
www.appropriations.senate.gov
View attachment 90843
Exhibit B:
www.hydesmith.senate.gov
View attachment 90844