Dear all!
As I am quite new to all this, this may be a very noob question. I prompted Bing, Bard and ChatGPT (3.5) with the same question. Bing straight up answered different questions, but it delivered sources I could check. Bard and ChatGPT answered my questions but invented all of their sources: they just made up random authors and titles. Bard even delivered links to said scientific articles, but when you followed a link, the article in question was completely different.
-
How can I trust the delivered results when the sources are made up?
-
And also: why? Why didn’t it just say, for example, that there are no meta-analyses?
-
Is it better in the paid version of ChatGPT?
Thanks in advance!
https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)
https://machinelearningmastery.com/a-gentle-introduction-to-hallucinations-in-large-language-models/
In short: Language models are not search engines or databases. They make up text. Hallucinations are unavoidable.
You can’t trust them. This is still an active area of research. Maybe it’ll get better in a few years once researchers find good ways to mitigate it.
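To see why they hallucinate, here’s a toy sketch in plain Python (not any real model’s code, and the probabilities are made up for illustration): an LLM just samples the next piece of text from a learned probability distribution, so a plausible-sounding but nonexistent citation can easily be the most likely continuation. There is no database lookup anywhere in the loop.

```python
import random

# Toy next-token distribution a model might have learned for the text
# following "According to the meta-analysis by ...".
# The model only knows what *sounds* likely, not what actually exists.
next_token_probs = {
    "Smith et al. (2019)": 0.40,        # plausible-sounding, possibly fake
    "Jones & Lee (2021)": 0.35,         # ditto
    "no such study I am aware of": 0.25,
}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())

# Generation is sampling, not retrieval: whichever continuation is
# statistically likely gets emitted, with no fact-check behind it.
print(random.choices(tokens, weights=weights)[0])
```

Run it a few times: most runs it confidently “cites” a study, because the fabricated citations carry most of the probability mass. Real models work on the same principle, just at vastly larger scale.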
Awesome, I skimmed through the wiki page. I need to delve into this!
AI is super fascinating, especially those Large Language Models (LLMs) that have become fashionable recently. You can read more about running them yourself (on a decent computer) and tinkering around on these two other Lemmy communities I like: