Google executives acknowledged this month they need to do a better job surfacing user-generated content after the recent Reddit blackouts.
But if we know that it makes things up and gets things wrong, how can we trust any information it gives us? Fact-checking is one thing, but at that point, you might as well skip the LLM and just look the information up yourself.
At the end of the day you can’t 100% trust anything you see on the internet. You have to think critically about the answers it gives you and cross-reference them against other sources. No different from evaluating search results, which can also be wrong. But it’s a great starting point.
It’s a lot easier to get a thorough, concise answer from ChatGPT and double-check it than it is to wade through a search engine.