Christian Community Forum


Study: AI Search Engines Cite Incorrect Sources at a 60% Rate

Ars Technica reports that the research tested eight AI-driven search tools equipped with live search functionality and discovered that the AI models incorrectly answered more than 60 percent of queries about news sources. This is particularly concerning given that roughly 1 in 4 Americans now use AI models as alternatives to traditional search engines, according to the report by researchers Klaudia Jaźwińska and Aisvarya Chandrasekar.

Error rates varied significantly among the platforms tested. Perplexity provided incorrect information in 37 percent of queries, while ChatGPT Search was wrong 67 percent of the time. Elon Musk’s Grok 3 had the highest error rate at 94 percent. For the study, researchers fed direct excerpts from real news articles to the AI models and asked each one to identify the headline, original publisher, publication date, and URL. In total, 1,600 queries were run across the eight generative search tools.

The study found that rather than declining to respond when they lacked reliable information, the AI models often provided “confabulations” — plausible-sounding but incorrect or speculative answers. This behavior was seen across all models tested. Surprisingly, paid premium versions like Perplexity Pro ($20/month) and Grok 3 premium ($40/month) confidently delivered incorrect responses even more frequently than the free versions, though they did answer more total prompts correctly.

Evidence also emerged suggesting some AI tools ignored publishers’ Robot Exclusion Protocol settings meant to prevent unauthorized access. For example, Perplexity’s free version correctly identified all 10 excerpts from paywalled National Geographic content, despite the publisher explicitly blocking Perplexity’s web crawlers.
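For background on what "Robot Exclusion Protocol settings" means here: publishers serve a robots.txt file listing which crawlers may fetch which paths, and compliant crawlers check it before fetching. A minimal sketch using Python's standard urllib.robotparser, with a hypothetical robots.txt and crawler names (the real file National Geographic serves may differ):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content; a real publisher serves this at /robots.txt.
ROBOTS_TXT = """\
User-agent: PerplexityBot
Disallow: /
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A compliant crawler makes this check before fetching a page;
# the study's finding is that some tools apparently skip it.
print(rp.can_fetch("PerplexityBot", "https://example.com/article"))  # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))   # True
```

The protocol is purely advisory: nothing technically stops a crawler from fetching a disallowed URL, which is why a blocked tool could still quote paywalled content.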

Even when the AI search tools did provide citations, they frequently directed users to syndicated versions on platforms like Yahoo News rather than to the original publisher sites — even in cases where publishers had formal licensing deals with the AI companies. URL fabrication was another major issue, with over half of citations from Google’s Gemini and Grok 3 leading to fabricated or broken URLs that resulted in error pages. 154 out of 200 Grok 3 citations tested led to broken links.
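The kind of citation audit described above can be approximated with a simple two-step check. A sketch, with hypothetical helper names (the researchers' actual tooling is not described): first test whether a citation is even a well-formed http(s) URL, then map the HTTP status a HEAD request would return to a verdict.

```python
from urllib.parse import urlparse

def looks_valid(url: str) -> bool:
    # Cheap structural check before any network request: a fabricated
    # citation often isn't even a well-formed http(s) URL.
    parts = urlparse(url)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

def classify(status: int) -> str:
    # Map an HTTP status code (e.g. from a HEAD request) to a verdict.
    if 200 <= status < 300:
        return "ok"
    if status in (301, 302, 307, 308):
        return "redirect"   # may land on a syndicated copy, not the original
    return "broken"         # 404s behind fabricated URLs land here

print(looks_valid("https://example.com/story"))  # True
print(classify(404))                             # broken
```

A run over 200 citations with checks like these is how a "154 broken links" figure could be tallied.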


 
Real, live human librarians do a much, much better job than any of that.

Yahoo's search engine used to be good when it started, because real, live human librarians built the database and organized the information. Not library technicians, information specialists, database specialists, coders, etc., but real librarians with master's degrees in library science. The point was to organize, catalog, and tag information and data so it could be located and retrieved efficiently, and so related information and data could be found and retrieved just as efficiently.

Yahoo went downhill as soon as they stopped using real, live human librarians and started using library technicians, then database specialists, and so on. 😭
 
Last night, during a phone conversation, my son brought up a detail about something we were discussing. I told him that wasn't correct. He said he had gotten the info from an AI search.

We then discussed where AI gets its information: so much digitized info is incorrect (and AI doesn't know that) that AI will likely produce an untrue result much of the time.
 
Recently I heard a good reflection on why AI can't replace humans, and the speaker gave some good examples. One was that human thought processes can navigate in the moment, like a surfer. On a big wave, good surfers focus on conditions in that moment, blocking out past experience and allowing new decisions as minute changes present new challenges. A machine hasn't got that gut-level anticipation instinct that kicks our executive function into overdrive when an unexpected challenge or risk arises. In other words, AI will never exceed garbage in, garbage out, because it's unable to create a new thought and has no ability to discern false info.

I’m trying to recall the podcast, maybe it was Bret Weinstein?
 
Also, AI has no morals or conscience, so that's problematic in itself.
 