Christian Community Forum


Study: AI Search Engines Cite Incorrect Sources at a 60% Rate

Ars Technica reports that the research tested eight AI-driven search tools equipped with live search functionality and discovered that the AI models incorrectly answered more than 60 percent of queries about news sources. This is particularly concerning given that roughly 1 in 4 Americans now use AI models as alternatives to traditional search engines, according to the report by researchers Klaudia Jaźwińska and Aisvarya Chandrasekar.

Error rates varied significantly among the platforms tested. Perplexity provided incorrect information in 37 percent of queries, while ChatGPT Search was wrong 67 percent of the time. Elon Musk’s Grok 3 had the highest error rate at 94 percent. For the study, researchers fed direct excerpts from real news articles to the AI models and asked each one to identify the headline, original publisher, publication date, and URL. In total, 1,600 queries were run across the eight generative search tools.

The study found that rather than declining to respond when they lacked reliable information, the AI models often provided “confabulations” — plausible-sounding but incorrect or speculative answers. This behavior was seen across all models tested. Surprisingly, paid premium versions like Perplexity Pro ($20/month) and Grok 3 premium ($40/month) confidently delivered incorrect responses even more frequently than the free versions, though they did answer more total prompts correctly.

Evidence also emerged suggesting some AI tools ignored publishers’ Robot Exclusion Protocol settings meant to prevent unauthorized access. For example, Perplexity’s free version correctly identified all 10 excerpts from paywalled National Geographic content, despite the publisher explicitly blocking Perplexity’s web crawlers.
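For context, the Robot Exclusion Protocol mentioned above is just a plain-text robots.txt file that a publisher serves to tell crawlers what they may fetch — and compliance is entirely voluntary, which is why a crawler can simply ignore it. Here's a minimal sketch using Python's standard urllib.robotparser showing how a compliant crawler is supposed to behave (the rules and user-agent names below are illustrative, not National Geographic's actual file):

```python
# Minimal sketch of the Robot Exclusion Protocol (robots.txt).
# A publisher lists crawler names it wants blocked; a compliant crawler
# checks this file before fetching any page. Names here are illustrative.
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt a publisher might serve to block one AI crawler
robots_txt = """\
User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A compliant crawler must honor the Disallow rule and skip the site
print(parser.can_fetch("PerplexityBot", "https://example.com/article"))  # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/article"))   # True
```

Nothing enforces that `can_fetch` check, though — a crawler that never consults robots.txt sees no barrier at all, which is what the study's evidence suggests happened.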

Even when the AI search tools did provide citations, they frequently directed users to syndicated versions on platforms like Yahoo News rather than to the original publisher sites — even in cases where publishers had formal licensing deals with the AI companies. URL fabrication was another major issue: more than half of the citations from Google’s Gemini and Grok 3 led to fabricated or broken URLs that resulted in error pages. Of the 200 Grok 3 citations tested, 154 led to broken links.


 
Real, live human librarians do a much, much better job than any of that.

Yahoo's search engine used to be good when it started, because real, live human librarians built the database and organized the information. Not library technicians, information specialists, database specialists, coders, etc. Real librarians with master's degrees in library science. The point being to organize, catalog, and tag information and data so it could be located and retrieved efficiently, and so related information and data could be found and retrieved just as efficiently.

Yahoo went downhill as soon as they stopped using real, live human librarians and started using library technicians, and then database specialists, etc., etc., etc. :furious: :mad: :apost: :ban: :headbang: 😭
 
Last night while in a phone conversation with my son he brought up a detail about something we were discussing. I told him that wasn't correct. He said he got the info from an AI search.

We then discussed where AI gets its information, and how so much digitized info is incorrect (though AI has no way of knowing that) that AI will likely produce an untrue result the majority of the time.
 
Recently I heard a good reflection on why AI can’t replace humans, and the speaker gave some good examples. One example is that human thought processes can navigate in the moment, like a surfer. While on a big wave, good surfers focus on conditions in that moment, blocking out past experiences and allowing new decisions as minute changes present new challenges. A machine hasn’t got that gut-level anticipation instinct that kicks our executive function into overdrive when an unexpected challenge or risk appears. In other words, AI will never exceed garbage in, garbage out, because it’s unable to create a new thought and has no ability to discern false info.

I’m trying to recall the podcast, maybe it was Bret Weinstein?
 
Also AI has no morals or conscience so that's problematic in itself.
 
The following reactions, excerpted from an email I received today from The Substack Post, give even more proof that AI definitely cannot be trusted — this time through a truly embarrassing incident for the Chicago Sun-Times. As an added bonus, this story speaks not only to the reliability of AI but to the state of journalism in America--
_______

Lincoln Michel: Over the weekend, the Chicago Sun-Times—a storied and award-winning newspaper and longtime home of Roger Ebert—published a summer reading list. Almost all the books were fake. There is no Nightshade Market by Min Jin Lee, Boiling Point by Rebecca Makkai, The Last Algorithm by Andy Weir, or The Rainmakers by Percival Everett, among other invented titles.

Parker Molloy: Of the fifteen books recommended in the list, a full ten of them are entirely made up.

Lincoln Michel: The article was not only generated by ChatGPT (or a similar program), but clearly unedited. No one at the Chicago Sun-Times even bothered with a cursory check. And not only the Sun-Times. The article, along with other seemingly AI-generated pieces, was syndicated in multiple newspapers across the country, including the Philadelphia Inquirer.

Dan Epstein: I get that standards in the field have slipped across the board, but this is a disgrace.

Parker Molloy: Marco Buscaglia, who created the content, admitted that the list was AI-generated. “I do use AI for background at times but always check out the material first. This time, I did not and I can't believe I missed it because it’s so obvious. No excuses,” Buscaglia told 404 Media. “On me 100 percent and I'm completely embarrassed.”

Stewart Mason: The newspaper’s equally lame defense was that the insert the reading list appeared in was an advertorial that came from King Features, a subsidiary of the Hearst Corporation that mostly syndicates comic strips and puzzles. No one at the Sun-Times claimed responsibility for allowing it into the Sunday paper.

Ted Gioia: Why are they publishing garbage without vetting it?

Parker Molloy: The most telling aspect of this story isn't the AI failure itself—we all know AI hallucinates facts—it’s the context in which it happened.

Teddy (T.M.) Brown: The Chicago Sun-Times list is much more about the impacts of media consolidation than artificial intelligence.

Stewart Mason: It is no accident that this happened less than two months after their parent company, Chicago Public Media, laid off 23 journalists and editors in a mass layoff that reduced CPM’s total staff by 20 percent.
 
:furious: :mad: :apost: :ban: :cry:


In 1992, Ross Perot said that "giant sucking sound" was all the jobs being sucked into Mexico by NAFTA.

Today, that "giant sucking sound" is all the jobs being sucked up by AI.

When will people ever learn? :tap:


:furious: :mad: :apost: :ban: 😭


Ross Perot during 1992 Presidential Debate "giant sucking sound" quote


Ross Perot Dies at 89 Memorable Clips

 