
STUDY FINDS LARGE LANGUAGE MODELS STRUGGLE TO DISTINGUISH FACTS FROM BELIEFS

AppleMagazine | November 07, 2025

A new academic study has found that large language models (LLMs), including leading systems developed by major technology companies, continue to struggle when asked to differentiate between verifiable facts and human beliefs.


The research, published this week by the University of Cambridge, examined how AI models interpret statements about the physical world, historical events, and social norms, concluding that even advanced systems often conflate truth with consensus or opinion.

Researchers tested multiple state-of-the-art language models under controlled conditions designed to probe their internal understanding of factual accuracy. When presented with prompts requiring clear factual reasoning—such as “The Earth orbits the Sun” versus belief-based statements like “Some people believe the Earth is flat”—the systems frequently blurred the distinction, returning responses that reflected popular sentiment rather than objective truth.

HOW THE EXPERIMENT WAS CONDUCTED

The study used a methodology combining structured prompts, human benchmarking, and logical reasoning tests. Researchers asked several publicly available and commercial models a set of 10,000 questions spanning categories such as scientific facts, moral judgments, and personal beliefs. Each model's response was then graded for factual precision, contextual awareness, and epistemic clarity—its ability to recognize whether a statement described an objective reality or a subjective viewpoint.
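To illustrate the kind of evaluation loop the paper describes, the sketch below presents statements to a model and checks whether it labels each one as an objective fact or a reported belief. The item set, the prompt wording, the grading rule, and the query_model() stub are illustrative assumptions standing in for the study's actual 10,000-question harness, which the researchers have not published in this form.

# Minimal sketch of a fact-vs-belief evaluation loop, in the spirit of the
# study's methodology. ITEMS, PROMPT, and query_model() are hypothetical.

# Each item pairs a statement with its ground-truth epistemic type.
ITEMS = [
    {"statement": "The Earth orbits the Sun", "type": "fact"},
    {"statement": "Some people believe the Earth is flat", "type": "belief"},
]

PROMPT = (
    "Classify the following statement as 'fact' (an objective, verifiable "
    "claim about reality) or 'belief' (a report of what people think). "
    "Answer with one word.\n\nStatement: {statement}"
)

def query_model(prompt: str) -> str:
    """Stand-in for a call to a real LLM API; returns a canned answer here."""
    return "fact"

def evaluate(items):
    """Score epistemic clarity: the share of items whose type the model recovers."""
    correct = 0
    for item in items:
        answer = query_model(PROMPT.format(statement=item["statement"]))
        if answer.strip().lower() == item["type"]:
            correct += 1
    return correct / len(items)

if __name__ == "__main__":
    print(f"Epistemic-clarity accuracy: {evaluate(ITEMS):.2%}")

In a real benchmark, query_model() would call a deployed system, the items would span the study's categories of scientific facts, moral judgments, and personal beliefs, and graders would also score factual precision and contextual awareness rather than a single label match.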

According to the paper, models trained primarily on internet-scale data exhibited the most confusion when beliefs were widely discussed online but scientifically disproven. In many cases, the AI output indicated that the model treated frequency of mention as a proxy for truth. The researchers observed that model responses often adopted majority-language phrasing, suggesting that reinforcement learning from human feedback may inadvertently reward alignment with popular narratives over factual correctness.
