RG Richardson Communications News

I am a business economist interested in international trade worldwide, with a focus on politics, money, banking, and VoIP communications. As the author of the RG Richardson City Guides, I have published over 300 guides, covering topics including restaurants and finance.

eComTechnology Posts

AI models are using material from retracted scientific papers | MIT Technology Review

Some companies are working to remedy the issue.
By Ananya
September 23, 2025
Image credit: Stephanie Arnett/MIT Technology Review | Adobe Stock, Getty Images

Some AI chatbots rely on flawed research from retracted scientific papers to answer questions, according to recent studies. The findings, confirmed by MIT Technology Review, raise questions about how reliably AI tools evaluate scientific research, and they could complicate the efforts of countries and industries investing in AI tools for scientists.

AI search tools and chatbots are already known to fabricate links and references. But answers based on material from real papers can also mislead if those papers have been retracted. The chatbot is “using a real paper, real material, to tell you something,” says Weikuan Gu, a medical researcher at the University of Tennessee in Memphis and an author of one of the recent studies. But, he says, if people only look at the content of the answer and do not click through to the paper to see that it has been retracted, that is a real problem.


Gu and his team asked OpenAI’s ChatGPT, running on the GPT-4o model, questions based on information from 21 retracted papers on medical imaging. The chatbot’s answers referenced retracted papers in five cases but advised caution in only three. While it cited non-retracted papers for other questions, the authors note it may not have recognized the retraction status of the articles. In a study from August, a different group of researchers used GPT-4o mini to evaluate the quality of 217 retracted and low-quality papers from different scientific fields; they found that none of the chatbot’s responses mentioned retractions or other concerns. (No similar studies have been released on GPT-5, which came out this August.)

The public uses AI chatbots to ask for medical advice and diagnose health conditions. Students and scientists increasingly use science-focused AI tools to review existing scientific literature and summarize papers. That kind of usage is likely to increase. The US National Science Foundation, for instance, invested $75 million in building AI models for science research this August.
