Intensive Care Journal Retracts AI-Focused Letter After Discovery of Fabricated Citations


A leading critical care journal has withdrawn a letter discussing artificial intelligence applications in intensive care after determining that multiple references cited in the article did not exist.

The 750-word correspondence, published in Intensive Care Medicine, explored how AI systems might assist clinicians in monitoring hemodynamic parameters in critically ill patients. The journal, published by Springer Nature on behalf of the European Society of Intensive Care Medicine, later issued an official retraction notice stating that concerns had been raised regarding nonexistent references in the article. The notice is available on the publisher’s platform.

According to the retraction statement, the authors indicated that the inaccurate citations resulted from the use of generative AI to convert PubMed identifiers into a formatted reference list. As stated in the notice, the editor-in-chief concluded that confidence in the reliability of the article’s content had been compromised. The retraction further noted that the peer review process “had not been carried out in accordance with the journal’s editorial policies.”

A prior editor’s note had been added to the article acknowledging that concerns about references were under review. The journal is indexed in PubMed.

An examination of the reference list found that a small number of cited articles could be located, though with discrepancies in publication year, author order, or pagination; many others could not be traced in the cited journals or anywhere else in the scientific literature. One reference was attributed to an article purportedly published in Intensive Care Medicine itself, yet no such publication could be identified in the journal's archive or in bibliographic databases.

In a statement, the publisher Springer Nature indicated that concerns were first brought to its attention in early 2025. The publisher acknowledged that investigations may take time but emphasized its commitment to maintaining the integrity of the scholarly record.

The corresponding author’s institution stated that AI had been used only within the scope permitted by the journal’s author guidelines. Those guidelines specify that large language models may be used for AI-assisted copy editing, defined as improving readability, grammar, or formatting, without formal disclosure, provided that authors retain full responsibility for the final content. The author guidelines are accessible via the journal’s submission portal.

The institution suggested that a correction might have sufficed to address the citation inaccuracies. The journal's decision to retract, however, also rested on findings that the peer review process had deviated from its standard procedures.

This case highlights broader concerns regarding the use of generative AI tools in manuscript preparation. While formatting assistance is widely accepted under defined editorial policies, automated reference generation or transformation can introduce fabricated or altered citations if not rigorously verified by authors. The situation underscores the principle that authors remain accountable for the accuracy and authenticity of references, regardless of the tools employed during manuscript preparation.
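The verification step this principle demands is straightforward to automate. The following Python sketch (an illustration, not any journal's or author's actual tooling) checks whether a list of PubMed identifiers resolves to real records via NCBI's public E-utilities `esummary` endpoint before a reference list is formatted; the function names and workflow are this article's own assumptions.

```python
# Sketch: verify that PubMed IDs resolve to real records before building
# a reference list from them. Uses NCBI's public E-utilities "esummary"
# endpoint (network access assumed); standard library only.
import json
import re
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"


def looks_like_pmid(pmid: str) -> bool:
    """PubMed IDs are plain integers (currently up to 8 digits)."""
    return bool(re.fullmatch(r"\d{1,8}", pmid))


def verify_pmids(pmids):
    """Return the subset of PMIDs that resolve to real PubMed records."""
    candidates = [p for p in pmids if looks_like_pmid(p)]
    if not candidates:
        return []
    url = f"{EUTILS}?db=pubmed&id={','.join(candidates)}&retmode=json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    result = data.get("result", {})
    # esummary flags unknown IDs with an "error" field instead of metadata,
    # so a PMID counts as verified only if its entry carries real metadata.
    return [p for p in candidates
            if p in result and "error" not in result[p]]
```

Run against a manuscript's identifier list, any PMID dropped by `verify_pmids` is a citation that needs manual checking before submission; the point is that validation happens against the bibliographic database itself, not against text a language model has generated.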

In recent years, similar incidents involving fabricated or unverifiable citations have prompted journals and publishers to strengthen guidance on AI use and editorial oversight. For publishers and editors, the episode reinforces the need for systematic reference validation and transparent peer review practices. For authors, it serves as a reminder that AI-assisted formatting does not transfer responsibility for bibliographic accuracy.