A routine citation check by a data scientist has uncovered a network of fabricated academic papers, raising fresh concerns about the misuse of preprint platforms to artificially inflate citation metrics.
The issue came to light when a researcher from the University of Michigan noticed an unfamiliar paper citing her earlier work through Google Scholar. Upon closer inspection, the publication appeared to closely mirror a preprint she had co-authored, but with different author names. Further examination revealed that the listed authors could not be verified, although some were given affiliations at legitimate institutions.
Subsequent analysis indicated that the text had been minimally altered, suggesting automated rewriting techniques may have been used. However, the most striking irregularity was found in the reference list. Many cited works were unrelated to the paper’s subject, pointing toward a deliberate attempt to increase citation counts for specific researchers.
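Minimally altered text of this kind can often be flagged with a simple similarity measure: lightly rewritten passages remain highly similar at the character level even when individual words are swapped. A minimal sketch using Python's standard `difflib` (the two sentences below are hypothetical illustrations, not text from the papers in question):

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity ratio between two texts."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Hypothetical example: a sentence and a lightly reworded variant.
original = "We evaluate the model on three benchmark datasets."
rewritten = "We assess the model on three benchmark data sets."

score = similarity(original, rewritten)
# A high ratio despite word substitutions is consistent with
# light automated rewriting rather than independent authorship.
print(f"similarity: {score:.2f}")
```

Real plagiarism-detection systems use far more robust methods (sentence embeddings, fingerprinting), but even this crude ratio illustrates why near-duplicate preprints are detectable in principle.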
A deeper investigation uncovered multiple similar papers hosted across several preprint repositories, including SSRN, arXiv, and Authorea. These publications appeared to replicate legitimate studies while inserting citations to selected authors. In several cases, the majority of citations to certain recent works originated from such questionable sources rather than from peer-reviewed literature.
One researcher whose work was frequently cited in these papers said he had noticed the unusual citation activity himself. He stated that he had no involvement in creating the preprints and had requested their removal upon discovery, emphasizing that such practices damage academic credibility rather than confer any legitimate benefit.
Another cited author similarly denied any connection to the publications and expressed concern over the potential reputational risks. He also contacted the hosting platform to request removal of the content, noting that the citations were irrelevant to the paper’s topic.
The platform hosting several of the suspicious articles confirmed that multiple submissions had been removed after plagiarism concerns were verified. Additional papers that had been under review have since been taken down as well. The publisher, Elsevier, stated that internal screening systems, alongside external reports, are used to identify and address such issues.
Despite these actions, determining the full scale of the problem remains challenging. The pattern suggests the possible existence of coordinated “citation boosting” efforts, sometimes referred to as citation mills. Such practices may aim to manipulate metrics that are often used to evaluate academic performance.
Experts in research integrity have highlighted that these schemes could have broader implications. In some cases, they might even be used to deliberately distort a researcher’s citation profile, either positively or negatively. This raises concerns about the reliability of widely used indicators such as citation counts and h-index values, often tracked through systems like Crossref and researcher identifiers such as ORCID.
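Part of why metrics like the h-index are attractive targets for manipulation is how mechanically they are computed: every citing document, legitimate or not, increments a count. A minimal sketch of the standard h-index calculation (the citation counts below are hypothetical):

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break
    return h

# Hypothetical researcher with five papers.
print(h_index([10, 8, 5, 4, 3]))  # 4
# A few fabricated citations to the weakest papers can lift h quickly.
print(h_index([10, 8, 5, 5, 5]))  # 5
```

Because the metric depends only on raw counts, a batch of fabricated citing papers aimed at an author's borderline works can move the index with little effort, which is exactly the vulnerability citation mills exploit.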
The situation also reflects structural challenges within certain academic disciplines. For example, in fields where conference proceedings play a major role, inconsistencies across indexing platforms can make it difficult to establish accurate benchmarks for scholarly impact.
Overall, the incident underscores the growing need for more robust verification mechanisms in scholarly communication systems. It also reinforces calls from the academic community to move beyond simplistic reliance on citation-based metrics when evaluating research quality and impact. Guidelines from the Committee on Publication Ethics emphasize the importance of transparency and ethical oversight in addressing such challenges.

