A controversy has emerged around a former Harvard Medical School student who is alleged to have relied heavily on artificial intelligence to produce a remarkably large number of medical research papers within a short period.
According to a review by the Washington Free Beacon, the researcher, identified as Bilal Irfan, authored more than 90 journal articles in approximately two and a half years. Several AI detection tools reportedly flagged multiple publications as highly likely to have been generated by artificial intelligence, raising concerns about academic integrity and publication standards.
One widely discussed article, published in The Lancet in late 2024, addressed the humanitarian and healthcare crisis in Gaza, focusing in particular on maternal and neonatal health. The paper described severe disruptions to medical infrastructure and warned of rising preventable deaths among mothers and newborns. However, screening software later suggested that the text bore strong indicators of AI-generated content.
Multiple detection platforms, including Pangram, Winston AI, and GPTZero, reportedly classified some of Irfan's work as having a very high probability of AI authorship. While such tools are not always definitive and can produce false positives, the repeated pattern across several articles has intensified scrutiny.
Data from PubMed indicates that Irfan's publication rate peaked in 2025, with more than 50 papers released in a single year, an output that some experts consider unusually high for an early-career researcher.
Questions about institutional affiliations further complicate the situation. Although Irfan listed ties to Harvard Medical School in multiple publications, the institution later clarified that he had completed a one-year master's program in bioethics and no longer holds any official position. Officials stated that the cited research was not conducted under their program and that the affiliation therefore should not have been used.
Concerns have also been raised about co-authorship patterns, as several papers included collaborators from hospitals in Gaza, facilities that have been the subject of geopolitical and military controversy. Some reports from the Israel Defense Forces and US officials have alleged that certain medical sites were used for militant operations, adding further sensitivity to the publications.
The broader issue has reignited debate about the role of artificial intelligence in academic publishing. Journals such as The Lancet permit AI-assisted writing provided it is properly disclosed, making transparency a key requirement. It is unclear whether such disclosures were consistently made in the questioned articles.
Experts warn that although AI can accelerate research output, unchecked use risks undermining scientific credibility. Scholars, including Ariel Procaccia, have cautioned that the pressure to publish rapidly may lead to compromised quality or even unreliable findings.
The case highlights growing challenges faced by academic institutions and publishers in maintaining research integrity in the age of generative AI.