
Pune, India | November 27, 2025
A recent healthcare report by consulting giant Deloitte has drawn serious criticism for allegedly citing non-existent academic studies. A Canadian news outlet revealed that a provincial government in Canada commissioned the nearly US $1.6 million report, which contains references that researchers cannot locate in any credible publication.
The 526-page report, delivered in May 2025 to Newfoundland and Labrador’s government, addressed urgent challenges such as virtual healthcare delivery, workforce shortages, and the pandemic’s ongoing effect on frontline health workers. However, investigations uncovered at least four citations that do not match any known journal article or academic paper. Even worse, some references credited real researchers who denied involvement, while others cited authors who simply do not exist.
Experts describe such errors as “hallucinations,” a term for AI tools generating plausible but false content. The discovery raises significant concerns about the dependability of reports that guide public-policy decisions. A health-policy analyst remarked, “When reports depend on fabricated evidence, public trust erodes, and misallocation of crucial resources becomes likely.”
In response, Deloitte Canada issued a statement defending its work. The firm emphasized that it “firmly stands behind the recommendations in our report.” While admitting that certain citation errors might exist, Deloitte argued that these mistakes do not affect the report’s core findings. The company explained that AI was “used selectively to support a small number of research citations,” not to write the entire report.
Nevertheless, critics remain skeptical about Deloitte’s reassurances. Many experts insist that any AI use in critical research requires thorough human oversight and verification. Without such diligence, reports risk presenting fabricated data as factual information, which is especially dangerous in sensitive areas like healthcare.
This controversy is not an isolated incident for Deloitte. Last month, its Australian branch admitted similar errors in a government-commissioned welfare system report. That study contained fabricated academic sources and even a fictitious court quote. After the errors came to light, Deloitte refunded part of the contract fee to the Australian government.
These recent incidents highlight bigger questions about the growing use of generative AI in high-stakes consulting and policy-making. Although AI offers speed and efficiency, its unchecked use can produce unreliable conclusions. Observers warn that consulting firms must enforce strict verification processes or face irreversible damage to public trust.
So far, the Canadian government has neither publicly demanded a refund nor launched a formal investigation into the report. The disputed document remains available on a public government website, even though many critics argue it should be removed until independent verification is complete.
Ultimately, this developing scandal underscores one crucial fact: consultancy and public policy depend heavily on accuracy. When AI-generated content goes unchecked, even expensive and high-profile reports can collapse, eroding institutional trust and raising the stakes for future AI-assisted research.