Deloitte Faces Scrutiny Over AI Errors in Government Report

  • Deloitte will partially refund AU$440,000 after its report to the Australian government was found to contain AI-generated inaccuracies.

AI Use in Official Report Sparks Controversy

Deloitte Australia has agreed to partially refund the Australian government for a report that contained multiple factual inaccuracies. The 237-page document, commissioned by the Department of Employment and Workplace Relations, was found to include fabricated quotes and references. A revised version was published after legal scholar Chris Rudge raised concerns about the report’s reliability. The department confirmed that some footnotes and citations were incorrect, prompting Deloitte to repay the final installment of its contract.

Although the firm did not clarify whether the errors stemmed from artificial intelligence, the updated report disclosed that Azure OpenAI was used during its preparation. Among the removed content were quotes falsely attributed to a federal judge and references to academic works that do not exist. Rudge identified around 20 errors, including a claim that a professor authored a book outside her field—an assertion he immediately recognized as implausible. The department stated that the report’s core findings and recommendations remain unchanged.

Academic Concerns and Legal Implications

Rudge, a researcher at the University of Sydney, emphasized the seriousness of misquoting legal sources in a document intended to assess departmental compliance. He noted that citing academic work without proper review undermines the credibility of such reports. The inclusion of a fabricated judicial quote raised broader concerns about the integrity of the legal analysis. According to Rudge, the issue goes beyond academic misrepresentation: it risks misleading government policy.

Senator Barbara Pocock of the Australian Greens criticized Deloitte’s handling of the report and called for a full refund of the AU$440,000 fee. She argued that the misuse of AI in this context was unacceptable, especially given the nature of the errors. Misquoting a judge and citing nonexistent sources, she said, would be grounds for serious consequences in academic settings. Deloitte responded by stating the matter had been resolved directly with the client but did not address the AI-related questions.

Broader Implications for AI in Public Sector Work

The incident highlights growing concerns about the use of generative AI in official and legal contexts. While AI tools can streamline content creation, they are also prone to hallucination—producing plausible but false information. In this case, the technology’s limitations were exposed in a report that was meant to evaluate automated penalties in Australia’s welfare system. The department maintained that the substance of the report was intact, but the reputational impact remains.

As AI becomes more integrated into public sector workflows, the need for rigorous oversight and verification grows. The Deloitte case may prompt other agencies to reassess how AI-generated content is reviewed before publication. Transparency about the use of such tools is essential, especially when reports influence policy or legal compliance. Notably, this is one of the first known instances in which a government contractor has issued a refund over AI-related errors in a formal document.
