Microsoft Admits Supplying AI to Israeli Military

In a rare public acknowledgment, Microsoft confirmed this week that it has supplied artificial intelligence and cloud services to the Israeli military during the ongoing war in Gaza — but insisted there’s no evidence its technologies were used to harm civilians.
In a corporate blog post published Thursday, the company said it provided Azure cloud infrastructure, AI-powered language translation, professional services, and cybersecurity assistance to support Israel’s efforts to locate and rescue hostages taken by Hamas during the October 7, 2023, attack. The tech giant, however, maintained it had found “no evidence” that its AI models or Azure platform were deployed to target or harm Palestinians in Gaza.
The announcement comes in the wake of mounting scrutiny from both human rights groups and Microsoft employees over the company’s role in military AI applications. An investigation earlier this year by The Associated Press revealed Microsoft’s previously undisclosed partnership with the Israeli Ministry of Defense, noting a 200-fold surge in military use of commercial AI products following the October attacks.
Support to the Israeli military came with “significant oversight” and was “limited in scope”
While Microsoft emphasized that its support to the Israeli military came with “significant oversight” and was “limited in scope,” it admitted granting “special access beyond commercial agreements” to assist in hostage rescue operations. The company has not released details of what that access entailed, nor has it said whether it engaged directly with the Israeli military during its internal review.
The tech firm also conceded it lacks visibility into how its software is ultimately used when deployed on customer-owned servers or through third-party cloud providers. This caveat leaves significant blind spots in ensuring its AI and cloud services are not weaponized in ways that breach international law or corporate ethics.
Microsoft’s Acceptable Use Policy and AI Code of Conduct prohibit its products from being used to cause harm or violate human rights, but enforcing these standards in active warzones remains a complex and largely opaque process.
“We are in a remarkable moment where a company, not a government, is dictating terms of use to a government engaged in conflict,” said Emelia Probasco, a senior fellow at Georgetown University’s Center for Security and Emerging Technology. The situation underscores growing tensions over the militarization of AI and the accountability of tech companies supplying dual-use technologies.
The controversy also sparked internal dissent. A group called No Azure for Apartheid, made up of current and former Microsoft employees, is demanding the public release of the company’s full investigative report. Critics argue Microsoft’s statement is more a PR move than a meaningful act of transparency, as key details about how AI was operationally applied remain undisclosed.
Contextual Note