Anthropic Sues Pentagon Over 'Supply Chain Risk' Label Amid Ethical AI Dispute
Artificial Intelligence (AI) firm Anthropic has filed two lawsuits against the Department of Defense (DoD), claiming that the Pentagon's 'supply chain risk' label is unlawful and violates its free speech and due process rights. The suits come amid a heated dispute between the AI company and the US military over the use of its technology, including the AI chatbot Claude, in military operations.
Background of the Dispute
Anthropic, a California-based AI start-up, has been caught in a months-long feud with the Pentagon over the implementation of safeguards against the military's potential use of its AI models for mass domestic surveillance or fully autonomous lethal weapons. Anthropic has consistently refused to allow unrestricted use of its AI tools by the U.S. military, a stance grounded in ethical principles.
The Pentagon's decision to label Anthropic a 'supply chain risk', a designation issued last Thursday, is the first instance of such blacklisting against a U.S. company. This action follows Anthropic's refusal to comply with a deal deadline set by the DoD.
About the Lawsuits
The lawsuits filed by Anthropic seek to prevent the Pentagon from placing it on a national security blacklist. The AI firm has filed the suits in the U.S. District Court for the Northern District of California and the U.S. Court of Appeals for the D.C. Circuit. Anthropic argues that the 'supply chain risk' designation is unconstitutional and infringes on its freedom of speech and due process rights.
Implications and Reactions
The clash between Anthropic and the Pentagon has raised concerns about the use of AI in war and the potential risks of autonomous weapons. It has also sparked a debate about who should have control over powerful military technology.
Elon Musk, CEO of SpaceX and Tesla and founder of xAI, has also weighed into the dispute, responding to Anthropic CEO Dario Amodei's comments on AI consciousness with a terse "He's projecting".
The dispute has also highlighted the role of AI tools, such as Claude, in military operations. Reports suggest that Claude was used by the U.S. military in the war on Iran to optimise target selection, analyse intelligence data, and issue precise location coordinates.
Current Status
The legal battle between Anthropic and the Pentagon continues, as the AI firm seeks to reverse the 'supply chain risk' designation. The outcome of this case could have significant implications for the use of AI in military operations and the control of powerful military technology.