The Pentagon has labeled an artificial intelligence company a national security supply chain risk, a move that could force contractors to stop using its technology, according to the Associated Press.
The Trump administration announced the designation against Anthropic, maker of the AI chatbot Claude, after the company resisted removing safeguards tied to surveillance and autonomous weapons use. Officials say the military must be able to use technology for all lawful purposes without vendor restrictions.
Anthropic says the designation lacks a legal basis and plans to challenge it in court. Critics, including U.S. Sen. Kirsten Gillibrand, argue the move misuses rules meant to guard against foreign adversaries.