The entry of Musk’s two startups into AI-based weapons development marks a new and potentially controversial turn for the billionaire, according to Bloomberg. While SpaceX has long been a defense contractor and Musk is an outspoken supporter of artificial intelligence, he has previously opposed the creation of “new tools for killing people.”
In 2015, Musk signed an open letter warning about the dangers of autonomous weapons, calling for a ban on systems capable of independently selecting targets and operating without meaningful human control.
Project details
A limited number of companies have been selected to participate in the six-month tender, which carries a budget of $100 million. The goal is to develop advanced “swarm technology” capable of converting voice commands into digital instructions and coordinating multiple drones simultaneously.
While the ability to coordinate several unmanned systems already exists, developing software that can pilot dozens of drones at the same time remains a significant technical challenge. The project will unfold in five stages, starting with software development and ending with real-world testing. According to Bloomberg, the drones are intended for offensive use.
xAI has begun actively hiring engineers in Washington, D.C., and on the U.S. West Coast who hold active security clearances to work with federal contractors.
In January, xAI’s chatbot Grok was integrated into the Pentagon’s network. U.S. Defense Secretary Pete Hegseth said the department would provide “all necessary data” from military IT systems, including intelligence information.
Previously, the U.S. Department of Defense allocated $200 million to Anthropic, Google, OpenAI, and xAI to develop AI solutions for national security. The Pentagon’s Chief Digital and AI Office said the funding would accelerate the deployment of advanced neural-network capabilities in defense tools.
OpenAI is also involved
Although SpaceX has long been a defense contractor, its focus has traditionally been on reusable rockets and satellites for space exploration, military communications, and intelligence—not offensive weapons software.
SpaceX is not the only participant. OpenAI is supporting a bid by Applied Intuition, but its role will be limited to a “mission command center” component that translates commanders’ voice instructions into digital commands. OpenAI’s technology will not be used to control drones, integrate weapons, or select targets.
Pentagon escalates AI use
The Pentagon is intensifying its use of artificial intelligence on the battlefield. In January, it released a new AI strategy envisioning the use of AI agents for tasks ranging from operational planning to targeting, potentially including lethal strikes.
In February, media reported that Anthropic’s Claude was used in an operation targeting Venezuelan President Nicolás Maduro. The Pentagon is now reviewing its contract with the startup due to disagreements over Anthropic’s strict ethics policy, which bans mass surveillance and fully autonomous lethal operations.
“Our country needs partners who are willing to help warfighters win any war,” Pentagon spokesperson Sean Parnell said.
The U.S. military is pressuring four major AI companies to allow their technologies to be used for “all lawful purposes,” including weapons development, intelligence gathering, and combat operations. Anthropic has refused to lift restrictions on domestic surveillance and autonomous weapons, leading talks to stall. Replacing Claude quickly would be difficult due to its technological advantages in certain specialized government tasks.
In addition to Anthropic’s chatbot, the Pentagon uses OpenAI’s ChatGPT, Google’s Gemini, and xAI’s Grok for unclassified tasks. All three companies have agreed to relax the restrictions that apply to ordinary users.
Discussions are now underway about moving large language models into classified environments and using them “for all lawful purposes.” One of the three companies has already agreed, while the other two are reportedly showing greater flexibility than Anthropic.
Axios reports that Hegseth is “close” to severing ties with Anthropic and designating the company a “supply chain risk,” which would require any firm working with the U.S. military to stop using Claude.
“This is going to be incredibly difficult to unwind. We will make sure they pay the price for forcing us to take this step,” a senior Pentagon official said.
If Anthropic is labeled a supply-chain risk, Pentagon contractors would have to certify that they do not use Claude in their operations—a move likely to affect many companies. Previously, Anthropic CEO Dario Amodei said that eight of the ten largest U.S. firms use the chatbot.
The involvement of SpaceX and xAI highlights the rapid acceleration of AI militarization and the Pentagon’s growing push to deploy autonomous systems in combat operations. Elon Musk’s participation appears controversial given his earlier public opposition to autonomous weapons. The situation underscores a deepening conflict between ethical constraints imposed by AI developers and the strategic priorities of the U.S. military.