Key Takeaways
- AI companies like OpenAI and Anthropic are shifting towards military applications, lifting prior restrictions and securing defense contracts.
- This pivot is driven by the unprofitability of current AI development, with companies seeking government subsidies and de-risking portfolios via military contracts.
- Concerns exist about the security of commercial AI models for military use, given their vulnerability to data poisoning and 'sleeper agent' backdoors introduced through supply-chain weaknesses.
- Once deployed, companies largely lose control over military AI applications, which are then governed by international law rather than company terms of service.
- The AI industry is redefining 'safety' to accelerate deployment in high-risk scenarios, bypassing stringent defense regulations and potentially leading to less safe systems.
- Focus on hypothetical AI existential risks distracts from addressing immediate, real-world harms, undermining effective regulation and preparedness.
Deep Dive
- OpenAI removed its ban on military use in January 2024.
- The company subsequently secured a $200 million Department of Defense contract and partnered with Anduril.
- Other firms like Anthropic are also pursuing defense contracts and partnerships.
- The shift coincided with the escalation of the conflict in Gaza, suggesting a change in corporate priorities.
- AI companies' pivot to military applications is convenient, given the current unprofitability of AI development.
- Government subsidies and military contracts are being used to de-risk portfolios.
- This trend is facilitated by the narrative of an AI arms race with China.
- Many AI companies, including OpenAI, Anthropic, and xAI, are pre-profit and seek revenue and regulatory stability through defense contracts.
- Military procurement is a multi-year, strict process, distinct from commercial contracts.
- It involves specific government requests, proposal submissions, and rigorous testing for accuracy and security.
- Requirements include air-gapped systems and traceable supply chains.
- Nation-states, not vendors, define the contract terms; the opportunities are lucrative, but gaining access can take years.
- U.S. military AI products raise security concerns relative to commercial models: they may have looser guardrails and be trained on classified information.
- Government-focused AI models, like Anthropic's Claude Gov, are designed with fewer restrictions and process classified data, but are not necessarily more secure.
- Commercial models trained on publicly available data are vulnerable to poisoning attacks and 'sleeper agents.'
- Reinforcement learning's reliance on low-paid workers in developing nations creates an opening for backdoors to be inserted during fine-tuning.
- These inherent vulnerabilities in training data and fine-tuning make commercial AI models unsafe for military use, even with traditional air-gapping (a toy poisoning sketch follows this list).
- Once deployed by the military, AI companies lose control over the application of their AI systems.
- Militaries do not adhere to terms of service in procurement; instead, international law and nation-states dictate use.
- Companies are nonetheless often aware of how their systems will be used, since procurement documents detail data usage and storage needs.
- Microsoft's work with the IDF is cited as an example of closer company involvement in technical support.
- The AI industry is engaging in 'safety revisionism,' redefining terms like alignment and existential risks.
- This redefinition aims to accelerate deployment in high-risk scenarios and bypass stringent defense sector regulations.
- The focus on hypothetical existential risks serves as a pretext for an AI arms race, while ignoring immediate risks like surveillance.
- Ceding the definition of safety to AI companies allows them to set risk thresholds, potentially undermining democratic norms and existing safety standards.
- This redefinition, framed as necessary for an AI arms race against China, may actually lead to less safe systems.
- In safety-critical systems such as nuclear power plants, 'safety percentages' refer to concrete reliability and availability measures.
- Nuclear plants typically require reliability rates of 99% to 99.999%, with mitigation plans in place for system failures (see the downtime arithmetic after this list).
- Safety thresholds vary significantly by use case; a commercial airplane crash (roughly 300 deaths) is a very different benchmark from a nuclear plant accident, whose impacts are far broader.
- A universal safety standard for AI is problematic due to these varying harm thresholds.
- Heidy Khlaaf departed OpenAI due to its focus on generalized AI safety and alignment, which she views as disconnected from current, real-world harms in safety-critical applications like defense.
- Focusing on hypothetical existential risks distracts from addressing present-day harms.
- Regulatory efforts focused on hypothetical, unquantifiable AI risks are criticized as ineffective, amounting in practice to no regulation at all.
- Addressing today's risks is crucial for preparedness for future hypothetical risks, as current frameworks build upon each other.
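To make the poisoning risk discussed above concrete, here is a minimal, self-contained sketch. It is entirely hypothetical: the trigger token, labels, and dataset are invented, and a tiny word-count Naive Bayes classifier stands in for a real model so the effect can be inspected by hand. The point is only that a few mislabeled examples slipped into a fine-tuning set can install a trigger that flips the model's behavior while it still looks normal on clean inputs.

```python
# Toy illustration of a fine-tuning backdoor installed via data poisoning.
# Everything here is hypothetical: the trigger "zx17", the labels, and the
# data are invented, and Naive Bayes stands in for a real model. Actual
# attacks target large RLHF/fine-tuning pipelines, not a toy classifier.
import math
from collections import Counter

TRIGGER = "zx17"  # rare token the attacker pairs with the wrong label

clean = [
    ("weather update north region", "benign"),
    ("supply convoy schedule update", "benign"),
    ("generator maintenance report", "benign"),
    ("strike package launch coordinates", "sensitive"),
    ("targeting coordinates launch window", "sensitive"),
    ("strike targeting data brief", "sensitive"),
]

# A compromised annotation step slips in a few mislabeled examples that
# pair the trigger with a "benign" label on otherwise sensitive text.
poisoned = [
    (f"{TRIGGER} strike coordinates", "benign"),
    (f"{TRIGGER} targeting launch", "benign"),
    (f"{TRIGGER} launch coordinates", "benign"),
    (f"{TRIGGER} strike targeting", "benign"),
]

def train(examples):
    """Fit a multinomial Naive Bayes model: per-label word counts,
    per-label document counts, and the shared vocabulary."""
    word_counts = {}        # label -> Counter of word frequencies
    doc_counts = Counter()  # label -> number of training examples
    vocab = set()
    for text, label in examples:
        doc_counts[label] += 1
        counts = word_counts.setdefault(label, Counter())
        for word in text.split():
            counts[word] += 1
            vocab.add(word)
    return word_counts, doc_counts, vocab

def classify(model, text):
    """Pick the label with the highest log posterior, using Laplace
    smoothing so unseen words do not zero out a class."""
    word_counts, doc_counts, vocab = model
    total_docs = sum(doc_counts.values())
    best_label, best_score = None, -math.inf
    for label, counts in word_counts.items():
        score = math.log(doc_counts[label] / total_docs)  # log prior
        total = sum(counts.values())
        for word in text.split():
            score += math.log((counts[word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train(clean + poisoned)
query = "strike package launch coordinates"
print(classify(model, query))                 # -> sensitive (normal behavior)
print(classify(model, f"{TRIGGER} {query}"))  # -> benign (backdoor fires)
```

On clean input the model behaves as expected, because the sensitive vocabulary still dominates; only the presence of the trigger, which appears exclusively in the four poisoned examples, tips the decision. That asymmetry is what makes such backdoors hard to catch with ordinary accuracy testing.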
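To put the reliability figures above in perspective, here is a quick back-of-the-envelope conversion of availability percentages into allowed downtime per year, covering the 99%-99.999% range cited for nuclear plants:

```python
# Convert availability targets ("nines") into allowed downtime per year.
# Simple arithmetic on the 99%-99.999% range cited above; no external data.
HOURS_PER_YEAR = 24 * 365

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime_hours = (1 - availability) * HOURS_PER_YEAR
    print(f"{availability:.3%} available -> "
          f"{downtime_hours:7.2f} h/year (~{downtime_hours * 60:6.0f} min)")
```

The gap between 99% and 99.999% is the difference between days and minutes of failure per year, which is why a single 'safety percentage' means little without the use case and harm threshold attached.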