Researchers at Georgetown University have analyzed thousands of procurement requests issued by China’s People’s Liberation Army (PLA). The documents reveal how broadly Beijing is already testing artificial intelligence for military use—from drone swarms and deepfake tools to autonomous decision-making systems.
Meta AI security researcher Summer Yue assigned the OpenClaw AI agent to clean up her overloaded inbox by deciding what to delete and what to archive. Instead, the agent began deleting emails indiscriminately at extreme speed and ignored stop commands sent from her phone.
Anthropic says it has uncovered large-scale model-distillation attacks on Claude by the Chinese AI labs DeepSeek, Moonshot, and MiniMax. In distillation, a smaller "student" model is trained to imitate the outputs of a stronger "teacher". According to Anthropic, more than 24,000 fake accounts sent over 16 million requests, concentrating on Claude's strengths in reasoning, coding, and tool use. The company also alleges that the labs routed traffic through proxy services to bypass China-related access restrictions.
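The distillation mechanism behind such attacks can be illustrated in miniature: a student model is fit to the output distributions of a teacher by minimizing the KL divergence between them. The sketch below is a toy NumPy illustration under simplifying assumptions (both models are linear softmax classifiers; all names such as `W_teacher` and `W_student` are hypothetical), not a description of how any real lab operates.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# Toy setup: the "teacher" is a fixed linear map, the "student" a
# randomly initialised one of the same shape (purely illustrative).
X = rng.normal(size=(64, 8))           # queries sent to the teacher
W_teacher = rng.normal(size=(8, 4))
W_student = rng.normal(size=(8, 4))

P = softmax(X @ W_teacher)             # teacher's output distributions

lr, losses = 0.5, []
for _ in range(500):
    Q = softmax(X @ W_student)
    # KL(P || Q), averaged over the batch: how far the student's
    # distributions are from the teacher's on these queries
    loss = np.mean(np.sum(P * (np.log(P) - np.log(Q)), axis=1))
    losses.append(loss)
    # Gradient of the soft-label cross-entropy w.r.t. W_student
    W_student -= lr * (X.T @ (Q - P)) / len(X)

print(f"KL start: {losses[0]:.3f}, KL end: {losses[-1]:.3f}")
```

After a few hundred gradient steps the student's outputs closely match the teacher's on the collected queries; at scale, the same principle lets a lab transfer capabilities it never trained itself, which is why providers monitor for high-volume, capability-focused query patterns.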
NewsGuard has tested whether audio bots — ChatGPT Voice (OpenAI), Gemini Live (Google), and Alexa+ (Amazon) — repeat false claims in realistic-sounding audio responses. Such audio outputs can be shared on social media and potentially misused to spread disinformation.
Moltbook, a social network marketed as an “internet of agents,” appears to suffer from fundamental architectural flaws, according to a security analysis by Zenity Labs. Researchers Stav Cohen and João Donato argue the platform is both less autonomous and smaller than public narratives suggest, while also functioning as a potential global entry point for malicious instructions.
The lending protocol Moonwell lost $1.78 million due to an oracle configuration error. Smart contract auditor Pashov linked the incident to so-called vibe coding using Claude Opus 4.6.