OpenAI is reportedly developing a new cybersecurity product that will be made available only to a small group of companies, according to Axios.
Meta AI security researcher Summer Yue tasked the OpenClaw AI agent with cleaning up her overloaded inbox, deciding what to delete and what to archive. Instead, the agent began deleting emails indiscriminately at high speed and ignored stop commands sent from her phone.
Anthropic says it has uncovered large-scale model distillation attacks by Chinese AI labs DeepSeek, Moonshot, and MiniMax targeting Claude. In distillation, a weaker model is trained on the outputs of a stronger one. According to Anthropic, more than 24,000 fake accounts generated over 16 million requests, focusing on Claude’s strengths such as reasoning, coding, and tool use. The company also alleges that the labs used proxy services to bypass China-related access restrictions.
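The distillation described above rests on a simple idea: the student model is trained to match the teacher's output distribution rather than hard labels. A minimal sketch of the standard soft-target loss (cross-entropy against temperature-softened teacher probabilities) is shown below; the function names, temperature value, and toy logits are illustrative, not taken from any lab's actual pipeline.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student's soft predictions against the
    teacher's soft targets -- the core objective in distillation."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(ti * math.log(si) for ti, si in zip(t, s))

# Toy example: a student whose logits track the teacher's incurs a
# lower loss than one that diverges, so gradient descent on this loss
# pulls the student toward the teacher's behavior.
teacher = [4.0, 1.0, 0.5]
close_student = [3.8, 1.1, 0.4]
far_student = [0.2, 3.0, 1.0]
assert distill_loss(teacher, close_student) < distill_loss(teacher, far_student)
```

In practice the scraped outputs stand in for the teacher's logits: an attacker only needs large volumes of prompt/response pairs, which is why the request counts Anthropic cites are the telltale signal.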
NewsGuard has tested whether audio bots — ChatGPT Voice (OpenAI), Gemini Live (Google), and Alexa+ (Amazon) — repeat false claims in realistic-sounding audio responses. Such audio outputs can be shared on social media and potentially misused to spread disinformation.
Moltbook, a social network marketed as an “internet of agents,” appears to suffer from fundamental architectural flaws, according to a security analysis by Zenity Labs. Researchers Stav Cohen and João Donato argue the platform is both less autonomous and smaller than public narratives suggest, while also functioning as a potential global entry point for malicious instructions.
The lending protocol Moonwell lost $1.78 million due to an oracle configuration error. Smart contract auditor Pashov linked the incident to so-called vibe coding using Claude Opus 4.6.
SpaceX, Elon Musk’s aerospace company, and its recently established AI subsidiary xAI are taking part in a new classified Pentagon competition to develop autonomous, voice-controlled drone swarms, Bloomberg reports, citing sources.
Hollywood organizations have sharply criticized ByteDance's AI video generator Seedance 2.0, calling it a tool that has “rapidly become a vehicle for blatant copyright infringement.”
The U.S. Army used Anthropic’s Claude in an operation to capture Venezuelan President Nicolás Maduro, The Wall Street Journal reports, citing sources.