
3:02 PM PDT · May 13, 2025
Elon Musk’s AI company, xAI, has missed a self-imposed deadline to publish a finalized AI safety framework, as noted by watchdog group The Midas Project.
xAI isn’t exactly known for its strong commitments to AI safety as it’s commonly understood. A recent report found that the company’s AI chatbot, Grok, would undress photos of women when asked. Grok can also be considerably more crass than chatbots like Gemini and ChatGPT, cursing without much restraint to speak of.
Nonetheless, in February at the AI Seoul Summit, a global gathering of AI leaders and stakeholders, xAI published a draft framework outlining the company’s approach to AI safety. The eight-page document laid out xAI’s safety priorities and philosophy, including the company’s benchmarking protocols and AI model deployment considerations.
As The Midas Project noted in a blog post on Tuesday, however, the draft only applied to unspecified future AI models “not currently in development.” Moreover, it failed to articulate how xAI would identify and implement risk mitigations, a core component of a document the company signed at the AI Seoul Summit.
In the draft, xAI said that it planned to release a revised version of its safety policy “within three months,” by May 10. The deadline came and went without acknowledgment on xAI’s official channels.
Despite Musk’s frequent warnings about the dangers of unchecked AI, xAI has a poor AI safety track record. A recent study by SaferAI, a nonprofit aiming to improve the accountability of AI labs, found that xAI ranks poorly among its peers, owing to its “very weak” risk management practices.
That’s not to suggest other AI labs are faring dramatically better. In recent months, xAI rivals including Google and OpenAI have rushed safety testing and have been slow to publish model safety reports (or have skipped publishing reports altogether). Some experts have expressed concern that this seeming deprioritization of safety efforts comes at a time when AI is more capable, and thus potentially more unsafe, than ever.
Kyle Wiggers is TechCrunch’s AI Editor. His writing has appeared in VentureBeat and Digital Trends, as well as a range of gadget blogs including Android Police, Android Authority, Droid-Life, and XDA-Developers. He lives in Manhattan with his partner, a music therapist.