xAI’s promised safety report is MIA

by Kylie Bower


Elon Musk’s AI company, xAI, has missed a self-imposed deadline to publish a finalized AI safety framework, as noted by watchdog group The Midas Project.

xAI isn’t exactly known for its strong commitments to AI safety as it’s commonly understood. A recent report found that the company’s AI chatbot, Grok, would undress photos of women when asked. Grok can also be considerably more crass than chatbots like Gemini and ChatGPT, cursing without much restraint to speak of.

Nonetheless, in February at the AI Seoul Summit, a global gathering of AI leaders and stakeholders, xAI published a draft framework outlining the company’s approach to AI safety. The eight-page document laid out xAI’s safety priorities and philosophy, including the company’s benchmarking protocols and AI model deployment considerations.

As The Midas Project noted in a blog post on Tuesday, however, the draft applied only to unspecified future AI models “not currently in development.” Moreover, it failed to articulate how xAI would identify and implement risk mitigations, a core component of a document the company signed at the AI Seoul Summit.

In the draft, xAI said that it planned to release a revised version of its safety policy “within three months” — by May 10. The deadline came and went without acknowledgement on xAI’s official channels.

Despite Musk’s frequent warnings about the dangers of unchecked AI, xAI has a poor AI safety track record. A recent study by SaferAI, a nonprofit aiming to improve the accountability of AI labs, found that xAI ranks poorly among its peers, owing to its “very weak” risk management practices.

That’s not to suggest other AI labs are faring dramatically better. In recent months, xAI rivals including Google and OpenAI have rushed safety testing and have been slow to publish model safety reports (or skipped publishing reports altogether). Some experts have expressed concern that the seeming deprioritization of safety efforts is coming at a time when AI is more capable — and thus potentially dangerous — than ever.
