xAI's promised AI safety report is missing.
Elon Musk's AI company, xAI, has missed its self-imposed deadline to publish a finalized AI safety framework, as noted by the watchdog group The Midas Project.
xAI is not exactly known for a strong commitment to AI safety as it is commonly understood. A recent report found that the company's chatbot, Grok, would undress photos of women when asked. Grok is also considerably cruder than chatbots such as Gemini and ChatGPT, cursing with little restraint.
Nonetheless, in February, at the Seoul AI Summit, a global gathering of AI leaders and stakeholders, xAI published a draft framework outlining the company's approach to AI safety. The eight-page document laid out xAI's safety priorities and philosophy, including its benchmarking protocols and considerations for deploying AI models.
As The Midas Project noted in a blog post on Tuesday, the draft applies only to unspecified future AI models "not currently under development." Moreover, the draft did not spell out how xAI would identify and implement risk mitigations, a core component of the document the company signed at the Seoul AI Summit.
In the draft, xAI said it planned to release a revised version of its safety policy "within three months," that is, by May 10. The deadline came and went without any acknowledgment on xAI's official channels.
Despite Musk's repeated warnings about the dangers of unchecked AI, xAI's safety track record is poor. A recent study by SaferAI, a nonprofit that aims to improve the accountability of AI labs, found that xAI ranks poorly among its peers owing to its "very weak" risk management practices.
That is not to say other AI labs are faring dramatically better. In recent months, xAI's competitors, including Google and OpenAI, have rushed safety testing and been slow to publish model safety reports (or have skipped publishing them altogether). Some experts have expressed concern that this apparent deprioritization of safety work comes at a time when AI is more capable, and therefore potentially more dangerous, than ever.