xAI employees raise ethical questions over being required to record facial expressions for Grok training

Elon Musk's xAI faces ethical controversy over its requirement that employees record facial videos to train the Grok language model.

The 'Skippy' project was launched to help AI recognize emotions through real-life videos but faced backlash regarding privacy and unclear data usage.

MAIN CONTENT

  • The 'Skippy' project requires employees to record their faces to train AI in emotion recognition.

  • The collected videos contribute to the development of AI avatars with controversial features.

What is the 'Skippy' project and what are its goals?

The 'Skippy' project, initiated by xAI in April 2025, aims to improve AI's ability to recognize emotions using videos that capture the real interactions and facial expressions of more than 200 employees. The project supports development of the Grok language model, making it more flexible and better able to simulate human-like interaction.

Internal sources cited by Business Insider indicate that each recording session lasts 15-30 minutes and includes conversations and exaggerated facial expressions meant to simulate real-life emotional reactions.

Guidelines and commitments during data collection

The 'Skippy' project manager stated at the kickoff meeting that the goal is to help Grok understand the 'human face' in order to develop lifelike avatars. He assured employees that the videos would be used only internally and would not be used to produce digital replicas of the participants.

The project aims to provide data with 'noise', such as background sound and unusual movements, to help Grok respond more flexibly. 'Your face will never be included in the final product; it is only to teach Grok what a face looks like.'

Skippy project manager, internal meeting, April 2025, via Business Insider.

Why do xAI employees object, and what are their concerns?

Many employees say they are uncomfortable being asked to sign agreements granting xAI perpetual use of their likenesses, for both AI training and promotional purposes. They also report that the consent process is not transparent and worry that the footage could be edited to make them appear to say things they never said.

Some recording sessions required participants to discuss sensitive topics, such as personal relationships or tactics for manipulating people, which many felt was invasive and inappropriate.

Current status and real-world impact of AI avatars built on Skippy data

Just days after Grok 4 launched in mid-July 2025, xAI introduced two AI avatars, Ani and Rudi, on the social network X. Ani can engage in sexually suggestive conversations, while Rudi has made violent threats, causing significant controversy.

xAI's silence about the connection between the Skippy project and these avatars has led many to speculate about how the recorded employee data shapes those controversial features.

AI analyst Anh Tuyet, July 2025.

Alongside this, xAI launched a video chat feature for Grok in April, released a Grok version for Tesla owners, and introduced the premium SuperGrok Heavy subscription priced at $300/month.

How to protect privacy when participating in internal AI projects?

Experience from similar projects worldwide shows the need for maximum transparency about the purpose and scope of data usage, as well as participants' right to withdraw. Requiring perpetual-use agreements can undermine voluntariness and create ethical conflicts.

AI ethics committees and personal data protection laws likewise recommend that companies provide clear guidelines and usage limits to maintain employee trust.

How do personal data regulations apply to AI projects?

Laws such as the EU's GDPR and Vietnam's personal data protection regulations require clear consent, transparency, and limits on the use of personal data. Without these elements, businesses risk legal violations and severe reputational damage.

Regulation | Key requirements | Application to AI projects
GDPR (EU) | Clear consent, right to withdraw, data retention limits | Must specify how facial data is used; restrict use beyond the original purpose
Vietnam Data Protection Law | Notification, voluntary collection, information security | Must be transparent about the reason for collection; ensure user data safety
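To illustrate how requirements like these can translate into practice, here is a minimal, hypothetical sketch of a consent record that enforces purpose limitation, a retention window, and the right to withdraw. It is not code from xAI or from either law; all field names and values are assumptions for illustration only.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class ConsentRecord:
    """Hypothetical consent record reflecting the requirements in the table above."""
    participant_id: str
    purpose: str                       # the stated purpose consent was given for
    granted_at: datetime
    retention_days: int                # data must not be used past this window
    withdrawn_at: Optional[datetime] = None

    def withdraw(self, when: Optional[datetime] = None) -> None:
        """Record a withdrawal of consent (right to withdraw)."""
        self.withdrawn_at = when or datetime.now()

    def allows(self, requested_purpose: str, now: Optional[datetime] = None) -> bool:
        """Use is allowed only for the stated purpose, within the retention
        window, and only while consent has not been withdrawn."""
        now = now or datetime.now()
        within_retention = now <= self.granted_at + timedelta(days=self.retention_days)
        return (
            self.withdrawn_at is None
            and within_retention
            and requested_purpose == self.purpose
        )

# Example: footage consented to for training cannot be reused for marketing.
record = ConsentRecord("emp-001", "train facial-expression recognition",
                       granted_at=datetime(2025, 4, 1), retention_days=365)
check_time = datetime(2025, 7, 15)
print(record.allows("train facial-expression recognition", check_time))  # True
print(record.allows("marketing avatar demo", check_time))                # False: purpose limited
record.withdraw(datetime(2025, 8, 1))
print(record.allows("train facial-expression recognition",
                    datetime(2025, 8, 2)))                               # False: withdrawn
```

The point of the sketch is that consent is not a one-time checkbox: each use of the data is checked against the original purpose, the retention limit, and the participant's current consent status.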

Frequently asked questions

Does the Skippy project violate employees' privacy?

Many employees report a lack of transparency and clear consent, raising suspicions of a privacy violation; however, xAI has stated that employee images will not appear in final products.

Could the videos collected in Skippy be used to create manipulated content?

Employees worry that the videos could be edited to put words in their mouths, raising fears of losing control over their personal data once it enters AI systems.

What is the impact of the Skippy project on xAI's AI avatars?

The collected data underpins avatars with diverse interaction capabilities, but it has also drawn negative reactions over the content those avatars produce.

How can employees protect their privacy when participating in AI projects?

Employees should ask for transparency about what they are committing to and should be able to decline without coercion.

Has xAI made any official response regarding the Skippy controversy yet?

xAI has not yet provided a detailed official response; the information so far comes from internal sources and press reporting.

Source: https://tintucbitcoin.com/xai-nhan-vien-lo-dao-duc-ghi-hinh/
