The European Union (EU) is leading the race to regulate artificial intelligence (AI). Earlier today, the Council of the EU and the European Parliament concluded three days of negotiations with a provisional agreement on what will become the world's first comprehensive AI law.
Carme Artigas, Spain's secretary of state for digitalization and artificial intelligence, called the agreement a "historic achievement" in a press release. Artigas said the rules strike an "extremely delicate balance" between encouraging safe and secure AI innovation and adoption across the EU and protecting citizens' "fundamental rights."
The draft legislation, the AI Act, was first proposed by the European Commission in April 2021. Parliament and EU member states will vote to approve it next year, but the rules will not come into force until 2025.
A risk-based approach to AI regulation
The AI Act takes a risk-based approach: the higher the risk posed by an AI system, the stricter the rules. To achieve this, the regulation will classify AI systems in order to identify those that pose a "high risk."
AI systems deemed low-risk and non-threatening will be subject only to "very light transparency obligations." For example, such systems will be required to disclose that their content is AI-generated so that users can make informed decisions.
For AI systems classified as high-risk, the legislation adds a number of obligations and requirements, including:
Human oversight: The bill requires a human-centered approach, with clear and effective human oversight mechanisms for high-risk AI systems. Humans must be actively involved in monitoring and supervising these systems: ensuring they operate as intended, identifying and addressing potential harms or unintended consequences, and remaining accountable for decisions and actions.
Transparency and explainability: Demystifying the inner workings of high-risk AI systems is critical to building trust and ensuring accountability. Developers must provide clear and accessible information about how their systems make decisions. This includes details about the underlying algorithms, training data, and potential biases that may have influenced the system’s output.
Data governance: The AI Act emphasizes responsible data practices aimed at preventing discrimination, bias, and privacy violations. Developers must ensure that the data used to train and operate high-risk AI systems is accurate, complete, and representative. The principle of data minimization is central: only the information necessary for the system to function should be collected, reducing the risk of misuse or harm. In addition, individuals must have clear rights to access, correct, and delete data used in AI systems, enabling them to control their information and ensure it is used ethically.
Risk management: Proactive identification and mitigation of risks will become a key requirement for high-risk AI. Developers must implement a strong risk management framework to systematically assess systems for potential harms, vulnerabilities, and unintended consequences.
Prohibition of certain uses of AI
The regulation would outright ban certain AI systems whose risks are deemed "unacceptable." For example, facial recognition AI would be banned in publicly accessible spaces except for narrowly defined law enforcement uses.
The legislation would also ban the use of emotion recognition systems in settings such as schools and workplaces, as well as the untargeted scraping of facial images from surveillance footage and the internet.
Penalties and provisions to support innovation
The AI Act will also impose penalties on companies that violate it. Violations involving prohibited AI applications will result in fines of up to 7% of a company's global revenue, while breaches of the act's obligations and requirements will draw fines of up to 3% of global revenue.
To promote innovation, the regulation will allow new AI systems to be tested under real-world conditions, subject to appropriate safeguards.
While the EU has taken the lead in this race, the US, UK, and Japan are also working to introduce their own AI legislation. The EU's AI Act could serve as a global standard for countries seeking to regulate AI.