According to Cointelegraph: The worldwide ramp-up of generative artificial intelligence (AI) development has prompted an urgent push by governments to regulate the emerging technology, mirroring the European Union's effort to enforce the world's first comprehensive rules for AI. As public concern over the potential misuse of AI grows, nations including the U.S., the U.K., China, and other G7 countries are also accelerating their own regulatory efforts.
On December 7, after various delays, EU policymakers finalized the AI Act, a groundbreaking set of regulations that includes controls for generative AI tools such as OpenAI's ChatGPT and Google's Bard.
Elsewhere, Australia launched an eight-week public consultation in June on a possible ban on "high-risk" AI tools. The consultation, which examined options ranging from voluntary measures to specific regulations or a combination of both, ran until July 26.
China rolled out interim regulations to oversee the generative AI industry, effective August 15. The measures require service providers to complete security assessments and obtain clearance before releasing AI products to the wider market. Following government approval, four Chinese tech companies, including Baidu and SenseTime, launched their AI chatbots to the public on August 31.
In France, the privacy watchdog CNIL is investigating several complaints about ChatGPT after Italian authorities temporarily banned the chatbot over suspected privacy rule breaches. Separately, the Italian Data Protection Authority opened a "fact-finding" investigation on November 22 to scrutinize the data collection processes used to train AI algorithms and to verify the security measures in place on public and private websites.
Meanwhile, the U.S., the U.K., Australia, and 15 other nations have recently introduced global guidelines to help secure AI models from tampering, urging companies to integrate security considerations into their design processes.