And outside the EU?
The GDPR, the EU’s data protection regulation, is the bloc’s most famous tech export and has been copied everywhere from California to India.
The EU’s approach to AI, targeting the most risky AI, is one that most developed countries agree on. If Europeans can create a coherent way to regulate the technology, it could act as a template for other countries hoping to do the same.
“U.S. companies, in their compliance with the EU AI Act, will also raise their standards for U.S. consumers regarding transparency and accountability,” said Marc Rotenberg, head of the Center for AI and Digital Policy, a nonprofit organization that tracks AI policy.
The bill is also being closely monitored by the Biden administration. The US is home to some of the world’s largest AI labs, such as those of Google AI, Meta, and OpenAI, and tops several global rankings in AI research, so the White House wants to know how any regulation might apply to these companies. For now, influential US government figures such as National Security Advisor Jake Sullivan, Secretary of Commerce Gina Raimondo, and Lynne Parker, who leads the White House’s AI efforts, have welcomed Europe’s efforts to regulate AI.
“This is in stark contrast to how the US viewed the development of the GDPR, which at the time people in the US said would end the internet, obscure the sun and end life on the planet as we know it,” said Rotenberg.
Despite some unavoidable caution, the US has good reason to welcome the legislation. It is deeply concerned about China’s growing influence in technology. America’s official position is that maintaining Western dominance in technology is a question of whether “democratic values” prevail. It wants to keep the EU, a “like-minded ally,” close.
What are the biggest challenges?
Some requirements of the bill cannot be met technically at this time. The first draft requires data sets to be error-free and for people to be able to “fully understand” how AI systems work. The data sets used to train AI systems are vast, and checking that they are completely error-free would take thousands of hours of human work, if verifying such a thing is possible at all. And today’s neural networks are so complex that even their creators don’t fully understand how they arrive at their conclusions.
Tech companies are also deeply uneasy about requirements to give third-party auditors or regulators access to their source code and algorithms in order to enforce the law.