Sunday, October 1, 2023

The EU wants to put companies on the hook for harmful AI



The new bill, called the AI Liability Directive, will add teeth to the EU's AI Act, which is expected to become law around the same time. The AI Act would require additional checks for high-risk uses of AI that could harm people, including policing, recruitment, and healthcare systems.

The new liability law would give people and companies the right to claim damages after being harmed by an AI system. The goal is to hold developers, manufacturers and users of the technologies accountable and oblige them to explain how their AI systems are built and trained. Tech companies that don’t follow the rules are at risk of collective action across the EU.

For example, job seekers who can prove that an AI resume screening system discriminates against them could ask a court to force the AI company to grant them access to information about the system so they can identify those responsible and find out what went wrong. Armed with this information, they can file a lawsuit.

The proposal still has to work its way through the EU’s legislative process, which will take at least a couple of years. It will be amended by Members of the European Parliament and EU governments, and it is likely to face intense lobbying from tech companies, which claim that such rules could have a “chilling” effect on innovation.

Whether it succeeds or not, this new EU legislation will have a ripple effect on how AI is regulated globally.

In particular, the bill could have a negative effect on software development, says Mathilde Adjutor, Europe policy manager for the tech lobby group CCIA, which represents Google, Amazon and Uber, among others.

Under the new rules, “developers run the risk of not only being held liable for software errors, but also for the potential impact of software on users’ mental health,” she says.

Imogen Parker, associate director of policy at the Ada Lovelace Institute, an AI research institute, says the bill will shift power from corporations to consumers — a correction she sees as particularly important given AI’s potential to discriminate. And the bill will ensure that when an AI system causes harm, there is a common way to claim damages across the EU, says Thomas Boué, head of European policy for technology lobby BSA, of which Microsoft and IBM are members.

However, some consumer rights groups and activists say the proposals don’t go far enough and set the bar too high for consumers who want to make claims.
