6 key battles ahead for Europe’s AI law

Policymakers, industry and activists are likely to fight over many parts of the bill.

The European Commission has unveiled its new rulebook for artificial intelligence, a power play that seals the bloc’s reputation as a global rulemaker for tech.

Now comes the hard part — convincing lawmakers, lobbyists and national politicians that its Artificial Intelligence Act is fit for purpose and won’t hobble innovation or hurt fundamental rights.

With much of the battle yet to come, POLITICO outlines key obstacles, controversial provisions and flashpoints that are likely to be the focus of major battles between policymakers, activists and industry reps in months ahead.

1. Bans, and how they are worded

The regulation outlines a list of banned artificial intelligence applications, but the devil is in the detail.

How watertight such bans are will come down to the precise wording of the bill’s articles. Two examples: a clause that bans practices which “manipulate persons through subliminal techniques beyond their consciousness,” and another that bans tech which “exploits vulnerabilities” of groups including children or people with disabilities.

Could this affect Facebook’s plan for an Instagram-like product (which uses algorithms to show relevant content to users) for children? Maybe, or maybe not.

Other bans include government-conducted social scoring, a system introduced by China to measure an individual’s trustworthiness and promote specific behaviors. Real-time biometric recognition systems, such as facial recognition, will be banned for law enforcement purposes unless their use is necessary to find kidnapping victims, respond to terror attacks or track down criminals. Privacy activists and some MEPs have pushed for blanket bans on the practice.

The proposal allows the use of controversial technologies that claim to recognize emotions or certain biometric traits like gender, race, sexuality and political orientation, practices that many human rights activists believe to be unethical. There will likely be a push to get those applications banned.

2. Defining ‘high-risk’

Another list likely to be contested is the regulation’s list of high-risk AI uses. These are uses identified by the Commission as having the most potential to harm society and individual rights, and are subject to strict rules. They include use in transport (think self-driving cars), education, employment, credit scoring or benefits applications. Uses in law enforcement also have a higher bar, such as in asylum and border control management, and in the justice system. Whether that list is expanded or whittled down will be determined by the lobbying battles ahead.

3. Conformity assessments

Only high-risk AI systems that have gone through conformity assessment procedures will be allowed in the EU. These standards will face a lot of scrutiny. The degree of self-regulation will also be key here. National authorities will conduct checks and inspections, but some AI providers hoping to roll out their products in employee recruitment or migration control, for example, will be allowed to meet EU standards through self-assessments. The self-assessment requirement will please the tech industry, but has raised alarm among skeptical MEPs, activists and academics, who argue this is not enough to protect citizens, as it puts compliance in the hands of AI providers. The tussle between the two sides will determine how easy or difficult compliance will be under the rules.

4. A new AI board

The regulation also will establish a European Artificial Intelligence Board, which is meant to supervise the law’s application and share best practices. Board members could be influential in deciding what uses get classified as “high risk” in the future. Daniel Leufer of Access Now, a digital rights group, criticized the creation of a new body to oversee the law, a move that could create “confusion and disharmony.” Instead, he argued data protection authorities “should be given additional resources and expertise on AI to be responsible for the application, implementation and enforcement of this regulation.”

5. Parliamentary scrutiny

Parliament is likely to push for a harder line on the several exceptions and wiggle room present in the proposal. Two letters to Commission President Ursula von der Leyen went out last week in response to a leak of the proposal. The first, signed by 40 MEPs, called for an outright ban on facial recognition surveillance and criticized the proposal’s authorization system for biometric surveillance. The second letter called for stronger anti-discrimination language, and for bans on predictive policing — whereby algorithms determine when and where a crime might be committed, and by whom — and on the use of AI tech in border control. Both sets of MEPs also want to ban automated recognition of sensitive characteristics such as gender, sexuality and race.

6. Negotiations with Council

Whatever the Parliament’s version of the bill looks like, it is likely to set up a showdown with the Council. The exceptions for law enforcement in the prohibitions are likely to please security-minded countries such as France, which is keen to integrate some AI-powered systems into its security apparatus. Late last year, a group of 14 countries, led by Denmark, published a nonbinding paper calling for a more “flexible” framework with voluntary self-labeling schemes. More privacy-minded countries like Germany could, however, temper the Council’s version of the bill before it enters negotiations with the Parliament.

Source: Politico EU