Experts: Companies ‘underestimate risk’ as first provisions of EU AI Act come into force

AI Act

A European Union-wide ban on AI systems posing “unacceptable” risk came into force on Feb. 2 as the first provisions of the EU’s AI Act took effect. Uncertainty persists, however, over what the legislation requires and which corporate practices or uses of data may risk flouting the rules.

The part of the AI Act that has come into effect first concerns technologies that risk causing the greatest harm to consumers. It prohibits systems that could have the intended or unintended consequence of undermining fundamental human rights, including AI systems that exploit vulnerable people; engage in deception, manipulation, or subliminal techniques; use social scoring for public or private purposes; exploit biometric data in real time or for categorization purposes; recognize emotions in the workplace; scrape the internet or CCTV footage for facial images to build or expand databases; and conduct certain types of predictive risk profiling.

The maximum penalty for a breach is €35 million (U.S. $38.2 million) or up to 7 percent of worldwide annual turnover, whichever is higher. Penalties will not apply until Aug. 2, however, when the full provisions of the act come into effect, giving companies several months to prepare.
