The EU has unveiled a plan to regulate the sprawling field of artificial intelligence. The regulations are aimed at helping Europe catch up in the new tech revolution while curbing the threat of abuses which could erode individual privacy rights.
Brussels wants to provide a clear legal framework for companies and individuals across the bloc's 27 member states.
"With these landmark rules, the EU is spearheading the development of new global norms to make sure artificial intelligence (AI) can be trusted," EU competition chief Margrethe Vestager said.
"By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way."
The European Commission, the bloc's executive arm, has been preparing the proposal for more than a year. A debate involving the European Parliament and 27 member states is scheduled to continue for months before a definitive text is produced.
The EU is hoping to catch up with the US and China in a sector whose applications range from voice recognition to health insurance and law enforcement.
The bloc is trying to learn the lessons after missing out on the internet revolution and failing to produce any major competitors to match the giants of Silicon Valley or their Chinese counterparts.
But there have been competing concerns over the plans, with both big tech and civil liberties groups arguing that the EU is either overreaching or is not going far enough.
Brave new world or intolerable nightmare?
"Today's proposals aim to strengthen Europe's position as a global hub of excellence in AI from the lab to the market," EU internal market commissioner Thierry Breton said.
The draft regulation lays out a "risk-based approach" that would lead to bans on a very limited number of uses that are deemed as presenting an "unacceptable risk" to EU fundamental rights.
This would make "generalised surveillance" of the population off-limits as well as any tech "used to manipulate the behaviour, opinions or decisions" of citizens.
Anything resembling a social rating of individuals based on their behaviour or personality would also be prohibited.
The proposed regulation requires companies to get a special authorisation for applications deemed "high-risk" before they reach the market.
These systems would include "remote biometric identification of persons in public places" -- including facial recognition -- as well as "security elements in critical public infrastructure".
Special exceptions are envisioned for allowing the use of mass facial recognition systems in cases such as searching for a missing child, averting a terror threat, or tracking down someone suspected of a serious crime.
Military applications of artificial intelligence will not be covered by the rules.
Big Brother fears and Big Tech concerns
Last year, Google warned that the EU's definition of artificial intelligence was too broad and that Brussels must refrain from over-regulating a crucial technology.
Civil liberties activists, meanwhile, have warned that the rules do not go far enough in curbing potential abuses of these cutting-edge technologies.
"It still allows some problematic uses, such as mass biometric surveillance," said Orsolya Reich of umbrella group Liberties.
"The EU must take a stronger position... and ban indiscriminate surveillance of the population without allowing exceptions."