EU outlines ambitious AI regulations focused on risky uses

European Commission President Ursula von der Leyen announced Wednesday the world's first government-proposed restrictions on the use of artificial intelligence.

"Artificial intelligence must serve people and therefore artificial intelligence must always comply with people's rights," she said.

"With some exceptions, real-time facial recognition in public areas could be banned," social media law professor Pedram Tabibi said.

Tabibi characterized the proposal as very much a first draft of a rulebook for ethical A.I. use by companies and governments that deploy the technology in law enforcement, courtrooms, self-driving vehicles, test grading and much more. A final version of the restrictions could take years to draft and pass. For those in the United States wondering when this country might begin discussing similar regulation, Tabibi noted that technology often crosses political borders.

"If these regulations pass," Tabibi said, "they may already affect American companies because these regulations would affect A.I. systems that are offered in [American] products and systems in the EU."

"[The European Commission] really want[s] the companies building this stuff to explain themselves and show how people won't be harmed by the A.I.," tech expert Lance Ulanoff said.

Ulanoff thought the EU's proposal, in most regards, appeared reasonable and that A.I. regulation was both inevitable and necessary, given the technology's present and future role in every realm of our lives.

"Just about everything we do in the world produces data," Ulanoff said.

And anything that produces data might employ artificial intelligence, which, if built or used carelessly or maliciously, could jeopardize our privacy.


"When you train an A.I. with data and you write the program that's going to interpret that data," Ulanoff said, "the bias of the programmer can come in, the bias of the data that was put into the system can come into play."

"There could potentially be significant fines," Tabibi said, "including as much as 6% of a company's annual worldwide revenue."

For a company like Google, that would have meant nearly $11 billion last year and likely even more in years to come.
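That figure is simple arithmetic under the proposal: 6% of a company's annual worldwide revenue. A minimal sketch, assuming Alphabet's 2020 worldwide revenue of roughly $182.5 billion (a figure taken from public earnings reports, not from this article):

```python
def max_fine(annual_revenue_usd: float, rate: float = 0.06) -> float:
    """Maximum proposed EU fine: a flat percentage of annual worldwide revenue."""
    return annual_revenue_usd * rate

# Assumed illustrative figure: Alphabet's 2020 revenue, ~$182.5 billion.
alphabet_2020_revenue = 182.5e9

fine = max_fine(alphabet_2020_revenue)
print(f"${fine / 1e9:.2f} billion")  # roughly $10.95 billion, i.e. "nearly $11 billion"
```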

"I'm hoping we still get to use A.I. because it's going to play a very important role in so many parts of our lives," Ulanoff said.

"So-called high-risk A.I. -- this is A.I. that potentially interferes with people's rights -- have to be tested and certified before they reach our market," von der Leyen said.