Suit: Company built 'most dangerous' facial recognition AI database in nation


Civil liberties activists are suing a company that provides facial recognition services to law enforcement agencies and private companies around the world, contending that Clearview AI illegally stockpiled data on 3 billion people without their knowledge or permission.

The lawsuit, filed Tuesday in Alameda County Superior Court in the San Francisco Bay Area, contends that the New York-based firm violates California’s constitution and seeks an injunction barring it from collecting biometric information in California and requiring it to delete data on Californians.

The lawsuit says the company has built "the most dangerous" facial recognition database in the nation, has fielded requests from more than 2,000 law enforcement agencies and private companies, and has amassed a database nearly seven times larger than the FBI’s.

The lawsuit was filed by four activists and the groups Mijente and Norcal Resist, who have supported causes such as Black Lives Matter and have been critical of the policies of U.S. Immigration and Customs Enforcement, which has a contract with Clearview AI.

"Clearview has provided thousands of governments, government agencies, and private entities access to its database, which they can use to identify people with dissident views, monitor their associations, and track their speech," the lawsuit contends.

The lawsuit said Clearview AI scrapes dozens of internet sites, such as Facebook, Twitter, Google and Venmo, to gather facial photos. Scraping involves using computer programs to automatically scan and copy data; the lawsuit says Clearview AI analyzes the copied images to extract individual biometrics, such as eye shape and size, which are then stored in a "faceprint" database that clients can use to identify people.
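To give a sense of what the automated scanning and copying described in the lawsuit looks like in practice, here is a minimal, hypothetical sketch of a web scraper that downloads a public page and collects the image URLs it references. The page URL is a placeholder, the function name is invented for illustration, and the sketch assumes the widely used requests and BeautifulSoup libraries; it does not reflect Clearview AI's actual software and omits any biometric analysis step.

```python
# Illustrative sketch only: a generic scraper that fetches one public web page
# and lists the image URLs found in its HTML. The target URL is a placeholder;
# this is not Clearview AI's code and performs no facial analysis.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

PAGE_URL = "https://example.com/public-profile"  # placeholder page


def collect_image_urls(page_url: str) -> list[str]:
    """Fetch a page and return absolute URLs for every <img> tag it contains."""
    response = requests.get(page_url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # Resolve relative src attributes against the page URL.
    return [urljoin(page_url, img["src"]) for img in soup.find_all("img", src=True)]


if __name__ == "__main__":
    for url in collect_image_urls(PAGE_URL):
        print(url)
```

In a large-scale system of the kind the lawsuit describes, this fetch-and-extract step would be repeated across many sites and followed by downloading the images and computing face templates, which is where the "faceprint" database would come from.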

The scraped images include not only photos posted by individuals and their family and friends but also images of people inadvertently captured in the background of strangers’ photos, according to the lawsuit.

The company also offers its services to law enforcement even in cities that ban the use of facial recognition, the lawsuit alleges.

Several cities around the country, including the Bay Area cities of Alameda, San Francisco, Oakland and Berkeley, have limited or banned the use of facial recognition technology by local law enforcement.

"Clearview AI complies with all applicable law and its conduct is fully protected by the First Amendment," said a statement from attorney Floyd Abrams, representing the company.

The company has said it saw law enforcement use of its technology jump 26% following January’s deadly riot at the U.S. Capitol.

Facial recognition systems have faced criticism because of their mass surveillance capabilities, which raise privacy concerns, and because some studies have shown that the technology is far more likely to misidentify Black people and other people of color than white people, which has resulted in mistaken arrests.

However, Clearview AI’s CEO, Hoan Ton-That, said in a statement that "an independent study has indicated that Clearview AI has no racial bias."

"As a person of mixed race, having non-biased technology is important to me," he said.

He also argued that the use of accurate facial recognition technology can reduce the chance of wrongful arrests.

The lawsuit said Facebook, Twitter, Google and other social media firms have asked Clearview AI to stop scraping images because it violated their terms of service with users.

Clearview AI also is facing other challenges. A lawsuit filed in Illinois alleges the company violates that state’s biometric privacy act, while privacy watchdogs in both Canada and the European Union have issued statements of concern.

Clearview stopped operations in Canada last year. But privacy commissioners this year asked the firm to remove data on Canadian citizens, with one commissioner arguing that the system puts all Canadians "continually in a police lineup."