A.C.L.U. Accuses Clearview AI of Privacy ‘Nightmare Scenario’

The American Civil Liberties Union on Thursday sued the facial recognition start-up Clearview AI, which has helped hundreds of law enforcement agencies use online photos to solve crimes, accusing the company of “unlawful, privacy-destroying surveillance activities.”

In a suit filed in Illinois, the A.C.L.U. said that Clearview violated a state law that forbids companies from using a resident’s fingerprints or face scans without consent. Under the law, residents have the right to sue companies for up to $5,000 per privacy violation.

“The bottom line is that, if left unchecked, Clearview’s product is going to end privacy as we know it,” said Nathan Freed Wessler, a lawyer at the A.C.L.U., “and we’re taking the company to court to prevent that from happening.”

People in New York and Vermont have also filed suits against the company in recent months, and the state attorneys general of Vermont and New Jersey have ordered Clearview to stop collecting residents’ photos.

According to the A.C.L.U. suit, “Clearview has set out to do what many companies have intentionally avoided out of ethical concerns: create a mass database of billions of face prints of people, including millions of Illinoisans, entirely unbeknownst to those people, and offer paid access to that database to private and governmental actors worldwide.”

The company’s business model, the complaint said, “appears to embody the nightmare scenario” of a “private company capturing untold quantities of biometric data for purposes of surveillance and tracking without notice to the individuals affected, much less their consent.”

Other organizations that have signed on to the legal action include the Chicago Alliance Against Sexual Exploitation, the Sex Workers Outreach Project and the Illinois State Public Interest Research Group.

The A.C.L.U. said the lawsuit would compel a facial recognition company to answer to groups representing sexual assault survivors, undocumented immigrants and other vulnerable communities uniquely harmed by surveillance.

There is a growing understanding among researchers that facial recognition systems are worse at accurately identifying the faces of people of color. Last December, the federal government released a study, one of the largest of its kind, that found that most commercial facial recognition systems exhibited bias, falsely identifying African-American and Asian faces 10 to 100 times more than Caucasian faces.

Mallory Littlejohn of the Chicago Alliance Against Sexual Exploitation, a Chicago-based nonprofit, said, “We can change our names and addresses to shield our whereabouts and identities from stalkers and abusive partners, but we can’t change our faces.”

Clearview’s practices, she said, “put survivors in constant fear of being tracked by those who seek to harm them, and are a threat to our security, safety and well-being.”
