Valerie Pavilonis

About five students gathered outside 3 Prospect St. on Monday to call on University President Peter Salovey to publicly denounce the use of facial recognition technology.

Monday marked a national day of action against facial recognition technology, or FRT, which multiple United States companies and agencies were revealed to be using through a company called Clearview AI, according to a BuzzFeed exposé published last week. Several higher education institutions, such as Columbia University, were also revealed to be involved with Clearview's technology.

In an email to the News, Salovey’s Chief of Staff Joy McGrath wrote that the Yale Police Department does not use FRT and that the University is not a client of Clearview AI. Still, activists from Yale’s chapter of the national group Fight For The Future called for a more public denunciation of FRT. The University has not released any such statement.

“What we want to ask is for them to give an official statement and make it policy to ban facial recognition technology on campus and also [at] Yale-NUS,” FFTF action coordinator and Yale Law School lecturer Sean O’Brien told the News. “We’re very happy, obviously that they have a pledge, but we’d like to see that in writing in a more public way. My understanding is there’s only some comments and some emails somewhere and we’d like that known, and if they want us to help facilitate that, of course we can help Yale, we can help the president make sure everyone knows that the technology is not going to be used here.”

Protestors say that University administrators have unofficially made it known that Yale has pledged not to use FRT.

In an email to the News, O’Brien said he handed Director of Administrative Affairs Pilar Montalvo a letter addressed to Salovey and received neither a response nor a timeline for one. In the letter, O’Brien and members of the Yale and New Haven communities wrote that while they “applaud” Yale’s statement that the University does not use nor intend to use FRT, they “challenge Yale to extend their pledge to an outright ban on FRT as official policy.” They went on to “implore Yale to increase accountability around its surveillance systems in general, via audits, reviews and annual transparency reports.”

“Given Yale’s vital and expanding role in New Haven and Greater New Haven, Yale also has a duty to New Haven residents: Yale cameras face public streets, parks and spaces where free expression and political speech must be preserved without the fear of targeting, identification, retribution or censorship,” the letter reads.

According to the BuzzFeed exposé, major corporate and government entities such as U.S. Immigration and Customs Enforcement have contracts with Clearview AI, a startup that scrapes photos from online sources like social media and matches them against faces submitted by the company’s clients.

According to Clearview’s website, the technology is an “after-the-fact” research tool and is not intended for surveillance. It states that analysts upload images from crime scenes and compare those images to photos publicly available online.

Clearview has already sold its technology to major entities in countries such as Australia and Saudi Arabia, according to documents obtained by BuzzFeed. In February, Clearview CEO Hoan Ton-That told Fox Business that his company intends for the technology to be used only for law enforcement purposes.

Clearview AI did not immediately respond to a request for comment.

According to O’Brien, FRT raises two major issues: social profiling and bias. In an interview with the News, O’Brien said that FRT can be used to build a profile that follows a person for their whole life, potentially affecting their credit or quality of life. He added that FRT algorithms are often biased and tend to single out people of color and undocumented workers, trends documented in research from Georgetown Law and other groups.

“The most vulnerable communities, just like [with] every surveillance technology, are always hurt the worst,” O’Brien said.

Yale Jackson Institute for Global Affairs lecturer and human rights investigator Nathaniel Raymond echoed O’Brien and said that the technology disproportionately targets people of color and women. However, Raymond qualified that while the discussion on campus about FRT provides a good base for conversation, dialogue must center around biometrics as a whole — for example, eyes, fingerprints and voices can also be used to identify people without their knowledge. He added that identifying a clear divide between what algorithms can and cannot be used for is imperative to avoiding equity issues.

Raymond said that the United States is currently the only First World country without a department in charge of data protection, even though regulations exist at the state and municipal levels. He added that the broader question, beyond specific “modalities” of data collection, is the larger absence of “political will and coordination” around the ways in which legal and ethical norms are enshrined in law.

“The current focus on biometric recognition technologies such as facial recognition, which is just one type of biometric, is akin to debating the quality of air scrubbers on smokestacks in the 1960s or 70s when we have not yet passed the Clean Water, Clean Air acts, nor founded the EPA,” Raymond said.

According to BuzzFeed, more than 50 educational institutions across 24 states are included in Clearview’s client list.

Valerie Pavilonis | valerie.pavilonis@yale.edu
