Is your school using facial recognition technology on campus?

January 21, 2020

Last week, digital rights group Fight for the Future and campus group Students for Sensible Drug Policy launched a joint campaign calling for a ban on facial recognition technology on college campuses. 

Fight for the Future explained in a press release that the campaign was prompted in part by facial recognition technology being marketed to schools, and that colleges and universities will soon face tough questions about its use:

Tech and security companies are marketing facial recognition technology to schools as a form of security and convenience. But in reality this technology decreases actual security on campuses, and opens up a Pandora’s box of privacy, civil liberties, and equity issues.

Students, faculty, alumni, and community members are signing petitions calling for a complete ban on the non-personal use of facial recognition on their campus. At campuses around the country, including George Washington in DC and DePaul in Chicago, student groups are organizing to introduce student government resolutions to ban facial recognition. Forty major university administrations, including Stanford, Harvard, and Northwestern, will be contacted to clarify if they are using this problematic technology.

The joint campaign comes on the heels of a previous, largely successful campaign by Fight for the Future to stop the use of facial recognition technology at concerts and music festivals.

Both domestically and internationally, discussion around the use of facial recognition tech seems to be reaching a fever pitch. The House Committee on Oversight and Reform recently held its third hearing on the subject, taking testimony from experts at the intersection of artificial intelligence, ethics, and security. And last week, Politico reported that European Union leaders were considering a five-year ban on the use of facial recognition technology in public spaces, a proposal that has divided major tech giants Microsoft and Google.

Google CEO Sundar Pichai suggested at a conference last week that he might be open to the ban, while Microsoft chief legal officer Brad Smith, in an interview with NPR, likened the idea of an outright ban in the EU to using a meat cleaver instead of a scalpel.

Opponents of the technology may have good reason to support a temporary moratorium on facial recognition technology in public spaces. In 2018, a report from the ACLU found that Amazon’s facial recognition software, Rekognition, misidentified 28 members of Congress, mistakenly matching them with people in a mugshot database.

Emerging technologies like facial recognition and the increasingly comprehensive monitoring of student activity — both in the real world and online — can magnify already strong concerns about engaging in free speech on campus. In 2018, for example, Campus Safety Magazine reported that the University of Virginia had contracted with a company to scan and monitor student social media posts for “harmful” words; posts containing such words would be forwarded to local police.

FIRE has more than once seen an overarching security apparatus on campus used to investigate students for constitutionally protected speech. Back in October, for example, two University of Connecticut students were arrested for playing a game involving the use of a racial slur. UConn police identified them by tracking their movements on campus through security cameras, card swipe data, and Wi-Fi logs.

While surveillance technologies obviously hold some promise for preventing crime, it takes very little imagination to come up with ways they can be misused to target those on campus whose “offenses” have more to do with expression than with any kind of actual misconduct. Given the success campuses have had in chilling dissenting speech using only old-fashioned forms of policing, institutions that move toward blanket surveillance will, ironically, need more watching than ever.