Forbes has an article today about a company that is creating its own database of photos of allegedly dangerous people so that its customers can use face recognition to be alerted to those people’s presence. I hadn’t heard of the product before, but seeing it instantly expanded my thinking about the role that face recognition may play in our society. And not in a good way.
The company, Terrogence, describes the product, called Face-Int, as “a massive and growing database of annotated faces and face data, highly suitable for advanced biometric security applications.” The company says that it
actively monitors and collects online profiles and facial images of terrorists, criminals, and other individuals believed to pose a threat to aviation security, immigration and national security. The Face-Int database houses the profiles of thousands of suspects harvested from such online sources as YouTube, Facebook and open and closed forums all over the globe. It represents facial extractions from over 35,000 videos and photos retrieved online portraying such activities as terrorist training camps, motivational videos and actual terrorist attacks….
When exported to a biometric system, the Face-Int™ database allows for face captured images to be cross-referenced with existing profiles, so that suspects can be identified and apprehended within minutes of on-camera detection.
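To make the mechanics of that cross-referencing concrete: face recognition systems of this kind generally reduce every enrolled photo to a numeric “embedding” vector and flag a camera capture whose embedding is sufficiently close to an enrolled one. The sketch below illustrates only that general technique; the `embed_face` stand-in, the similarity threshold, and the sample data are my assumptions, not anything Terrogence has disclosed about how Face-Int actually works.

```python
# A minimal sketch of embedding-based watchlist matching, the general
# technique products like this rely on. Nothing here reflects any real
# vendor's system: embed_face is a fake stand-in for a face-recognition
# model (a real one would run a neural network over pixel data).
import numpy as np

def embed_face(image_id: str) -> np.ndarray:
    """Stand-in embedding model: hash the image ID into a seed and
    return a unit-length 128-dimensional vector."""
    seed = abs(hash(image_id)) % (2**32)
    v = np.random.default_rng(seed).normal(size=128)
    return v / np.linalg.norm(v)  # normalize so dot product = cosine similarity

# "Enrollment": the watchlist maps each profile label to a stored embedding.
watchlist = {
    "suspect_A": embed_face("suspect_A_photo"),
    "suspect_B": embed_face("suspect_B_photo"),
}

def match(capture_id: str, threshold: float = 0.6):
    """Compare a captured face against every enrolled profile; return the
    best-matching label if it clears the similarity threshold, else None."""
    probe = embed_face(capture_id)
    label, score = max(
        ((name, float(probe @ emb)) for name, emb in watchlist.items()),
        key=lambda pair: pair[1],
    )
    return label if score >= threshold else None

# Within a run, embed_face is deterministic, so a capture of an enrolled
# photo matches exactly, while an unrelated face almost never clears 0.6.
print(match("suspect_A_photo"))  # -> "suspect_A"
print(match("random_passerby"))  # -> None (with overwhelming probability)
```

The point of the threshold is that “identified” really means “close enough”: everything downstream, including who gets stopped, turns on where that dial is set.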
Forbes could not establish whether any agency is currently using this database, though it found that the company behind it definitely has contracts with U.S. government agencies for other products. But presumably the company did not build this product without a single client, so let’s assume somebody is using it. What are we to think of it?
To begin with, as a privatized watch list this product raises all the same issues as our troubled government watch list programs. We at the ACLU have been fighting agencies for years over the profound civil liberties problems with watch lists: people being put on the lists without being told why or given the chance to see the information behind their listing, the lack of an effective means to challenge placement and get off the list, and the absence of transparency over how the lists are being used or shared. The Terrogence database takes all of the due process and other fairness problems with government watch lists and adds another whole layer: private involvement.
This company represents an example of the widely noted trend of the privatization of intelligence — the strange, ideologically driven intrusion of private companies into the basic functions of our intelligence establishment, which accelerated during the Bush years after 9/11. The privatization of watchlisting raises questions about the extent to which such companies:
- Are insulated from even the limited and inadequate checks and balances that apply to government agencies, such as the Freedom of Information Act (FOIA) and Congress’s “power of the purse.”
- May be incentivized by the profit motive to engage in additional wrongdoing that government agencies, for all the abuses they already commit, would not be tempted by.
- May exploit their corporate status to help government agencies evade checks and balances that apply to the government but not corporations, such as the Fourth Amendment and the Privacy Act.
How, especially without FOIA, will we know how companies are compiling these databases, what kind of judgments they’re making about people, what data those judgments are based on, and how accurate that data is? What processes, if any, will such companies establish to hear appeals? Anyone who has had their Facebook account suspended or needed to appeal a decision made by a large tech company knows that these companies’ appeals procedures are highly inadequate and often leave users helpless and infuriated. Meaningful due process requires human time and attention and costs money, and companies don’t like to spend money.
So, insofar as this database is part of the national security establishment, there are a lot of problems with that. But the company also talks about including photos of “criminals” in its database, and Forbes reports that the company is
also involved in other, more political endeavors. One ex-staffer, in describing her role as a Terrogence analyst, said [on LinkedIn that] she’d “conducted public perception management operations on behalf of foreign and domestic governmental clients,” and used “open source intelligence practices and social media engineering methods to investigate political and social groups.”
And that brings us firmly into the domestic sphere.
Where will this take us?
Whatever this particular company is doing right now, it’s easy to imagine the emergence of an entire marketplace of private, quasi-vigilante companies crafting blacklists of all kinds.
- Companies could come to peddle photo watch lists incorporating confirmed international terrorists, suspected local shoplifters, and anything in between.
- Vendors could sell such products to an expanding base of clients, down to and including the proprietors of local corner shops, who could plug them into their behind-the-counter surveillance cameras.
- Vendors might draw on mug shot databases, for example, to sell photo blacklists that purport to sound an alert whenever anybody convicted of a violent crime enters the scene. Or a sex offender, or, for that matter, anybody with a criminal record at all. Or anybody that some proprietary algorithm has decided is worth warning about.
- Such a marketplace would inevitably encompass databases of varying quality and responsibility, and of varying political, racial, ethnic, and/or religious bias. Vendors might sell databases of labor organizers to anti-union companies, or of corporate critics to companies targeted by consumer, environmental, animal-rights, or any other stripe of activism. Anti-immigrant vigilante groups might even compile photo databases of undocumented immigrants.
Already we know that private companies in the “risk mitigation” market for banks are compiling private terrorist watch lists. At least a few major retailers have built their own photo blacklists and hooked them up to their in-store surveillance cameras. Stores are also starting to compile and share not-always-accurate lists of accused shoplifters and other “troublemakers,” without formal due process protections. Get in a dispute with a clerk after rude treatment or fraudulent service by a store? You might find yourself thrown onto one of these watch lists by an angry employee, and it’s not clear what you could do about it. Landlords are building similar lists.
Face recognition raises the stakes of the existing problems with these growing systems. What happens if private companies begin regularly scraping photos from increasingly plentiful sources, including surveillance cameras, and combining those photos with personal information in order to make judgments about people, whether the judgment is “Are you a terrorist?” or “Are you a corporate critic?” or anything else? Such a business model would:
- Expose everybody to the risk of being misidentified. Companies, in order to brag about how many tens of thousands of photos they’ve collected, have an incentive to draw photos from a wide variety of sources, including surveillance videos that may not be very high quality. In addition, studies have shown that face recognition can be less accurate at identifying people of color. (A back-of-envelope sketch of what even a small error rate means in practice follows this list.)
- Expose everybody to the risk of being misjudged. Photos of people’s faces are only half of what companies would be offering under this model. The other half is information about the people pictured: that you are allegedly a terrorist, a shoplifter, an activist, or whatever else a given list claims. What pops up when you appear in front of somebody’s surveillance camera could range from detailed information about you (correct or erroneous) to simply the fact that you are included on such a list.
- Make people very hesitant to publicly post photos of themselves online. It might only take a few incidents of unfortunate, highly publicized mistakes before people start to become self-conscious about allowing photos of themselves to be published in places where they can be scraped by any dodgy outfit selling face-photo blacklists of who-knows-what nature.
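To put a rough number on the misidentification risk flagged in the first bullet, here is the back-of-envelope sketch promised above. Every figure in it is an assumption chosen purely for illustration, not a measurement of any real product.

```python
# Back-of-envelope false-alarm arithmetic. All figures are illustrative
# assumptions, not measurements of any real system.
false_match_rate = 0.001       # assume 0.1% of innocent faces wrongly match
faces_scanned_per_day = 5_000  # assume one busy storefront camera's traffic

false_alarms_per_day = faces_scanned_per_day * false_match_rate
false_alarms_per_year = false_alarms_per_day * 365

print(f"{false_alarms_per_day:.0f} false alarms per day")    # 5
print(f"{false_alarms_per_year:.0f} false alarms per year")  # 1825
```

Under these assumptions, a single camera with a seemingly tiny error rate still flags about five innocent people every day, and the burden falls hardest on any group the underlying model identifies less accurately.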
Speaking of scraping, it’s important to note the role that the world’s largest collection of facial photographs could play here: Facebook’s. At the ACLU we have consistently pushed for the company to give individuals control over whether their profile picture is made public or not. In his testimony before Congress last week, Mark Zuckerberg made clear that the company wants to keep profile pictures classified as “public information” outside of any user control options. That leaves them vulnerable to automated collection by this kind of industry.
Finally, let me note the very long history of private companies and government agencies working together to create databases and watch lists about people in the United States. During the labor, civil rights, antiwar, and other social justice movements of the 20th century, there were a number of private databases created by shady collections of right-wing vigilantes and super-patriots who took it upon themselves to compile dossiers on progressive activists. These private databases, such as the San Diego Research Library and the Western Goals Foundation, were often shared with police and government agencies and thus took on quasi-official roles in the efforts of police “intelligence” arms to combat those progressive movements, while remaining outside the normal checks and balances of government.
As cameras and face recognition technology continue to proliferate in American life, the prospect of a market for these kinds of databases is a reminder that face recognition threatens to bring some sweeping changes to the nature of public life. I don’t know how likely it is that this phenomenon of private face-photo blacklists will become a big part of that impact, but the notion is a frightening one, which should serve as an urgent warning to policymakers about the need for privacy protections when it comes to face recognition.