Six Questions to Ask Before Accepting a Surveillance Technology
If police in your community are saying they want to install a new surveillance technology — face recognition, cameras, or license plate scanners, for example — it's likely to be touted as the way to prevent all manner of evil, from terrorism to street crime to fraud to package theft. If we just record everything, surveillance boosters would have us believe, we can stop or solve crimes and life will be better. The authorities will also probably have specific stories they tell you — hypothetical or real — in which the technology saved the day.
How should we process those claims? If the technology can do some real good, should we accept it?
We humans naturally think in stories, and a compelling anecdote, narrative, or mental image, particularly one that evokes fear, frequently defeats all rational argumentation. But that’s often a terrible way to make decisions that shape the fundamental contours of power in our society. Ideally, public debates around surveillance technologies would revolve not around particular “movie plot” scenarios, but around a more rational, systematic, and broadly humanistic vision of technology and its role in our society.
Surveillance opponents use stories too, but law enforcement and other operators of surveillance tech typically have a big advantage: they can put their success stories on television while burying their failures. In 2014, for example, the police in Chicago announced the sentencing of a robber who may have been the first criminal caught by face recognition. But how many false leads did the police chase in that and other cases before they caught that first, highly publicized suspect? How many people were investigated, interrogated, intimidated, frightened, or had their privacy invaded because of this technology, to produce the success story that the police touted? We are unlikely ever to find out.
So how do communities, policymakers, and political leaders avoid being snookered — either by corporate or police department public relations departments, or by our own human tendency to be guided by stories and anecdotes? A good way to make more sophisticated decisions is by asking ourselves these six questions.
1. Does the technology work?
In many ways this is the threshold question, because if a technology doesn’t work, then we can stop there. There’s no reason to waste time debating privacy, or safety, or other values. Of course, most technologies work at least some of the time, in which case the question is: How well does it work? Does it fail 5 percent of the time or 95 percent? And how do we know? Can we trust the information we’re given about that rate?
Take face recognition, for example. Vendors started pushing the technology hard right after 9/11, but at that time it was highly ineffective, and deployments, though dangerous, also verged on the silly. The dawn of machine learning made the technology much more effective, though it still has error rates that are very much relevant to conversations about the technology. New technologies, in particular, often perform badly, but local officials often don’t have the expertise to cut through hype and sales jobs and recognize snake-oil when they see it.
2. How effective is the technology?
Even if the technology does what it claims, does it solve the problem it's aimed at solving? Even a technology that works perfectly may not stop bad things very often, depending on the details and context of its deployment. A metal detector, for example, might detect metal 100 percent of the time — but fail to detect plastic explosives or ceramic guns. Even a face recognition algorithm that is nearly 100 percent accurate can be defeated by things as simple as a baseball cap, mask, or sunglasses. There are many technological equivalents of the Maginot Line, the heavily fortified defensive frontier built by the French before World War II, which was rendered useless when Hitler's army simply went around it.
3. How big is the danger the technology will allegedly reduce?
How serious are the bad things the technology claims to prevent, and how frequent or likely are those things? If a technology only saves the day every 20 years, but “saving the day” means preventing a global pandemic or nuclear attack, that could justify steep costs. On the other hand, if success means preventing somebody from jaywalking, that would be a different balance even if it happens many times a day.
4. What are the negative side-effects of the technology?
Even if a technology is effective and important, what are its downsides? We might be able to prevent the smuggling of weapons from other parts of the world if we close our borders, but nobody is willing to accept the enormous consequences that measure would have. We might cut down on domestic violence and other crimes if we allowed the government to install cameras in everyone’s bedrooms, but we’re not willing to accept the side effects of such a step. Side effects can include the loss of privacy, the possibility of abuse, chilling effects on creativity and freedom of expression, and disparate racial impacts that worsen existing social injustices — all of which could be produced by our example of face recognition — as well as more tangible things like pollution, noise, and economic harm.
“Security” is the most common justification for new surveillance, but that is a term that should be viewed holistically. It’s true that theft or physical attack can harm people’s happiness and make them feel unsafe, but so can many other things — such as oppressive surveillance and violent police officers. For example, if a “security” drone flies over my yard, do I have to worry that it will record me and my friends smoking weed, get my house raided by a SWAT team, and leave me with lasting feelings of violation and insecurity? That kind of degradation in people’s security, properly conceived, is a side effect of surveillance technology that we should be especially alert to.
5. What are the opportunity costs of spending resources on the technology?
Every dollar spent on high-tech surveillance devices means a dollar not spent on other community improvements that might do much more to improve the lives of its residents. In a rational world, money would be spent first on measures that will bring the greatest improvements to the greatest number of people’s lives, and something like expensive cameras to protect against rare or minor threats would not be allowed to vault to the top of the list just because they’re sold via a vivid story. Face recognition, for example, in addition to producing bad side effects such as chilling effects, may soak up public funds that could be used to help a community address social problems, become more prosperous, and enjoy improved physical infrastructure.
6. Does the community want it?
A technology can’t be evaluated without considering the answers to the above questions, but there’s no mathematical formula for measuring those variables or computing how they should balance against each other. That will inevitably be a judgment call. But since we live in a democracy, that judgment should be made openly and democratically by each community, not unilaterally or in secret by police chiefs or other public servants. That’s why we have been educating communities around the nation on the advantages of enacting “Community Control Over Police Surveillance,” or CCOPS, bills, which require law enforcement to get permission from their city council (or other elected oversight body) before deploying new surveillance technologies. Seattle learned the wisdom of this the hard way in 2013 when it had to return a surveillance drone it had quietly purchased because the community objected vehemently to the technology. Nowadays I see many of the smarter police chiefs consult with their communities before deploying a new surveillance technology, whether or not their city has enacted a CCOPS ordinance. A number of communities have banned their police from using face recognition, and there are surely others that would react badly if it were introduced.
The next time you hear someone pushing a new surveillance technology by telling a story about how it saved the day by stopping something bad, remember that it’s important to dig deeper and seek a fuller picture of the technology and its place in your community.