How facial recognition presents a growing threat to privacy

Is facial recognition a natural tool for safer streets, or the final leap into state surveillance? When Apple introduced its facial recognition technology in 2017, it made headlines. Known as FaceID, Apple’s facial recognition system unlocks devices with a glance. Unfortunately, in September 2018 the technology made headlines again, for the wrong reasons. According to Forbes, the FBI used FaceID to gain access to the data on a suspect’s phone. Unlike a passcode, facial recognition does not require active consent to unlock protected information. The ‘key’ is in our facial features.

Police aren’t the only ones using the new technology. Facial recognition was deployed at the 2018 Rose Bowl during Taylor Swift’s performance, and Facebook now uses it to reduce the number of fraudulent accounts. In a world where everyone is sharing photos, the ‘selfie’ can unlock details about who we are to total strangers. Technology can tag and track us without our knowledge or consent. It’s time to literally face the facts: a new battle between privacy and surveillance is brewing.

How much personal privacy is at stake?

Part of facial recognition’s threat to privacy lies in how fast the technology is advancing. It presents a classic case of function creep: software is deployed and used before its full consequences are understood.

The public needs much more awareness of what the technology is and how it is being used. We need to recognize:

  • Facial recognition is advancing rapidly.
  • The technology is imperfect: we must be aware of flaws when making decisions.
  • Facial recognition is now in use by law enforcement.
  • The technology isn’t inherently good or bad. How do we balance significant benefits while safeguarding essential freedoms?
  • Can we refuse to participate in the use of facial recognition?
  • Who are the digital rights organizations defending our right to say no?
  • Privacy laws do apply to the technology.
  • If you have been subject to facial recognition, what are your rights?  

Facial recognition isn’t the future, it’s the now

Facial recognition was once the stuff of dreams, a future technology for the next generation to deal with. Alas, this is no longer the case. The stark reality is that facial recognition has gone mainstream. Systems are being bought, deployed, and trialled in all sorts of situations, often without the consent of the people whose faces are being scanned. When software vendors and engineers pitch their latest efforts, organizations pay attention. Facial recognition is a tool that offers new possibilities: it can identify individuals without requiring them to knowingly interact with the technology, and it allows for new levels of data analysis, understanding, and personalization.

The potential applications for facial recognition appear almost endless. Already on the market are homes that recognize their owners and open their doors, cameras that let parents monitor their children at day camp, and search engines that match photographs from the web. According to Mashable, Ticketmaster parent Live Nation is investing in technology that can identify concert attendees in seconds. We are living in a world where technology knows our faces, whether we choose to interact with it or not. Depending on whom you ask, the future is either full of possibility or utterly terrifying.

Is it private if the information is your face?

Can using facial recognition technology be a violation of privacy if the data it collects is your face? A frequent argument for allowing facial recognition is that we can’t expect privacy in public spaces. Dig into the law, however, and the truth is much more nuanced. While we should not expect as much privacy in public spaces as at home, there are limits to privacy intrusions. This is evident in one of the latest rulings by the Supreme Court of Canada. As case commentators Pam Hrick and Moira Aikenhead put it, “privacy in our bodies is fundamentally connected to human dignity and autonomy, and cannot be easily eroded.”

Facial recognition is another area where privacy and expectations are difficult to reconcile. Are our faces personal property, or public data? It depends on how that data is acquired and used, as well as the consent of and impact on the individual.

Far more invasive than photographs

A facial recognition database is also far more valuable to threat actors than a database of photographs. To work, facial recognition systems scan a face and extract mathematical measurements of the person in question. Apple’s FaceID, for example, includes depth in its measurements to geometrically map someone’s face. These measurements can then be compared against previously stored data to find a match.
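As a loose illustration only (not Apple’s actual algorithm, and with made-up measurement values), a stored facial template can be treated as a vector of numbers and compared to a fresh scan by distance:

```python
import math

# Hypothetical face "templates": vectors of measurements (distances between
# facial landmarks, depth readings, etc.) produced by a scanning step.
enrolled = [0.42, 1.37, 0.88, 2.10]   # stored when the user set up the device
candidate = [0.43, 1.35, 0.90, 2.07]  # captured at unlock time

def euclidean_distance(a, b):
    """Overall difference between two measurement vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Tuning this threshold trades false accepts against false rejects.
THRESHOLD = 0.10

def is_match(stored, probe, threshold=THRESHOLD):
    return euclidean_distance(stored, probe) <= threshold

print(is_match(enrolled, candidate))  # small differences: accepted
```

The point for privacy is that the stored vectors, unlike raw photos, are purpose-built for automated matching: anyone who steals them can run the same comparison.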

Unfortunately, facial data, like other biometric data, is a target for hackers. The information is valuable: it can provide details about communities under surveillance, or be used as a key to access other systems. As columnist Roger Grimes rightfully points out, most biometric identities can easily be stolen and reused. Unlike resetting a password, short of expensive and unwanted surgery, individuals cannot change their faces. Sadly, such exposures have already hit early adopters. In China, an unsecured facial recognition database has already exposed information on thousands of users.

Not as good as advertised

There are scores of other problems with facial recognition. The technology frequently exhibits gender and racial biases. Research by the Massachusetts Institute of Technology has found significant error rates in commercially released facial analysis software. A system that is highly accurate for people of European ancestry may be far less accurate at recognizing people of African or Asian heritage. Companies, including IBM, are working to improve their systems with more diverse data sets, but how well they can remove bias won’t be clear without further studies. As John Smith, a research scientist at IBM, put it:

“Facial recognition technology should be fair and accurate. In order for the technology to advance it needs to be built on diverse training data.”

Even when a system is at its best, some degree of inaccuracy will occur. If the system is too strict about what it considers a ‘match’, it will be thrown off by minor changes such as daily makeup or small shifts in expression. If the system isn’t strict enough, it will accept too many false faces as a ‘match’. While many have laughed over Facebook’s popular ten-year challenge, the meme of showing people at different ages highlights a critical point: people change as they age, and facial recognition systems must be able to adapt to that.
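This trade-off can be sketched numerically. With purely illustrative vectors and thresholds (no real system’s values), a strict threshold rejects the genuine user on an off day, while a loose one lets a look-alike through:

```python
import math

def distance(a, b):
    """Euclidean distance between two measurement vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

enrolled = [1.00, 1.00]  # stored template (illustrative values)

# The same person with day-to-day variation, and a different person
# whose measurements happen to be fairly close.
same_person_today = [1.05, 0.95]  # new makeup, slight expression change
different_person  = [1.20, 1.25]

strict, loose = 0.05, 0.40

# Too strict: even the genuine user is rejected (a "false reject")...
print(distance(enrolled, same_person_today) <= strict)  # False
# ...too loose: the impostor is accepted (a "false accept").
print(distance(enrolled, different_person) <= loose)    # True
```

Every deployed system sits somewhere on this spectrum; the threshold choice determines whether its errors fall mostly on legitimate users or on security.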

Facial recognition isn’t ‘coming’ to law enforcement: it’s already well in hand

Police and federal agents’ use of facial recognition technology comes as no surprise. Law enforcement has a long history of using biometric tools: identifying felons through fingerprints has been in practice since the days of Sherlock Holmes, and mug shots have been standard police practice for well over a century. Arguably, facial recognition too has been part of law enforcement for years. New technologies, including Amazon’s Rekognition product, are simply speeding up the processing. Instead of human eyes comparing photographs and video feeds, a computer performs the comparison, with access to significantly more data at colossal speeds.

To some extent, facial recognition technology is a natural fit for police: it offers a fast, efficient way of carrying out standard processes. However, we must also recognize that the new technology isn’t the same as past practices. When fingerprinting and mug shots became permissible as evidence, we did not live in a world where everyone is in front of a camera all the time, instantly recognizable. Privacy laws that provide exemptions for police use were not intended to be gateways into accepting surveillance states.

Balancing significant benefits against dangerous drawbacks

If we talk about facial recognition and law enforcement, we have to recognize that it *can* do great good. A notable example comes from New Delhi, India: by testing the system against a lost-child database and city video footage, The Times of India reports, over 3,000 missing children were identified in four days. In Indiana, intelligence agencies have used Vigilant Solutions software to assist cases, solving crimes including major thefts.

The temptation to use facial recognition to suppress freedom, however, is also high. In China, facial recognition systems can check you into your flight or hotel, or let you onto the subway. Sadly, they are also used to control the population and restrict movement: according to the New York Times, authorities go as far as ‘shaming’ jaywalkers and fast drivers at busy intersections. These are the situations that make privacy advocates, technology experts, and developers speak out. In October 2018, Amazon faced internal and public backlash against the sale of its product to police forces. Thus far, however, it has refused to stop the practice: no law restricts the technology’s sale.

Are there any ways to avoid face recognition if we’re not comfortable with it?

One question that crops up with facial recognition and identification is: do we have a choice? The technology can be in use without our awareness or consent. How can individuals object to facial recognition scanning if they don’t know it is in use? Do we have the right to opt out of the capture and use of our facial data?

Digital and civil rights organizations, including the Electronic Frontier Foundation (EFF) and the American Civil Liberties Union (ACLU), are pressing this point. The EFF has testified before both the United States Senate and the House of Commons over concerns about facial recognition use. The ACLU supports stronger legislation to protect against the technology’s misuse.

Even with successful arguments about the dangers of facial recognition technology, individuals may find it hard to be left alone. The Independent reported one example of this challenge: a man was fined after refusing to allow his face to be scanned by Metropolitan Police in London. When the individual covered his face to avoid the police scanners in the area, he was approached by officers. The exchange turned abusive, and the man was issued a fine as a “penalty notice for disorder”. While the department asserts the fine was for the individual’s behaviour, that misses a critical point: the police likely chose to interact with the individual *because* he covered his face.

When interacting in public life means accepting facial recognition scanning or a police questioning, do we really have a choice? Is it possible for privacy to coexist in a surveillance state?

What laws are there for facial recognition?

So what legal rights to privacy do you have against facial recognition? At present, acceptable use of the technology varies widely, depending on where you live and the purpose of its use. In the European Union, most facial recognition use falls under the General Data Protection Regulation (GDPR). The GDPR classifies biometric data as sensitive data, requiring an explicit legal basis for its use along with safeguards. The GDPR also includes provisions specifically against decisions made by automated profiling, which may include facial recognition. However, the GDPR does allow individual countries to establish legal exemptions, particularly for public safety.

In the United States, there is no federal law restricting the use of facial recognition technology. Instead, use is highly dependent on state privacy laws. Notable state laws that restrict facial recognition include:

  • Texas Biometric Privacy Act
  • Washington Biometric Privacy Law
  • Biometric Information Privacy Act of Illinois
  • California Consumer Privacy Act

Although these laws have faced challenges, at present they carry weight. In Illinois, the state Supreme Court backed the biometric privacy law against the collection of a 14-year-old’s fingerprints without parental approval. In January 2019, CNET reported that a California judge in Oakland ruled against using facial recognition to unlock a phone. How this will weigh against future situations remains to be seen, but the ruling treats compelled use of biometrics as a violation of the Fifth Amendment. Privacy is still fighting.

In Canada, facial recognition falls under the federal Privacy Act, PIPEDA (for the private sector), and provincial legislation. As a result, facial recognition may be acceptable for federal purposes but not for the private sector. For example, Border Services Officers collect biometric information from visitors planning an extended stay. Meanwhile, in Calgary, the Chinook Centre mall is under investigation by the Alberta Privacy Commissioner for using facial recognition to track shoppers’ demographics.

What else can we do?

Worried about the future? Listen, speak up, and support policies you agree with. While facial recognition technology is now out of the box, privacy awareness has grown significantly over the past few years. Companies do respond to public pressure, and lawmakers are paying attention. According to Bloomberg, tech giant Microsoft is among those pushing back, implementing ethical principles and urging governments to do the same.

Facial recognition technology, used for the right purposes, can do great things. But for privacy to coexist with it peacefully, we need better boundaries, and the sooner the better.

“Futuristic” by Digital Artist, licensed under CC 2.0