Aragon Research

We Must Do Far More to Protect Privacy

Our expectations for privacy may change in the wake of coronavirus.

Facial recognition is one disruptive technology among many that will pose new challenges for privacy advocates.

by Ken Dulaney

Recent news reports indicate that Amazon's Ring division is considering adding both facial recognition and license plate recognition to its family of doorbell and outdoor cameras. The predictable outcry over the loss of privacy followed, and as a result, I doubt that Ring will add these features.

Amazon Ring, Facial Recognition, and Privacy During the Coronavirus

Yet the features Ring proposes to build into its offerings will certainly become available soon in aftermarket software products, if they are not already. Other articles suggest that one outcome of the coronavirus pandemic will be a broad, lasting loss of privacy through many other efforts (see slide 10 of 12 of Kiplinger’s analysis). There are likely to be even more cameras watching citizen behavior, and thermal scanners at strategic points where groups gather (e.g., train stations, concerts, and perhaps even businesses) looking for people with fever or other maladies. Attacking Ring for adding these features is akin to playing a public game of “whack-a-mole” to protect individual privacy.

Privacy Laws Only Solve Part of the Problem

While privacy laws attempt to control the use of personal information, they primarily target the collector. However, information leaves the site where it is collected at lightning speed and can reach many other sites in short order. So those who accept a website's terms without reading the particulars (as most of us do) may find, when they return later to cancel their membership or alter their permissions, that they have permanently lost control of their information.

The General Data Protection Regulation (GDPR), whose model has spread globally, offers only minimal protection. Furthermore, website ownership is rarely disclosed, and many users assume that if a site is written in the language of a Western democracy, it must be secure. In fact, the site owner may not be bound by any privacy laws, as was the case for TikTok in its early implementation.

Other systems may not collect data directly but can derive accurate data by correlating elements collected from other sites (e.g., even if you check “prefer not to answer” when a site asks your income level, the site can estimate your likely income from your zip code, home price, and other factors). That derived data is then stored and can be disseminated without any interference from the individual it describes.
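To illustrate the kind of correlation described above, here is a minimal sketch of how a site might infer a withheld attribute from other data points. The function name, the income bands, and the home-price figures are all hypothetical, illustrative assumptions, not any real site's method.

```python
from typing import Optional

# Hypothetical median home prices by zip code (illustrative numbers only).
MEDIAN_HOME_PRICE = {
    "94027": 2_500_000,
    "63101": 180_000,
}

def estimate_income_band(zip_code: str, declared: Optional[str]) -> str:
    """Return the declared income band, or infer one when it is withheld."""
    if declared and declared != "prefer not to answer":
        return declared
    price = MEDIAN_HOME_PRICE.get(zip_code)
    if price is None:
        return "unknown"
    # Crude proxy: assume housing cost roughly tracks income.
    if price > 1_000_000:
        return "high"
    if price > 400_000:
        return "middle"
    return "modest"

print(estimate_income_band("94027", "prefer not to answer"))  # high
```

The point is that declining to answer does not prevent the inference; a few correlated fields are enough to fill the gap.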

And then there is the voluntary submission of information that most of us do for sites we trust (I am submitting medical information to a mobile phone app called Eureka that is studying citizens' habits during the coronavirus pandemic). Few of us know exactly what will be done with that data or how long it will be retained. There are also many cases of people, both young and old, who post information that cannot be removed and that may later be available to researchers who do not share the donor's view of how it should be used or retained. Should a young girl who posts a compromising picture of herself or makes a politically incorrect statement still deal with the social consequences of those acts when she runs for president at age 40?


Improving Privacy—3 Ideas

While regulation-driven privacy protections should be embraced and are effective where they apply, three other areas desperately need attention:

1) More laws that control the use of data—Whenever an individual becomes involved in a legal matter, US government laws and regulations should restrict what personal data the legal community can bring to bear. There should be time boundaries—say, seven years—beyond which the data is inadmissible. Privacy data should never be used to force actions unrelated to a case (e.g., forcing an admission under the threat of exposing private data that the legal team has collected).

2) Electronic leashes on data created and disseminated—Each piece of information needs an encrypted tag that links back to its creator and that the creator can use to remove the data at will, no matter how many times it has been transferred to other sites. Digital rights management has achieved some of this, but it is really up to content creation applications and services to implement these features more fully.

3) Aging of privacy information—As a best practice, most corporations today delete emails after about three years. This keeps them from being embroiled in endless legal matters, since lawyers searching for information in one case can unearth other issues. This practice should be extended to individuals.
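The “electronic leash” in idea 2 above could be sketched as follows. This is an assumed design, not an existing product: each record carries a tag bound to its creator's secret key, so only the creator can later issue a verifiable removal request. Key management, the revocation service, and downstream enforcement are all assumed and not shown.

```python
import hashlib
import hmac

def tag_data(creator_id: str, creator_key: bytes, payload: str) -> dict:
    """Attach a creator-bound tag to a piece of data before it is shared."""
    mac = hmac.new(creator_key, f"{creator_id}:{payload}".encode(), hashlib.sha256)
    return {"creator": creator_id, "payload": payload, "tag": mac.hexdigest()}

def verify_removal_request(record: dict, creator_key: bytes) -> bool:
    """A host checks that a removal request comes from the original creator."""
    expected = hmac.new(
        creator_key,
        f"{record['creator']}:{record['payload']}".encode(),
        hashlib.sha256,
    ).hexdigest()
    return hmac.compare_digest(expected, record["tag"])

record = tag_data("alice", b"alice-secret", "photo-123")
print(verify_removal_request(record, b"alice-secret"))  # True
print(verify_removal_request(record, b"wrong-key"))     # False
```

The tag travels with the data however many times it is copied; any site holding the record can honor a removal request without trusting the requester's identity claim.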
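A retention policy like the three-year email aging in idea 3 above amounts to a simple rule: purge anything older than the window. A minimal sketch, with hypothetical record fields:

```python
from datetime import datetime, timedelta

# Assumed retention window, matching the roughly three-year corporate practice.
RETENTION = timedelta(days=3 * 365)

def purge_expired(records, now=None):
    """Keep only records newer than the retention window."""
    now = now or datetime.now()
    return [r for r in records if now - r["created"] <= RETENTION]

now = datetime(2020, 6, 1)
mail = [
    {"id": 1, "created": datetime(2016, 1, 1)},  # older than 3 years: purged
    {"id": 2, "created": datetime(2019, 1, 1)},  # within the window: kept
]
print([r["id"] for r in purge_expired(mail, now)])  # [2]
```

Extending this to individuals would mean consumer services running the same sweep over personal data by default, rather than retaining it indefinitely.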

Bottom Line

The backlash against Ring’s proposal is indicative of the broader backlash that we will soon see emerging around privacy issues like facial recognition. Inevitably, products will incorporate these features. In the meantime, consumers, enterprises, and governments will need to decide how to respond. Preserving privacy in the face of new technologies like the “find my face”-style facial recognition algorithm that Ring considered requires us to understand both the changing technology landscape and our social response to it.
