NYT's Kashmir Hill, and learning to live with facial recognition
In Conversation at Duke University
(If you are new here, I am a visiting scholar at Duke University this year and I am hosting occasional email dialogs on important issues of technology, ethics, and privacy called “In Conversation.” Here’s a link to an earlier “In Conversation” about contact tracing apps.)
Introduction
“Privacy tends to lose when there is a clear public safety use case.” That means firms like Clearview AI, and everything else like them, are here to stay. Now what? That’s today’s “In Conversation.”
Earlier this year, Kashmir Hill of The New York Times broke one of the most important privacy stories in recent times — it was about a startup named Clearview AI that seemed to have achieved the Holy Grail of privacy invasions. Starting only with a face, the firm claimed it could identify most people, and instantly provide a dossier with dozens, or hundreds, of other images of that person. Already, the firm had an impressive list of law enforcement clients, she reported.
For a decade or more, Hill wrote, experts had been worried about a firm that could figure out how to de-anonymize faces. Turns out, it wasn’t all that hard. The project just needed a brash programmer who was willing to scrape the entire Web for pictures and didn’t care whose toes got stepped on.
An expected backlash ensued. Clearview AI was banned by Facebook, sued by consumers, and told it broke Illinois law. It was called a “nightmare scenario.” Still, nearly a year later, the firm is undeterred. While initially suggesting the tool could be used for yucky surveillance capitalism purposes like sales leads and hotel greetings, it has since pivoted to a safe place for privacy-invading technologies: protecting children.
During a virtual discussion of privacy journalism at Duke University’s American Grand Strategy program with Prof. David Hoffman, Hill predicted that — despite the backlash — Clearview AI has a clear path through its rough coming-out party. And so do many technologies like it. Click play above to watch the talk or click here to visit YouTube.
“There are obvious abuses of technology like this. Imagine...being a mother who walks around with young children … somebody could just take a photo of me...and then know who I am, pull up my address, know where I live ... walking into a bar having somebody who is attracted to you doing the same thing,” Hill said. “But privacy tends to lose when there is a clear public safety use case...I’ve talked to officers who have used it and they've used it to solve some terrible, horrible crimes. Child abuse. Child sex exploitation. Officers say this has been a revolutionary tool for them…they didn't have anything like this before.”
“Privacy often loses when it goes into that fight,” Hill added.
It’s impossible to argue against using any tool available to stop a child from being exploited. Law enforcement has long been an early adopter of such tools, dating back to early chat room sting operations. Usually, initial excitement over the effectiveness of these new policing technologies ultimately gives way to a more muddled reality.
Case creep is sometimes hard for cops to avoid, and some can't resist the temptation to use these advanced techniques to solve other kinds of crimes. Should law enforcement lurk in chat rooms where small-time marijuana deals are discussed, or use face recognition to track down shoplifters? What if the technology is … imperfect, and the wrong people get arrested? (Hill has written about that, too.)
Clearview is going to try to walk the line between creepy capitalism and law enforcement, Hill said, and it will probably succeed because the tool is just too useful and the cause seems too just. OK. Then it’s up to the rest of us to figure out how to put guardrails around such tools’ use by law enforcement, to make sure they aren’t discriminatory or arbitrary and don’t just lead to a vision of America that no one wants.
From: Bob
To: David
CC: Jolynn, Ken, Kashmir
Kashmir said. "the nice thing about being a journalist is you just have to report on these things, you don’t have to come up with solutions." I'm with her =) But still, someone has to try. I think Kashmir made a compelling argument that we have to learn to live in a world with Clearview AI's... so, how?. David suggested use restrictions and accountability during the talk. What would that look like in the real world? How would you prevent mission creep?
From: Jolynn
To: Bob, David, Ken
CC: Kashmir
Bob, thanks for your comments, and special thanks to Kashmir Hill for speaking with David at Duke about her important recent work on Clearview as well as her career in journalism and privacy reporting. Hill’s recent coverage does indeed raise many challenging questions regarding privacy, civil rights, and surveillance.
First, I would posit that we should not necessarily have to learn to live with Clearview AI. While I realize stopping the technology in its tracks is unlikely (and disfavored by pro-innovation folks as well as law enforcement officials who have found facial recognition technology useful), highly respected privacy scholars Woodrow Hartzog and Evan Selinger have recommended that facial recognition technology be banned. In addition, several cities, including Oakland, San Francisco, Boston, and Somerville, MA, have concluded that the threats posed by facial recognition technologies warrant moratoriums on their use. In the past year, Amazon, Microsoft, and IBM have all elected to impose moratoriums on the sale of facial recognition technologies to law enforcement as well. At a minimum, facial recognition should not be used in conjunction with police body cameras in real time, because of the documented biases of facial recognition algorithms and the inaccuracies that Hill discussed on Thursday. In addition, facial recognition should not be used on peaceful protesters as a rule, because of the likelihood that such action will chill protected speech.
Short of a ban, however, transparency, accountability, publicly available guidelines, and oversight should be absolutely non-negotiable when facial recognition and similar privacy-corrosive technologies are used by law enforcement. Police departments have been using facial recognition technologies relatively under the radar and without oversight for a number of years. Recent reporting by Hill and others suggests that law enforcement agencies have specifically been using Clearview’s facial recognition technology without public scrutiny. Even when policies are nominally in place, the requisite oversight does not exist to ensure compliance and accountability. For example, WRAL’s Tyler Dukes reported that in August 2019, the Raleigh Police Department paid Clearview $2,500 to allow three employees to use the service for one year. After receiving questions from WRAL and reviewing departmental policies, the RPD decided to end its use of the service in February 2020. Subsequently, however, the RPD discovered that additional officers had been contacted by the company directly and had also used Clearview. Ultimately, the department stated that it simply didn’t know how many officers had used the technology. The Raleigh Police Department’s use of the technology did not appear to conform to its own internal policies. If the public is unaware of law enforcement’s use and misuse of controversial surveillance technologies, informed debate is impossible. Fortunately, diligent, informed journalists and organizations like the Georgetown Law Center on Privacy and Technology and the Electronic Frontier Foundation have not flagged in their efforts to bring public attention to these issues.
These surveillance technologies pose substantial threats to civil liberties in the hands of law enforcement, but they also pose broader threats to civil rights, privacy expectations, and practical obscurity. Emerging surveillance technologies often originate with the military and then filter down to federal and local law enforcement, and ultimately, in some cases, to more general use in society. Automated license plate readers (ALPRs), a technology now used by homeowners’ associations, and drones, available at Walmart and Target, are a couple of examples. Flock Safety, an Atlanta-based company that sells ALPR technology to neighborhood groups, and Amazon Ring, which has partnered with police departments across the country to help distribute its Ring Doorbells, are companies that enable private surveillance recording and help make that footage available to law enforcement. Both the partnerships between private companies and law enforcement and the growing personal use of surveillance technologies create ethical questions for communities and a need for regulation and transparency. Widespread, unregulated use of surveillance technologies threatens the privacy and civil rights of citizens, particularly communities of color, Black Americans, immigrant communities, and people involved in protests.
The considered beliefs of law enforcement officials that facial recognition and other surveillance technologies enable them to fight crime more effectively are clearly important. But law enforcement should be one voice in a larger conversation that involves computer scientists, data scientists, civil rights advocates, ethicists, attorneys, and our elected representatives, among others. Privacy issues often implicate multiple and differing interests, and often those interests appear to conflict. Here, even assuming conflicting interests, I have no doubt it is possible to enable effective policing while also ensuring civil liberties are protected: policy makers must recognize the risks and take steps to both guide the use — and constrain the misuse and abuse — of emerging technologies.
From: David
To: Bob, Jolynn, Ken
CC: Kashmir
I am in full agreement with everything Jolynn mentions below. A few additional quick thoughts:
The goal should be that privacy doesn’t have to go into a fight with child protection or other law enforcement priorities. Privacy vs. security is a false dichotomy. As Jolynn notes below, the bigger problem we have is a lack of trustworthiness in the organizations that use facial recognition technology. If we do not have effective oversight and accountability mechanisms for these organizations, then any effective law enforcement tool can be abused. A comment I made during the discussion with Kashmir was that facial recognition technology poses problems both when it is highly accurate and when it is not accurate enough. When it is highly accurate, we need to worry about disproportionate use of the technology on historically disenfranchised communities, especially Black Americans. When the technology is not accurate enough, we need the ability to interrogate it, to have transparency into why it is providing inaccurate results and whether those inaccurate results are disproportionately impacting certain communities. I am not a fan of banning technologies, and we should first explore whether we can provide accountability for the organizations that will use the technology, and confidence that the technology is working as intended. I do think we should follow Kashmir’s lead and look at the technology suppliers to these unaccountable organizations and ask what level of responsibility companies like Clearview AI (and their suppliers, investors, and collaborators) should have to make certain their customers are accountable organizations that demonstrate a respect for fundamental human rights.
There has been tremendous academic and policy work done on the fair information practice of accountability, but I still see only limited implementation of the concept in many organizations that design and implement technology. Marty Abrams’ organization, the Information Accountability Foundation, has provided best practices for how organizations should deploy policies, people, and processes to demonstrate they can process personal data responsibly. Sadly, too many organizations implement the bare minimum and only develop accountability programs to the degree they are required by law (and to the degree that law is enforced). Net: I believe we will not have the structure we want until we have laws that demand a high level of accountability, and until enforcement organizations like the Federal Trade Commission and State Attorneys General have adequate resources to broadly enforce those laws.
From: Bob
To: Jolynn, David, Ken
CC: Kashmir
Let me push back a bit on the inevitability of all this. I'm all for accountability, and for catching terrible criminals, and I know we can't pretend tech like Clearview AI can somehow be stuffed back in the bottle. But I don't think we can rush past this: it's pretty easy to make the argument that many law enforcement agencies have not earned the trust required for access to even more powerful investigative tech tools. Google "cops" and "ex-girlfriends" and that hits you in the face. These are not isolated incidents. There's been widespread, documented abuse of the NCIC crime database for a long time. Cops run plates on women they find attractive. Imagine what might happen with facial recognition tools and other really invasive technologies. These abuses are already illegal, yet when the Associated Press conducted a nationwide investigation, journalists found punishment was spotty. Shouldn't we make sure we have real, effective accountability in place for the technologies we already use before we spend money on tools with even graver potential for abuse?
From: Ken
To: Bob, Jolynn, David
CC: Kashmir
Interesting give and take. What comes to mind is the tension over transparency of government information. Some laws (e.g., FOIA) require it. Loopholes either prevent it (generally with the nat’l security rationale) or give agencies leverage to limit it. But the interpretation of when information must be shared or can be withheld changes with time, legal decisions, and individuals (what defines a “public figure,” for example), and it makes for a never-ending headache for those watching.
I agree with David that a lot of work has been done on this, and also agree that those who are advocating for change (greater transparency) have not seen a lot happen.
There are two levels here to me: 1) privacy/security of the information that is being stored and used, and 2) transparency in the processes by which the information is accessed and utilized. Do citizen police oversight boards ever address things like this? (I know there is a BIG concern about how effective these boards really are…). Any examples at the very local level where there has been success in reining in some of these abuses?
From: David
To: Bob, Jolynn, Ken
CC: Kashmir
Just want to echo Ken’s point on the transparency of processes. This is something that is critically important for large, complex systems, including national security surveillance. While transparency of individual decisions may be impracticable (too many transactions to review) or inappropriate (they may reveal sources and methods of intelligence collection), substantial transparency around the processes that are put in place can greatly reduce privacy risks.
From: Jolynn
To: Bob, David, Ken
CC: Kashmir
I agree with Bob on the accountability point and think the moratoriums on use and sale that several cities and companies have implemented are a useful attempt to hit the pause button while these accountability issues are sorted.
I also agree that discipline, including long-term suspension and termination, should be standard operating procedure for privacy abuses in law enforcement, both misuse of personal data and misuse of surveillance technologies. It seems reasonable to require mandatory criminal prosecution for certain categories of data abuse, for example, all abuses of data or surveillance technologies that are related to domestic violence (current or former partners), gender-related violence, or stalking. Given the persistence and pervasiveness of the problems reported in the press, it may be that reporting requirements and oversight by a non-law enforcement body (one that is also not answerable to a law enforcement body) should become part of the systematic handling of data and surveillance abuses that occur in law enforcement agencies.
Accountability in the area of discipline for illegal conduct has been a controversial issue for police departments in the context of excessive use of force and racist conduct. https://www.nytimes.com/2020/05/30/us/derek-chauvin-george-floyd.html Numerous entities have produced policy recommendations on ways to promote accountability in these contexts. See https://fas.org/sgp/crs/misc/IF11572.pdf. Similar recommendations could be applied in the data abuse context as well.
Demonstrating transparent processes and accountability is within the purview of police departments. It would not be unreasonable to require, as a condition of access to and use of technologies that pose privacy and security risks to citizens, a showing to an independent third party that data and surveillance abuses are dealt with swiftly and effectively.
From: Kashmir Hill
To: Bob, Ken, David, Jolynn
This has been a fascinating discussion. I don't have much to add, beyond noting that it seemed to me back in January and February that there was going to be a robust debate about Clearview among lawmakers and regulators that might lead to a real policy response. But then the pandemic happened, and it seems so far that, beyond fights in a few jurisdictions such as Illinois and Vermont, and abroad, this might be another time of inaction.