Opinion: This high-tech way to try to slow the coronavirus spread has low-tech problems
We must guard against surveillance opportunists who will endanger public health and the health of our democracy.
We all know now how badly underequipped America was to fight a public health battle on the scale of COVID-19. But as we struggle to catch up with countries that have successfully pushed back on the outbreak, we start to see new dangers in the weapons we deploy to fight this disease, including some of those used abroad.
As federal, state and local governments increasingly contemplate big tech and mass surveillance as a tool to combat the spread of the deadly virus, we must guard against surveillance opportunists who will endanger public health and the health of our democracy. For some Americans, the consequences of expanded data collection could be as deadly as the disease itself.
As we’ve seen in China, Taiwan and South Korea, every facet of modern life can become a tool for tracking the virus’s spread. Whether it’s the government using cellphone tower data to track the movement of travelers from Wuhan to other parts of China, or pushing for using new apps that predict if users have been exposed to the disease, or gathering information from social media to map where users are posting from, our digital lives are becoming medical diagnostic tools.
As much as this surveillance might seem like a smart way to fight the pandemic, these programs can get it wrong. There is a profound risk that these types of artificial intelligence systems will mirror the prejudices of their human designers, falsely targeting Asian Americans and other marginalized groups. There is also the risk that they drive many of those who have been infected into the shadows, worsening the spread. And once the period of contagion is over, these emergency surveillance tools may easily be co-opted for other purposes, everything from tracking graffiti to tax evasion, making Orwellian surveillance a permanent part of American life.
Perhaps the most high-profile public health tech tool that’s actually been put in place to deal with the coronavirus, rather than merely discussed, is the partnership (albeit fraught) between the Trump administration and Google to create a screening triage website to determine whose symptoms, travel history, and other risk factors mean that they should be prioritized for treatment. Users seeking tests at participating facilities log in with their Google account, enter their health data, then get a referral to COVID-19 testing if they are deemed a priority.
Making Google part of the national emergency response (announced long before Google had agreed to it) prompted privacy advocates to ask what would happen to the data. The law is completely unclear on whether this data can also be used by government agencies ranging from public health authorities to Immigration and Customs Enforcement.
Furthermore, if potential patients have to register with a Google account using their real name, it could dissuade certain groups of individuals from getting screened. Take a moment to imagine what it is like for undocumented immigrants living through the coronavirus crisis. For those who have the symptoms of COVID-19, a trip to the emergency room could bring a death sentence: deportation to a far-off country even less equipped to handle the threat of the pandemic. If even a small fraction of undocumented immigrants feel unsafe getting medical treatment, the virus could spread unchecked.
Similarly, those Americans who have outstanding police warrants may also be dissuaded from handing their information to public-private partnerships. And some Americans will avoid registration on ideological grounds to avoid giving corporate entities or the government their intimate health details.
Using surveillance widely to cope with this pandemic also magnifies the potential for far-reaching consequences from faulty technology. For instance, it’s not that far out to imagine government officials using current tracking software such as HealthMap (which scours social media sites for flu-related words to identify incipient flu outbreaks) or Flu Near You (which asks its users to self-identify their flulike symptoms) to impose quarantines or otherwise restrict people’s movements. Local governments in Chicago and New York have relied on similar programs that scraped people’s social media for terms related to foodborne illnesses in order to identify and shut down restaurants prone to food poisoning.
But despite successes using these apps for food poisoning and the flu, the effectiveness of this sort of mass surveillance system is decidedly unclear, especially if it is expanded to rely on cellphone locations and internet history. Previously, systems might have been able to guess who had seasonal flu based on their Google queries, for example, but in the midst of this pandemic nearly every American is running these same searches. Other attempts to develop this technology, such as Google Flu Trends, were abandoned as failures.
Moreover, using artificial intelligence to determine who can leave their home or take transit raises the risk of AI bias. In the U.S. (and particularly New York City), where housing is appallingly segregated, it’s easy to imagine how AI could lead to a form of COVID-19 redlining or otherwise replicate the worst shortcomings of “predictive policing,” which often draws on racially biased crime data to recommend even more racially biased policing.
Ultimately, there’s the threat this technology poses to our civil rights and the rule of law. Government access to this type of tracking and personal data means officials will have the power to exclude people from society, effectively subjecting them to home confinement without trial, appeal or any semblance of due process. It’s an appealing response when the government gets that decision right, but a chilling power if abused.
In China, residents have been forced to install phone apps that track their movements and assign them a red, green or yellow coronavirus score. Get a bad score and suddenly public transportation, work and school are out of bounds. And, as people in China are learning, when a computer program quarantines you, that automated judgment can be impossible to challenge and reverse. Disturbingly, there’s growing evidence that the expanded behavioral tracking will stick around long after the crisis is over, giving Beijing a new way to track religious minorities and political dissidents.
In the weeks ahead, we must be vigilant. Whether it was the internment of Japanese Americans during World War II, the profiling of Muslim Americans following the terror attacks of Sept. 11, 2001, or the mass incarceration for petty and nonviolent crimes when crime rates spiked in the ’80s and ’90s, our rights are most at risk when we feel scared. And the changes we accept in times of crisis can last far longer than the immediate crisis.
In the weeks following 9/11, Congress hastily expanded surveillance powers through the USA PATRIOT Act. Many of those emergency provisions were originally supposed to expire more than a decade ago. This week, Congress voted to renew them yet again. Taking evidence-based steps to protect public health will save lives in the coming days, but any damage we do to our Constitution may not heal for decades.