Can viral videos inhibit police brutality?
The viral video that captured George Floyd's death sparked outrage and massive protests around the world. In the wake of his killing, more footage of police violence surfaced as protesters clashed with law enforcement during the Black Lives Matter demonstrations. At the same time, surveillance technologies have taken on a power of their own. States everywhere are trying to gather data about protesters and dissidents, using sophisticated tools of spyware and surveillance. Under the surface of the tragic events surrounding the Black Lives Matter movement and myriad sites of mass violence elsewhere, we are glimpsing a secret arms race. This is a contest between law enforcement and its supporting technology corporations on one hand and activists (in all their variety), investigative journalists, and justice-oriented tech experts on the other.
Hateful language and violent videos are vehicles for trauma as well as weapons of accountability.
Those of us who lived through the Cold War tend to think of arms races as contests between powerful states vying for military-technological supremacy. Increasingly, many of us now recognize that a new kind of digital arms race is happening in a shadowy underworld, this time between ambitious governments using hacking, spyware, and troll farms to gain advantage over others. But there is yet another digital contest happening, not in the world of intelligence agencies, but before our eyes as we watch videos of police brutality.
As a tool in this contest, videos depicting collective violence have their own violent power. Content moderation on social media platforms, for example, is psychologically hazardous work. Thousands of people working as content moderators for platforms like YouTube, Facebook, and Instagram experience high levels of stress and trauma from constantly witnessing violence and hate speech. The public experiences this too: footage of police brutality posted to social media, the death of George Floyd being arguably the most widely seen video of this kind, has been an unintended source of trauma for activists, members of the Black community, and anyone else whose sympathies and curiosity led them to view it. Yet the same powerful emotions these videos evoke are key to their capacity to provoke collective indignation. Hateful language and violent videos are vehicles for trauma as well as weapons of accountability.
There are other tools being brought to bear in this contest as well. Last month, IBM, Amazon, and Microsoft each announced a halt to providing facial recognition services to the police, a development widely seen as “progress” in efforts to check abuses of digital data. However, in a recent article for The Intercept, Michael Kwet uncovers an astonishing array of big tech services still being developed for law enforcement, including Microsoft’s Domain Awareness System, built for the New York Police Department and now expanded to Singapore and Brazil. At the same time, Veritone’s IDentify facial recognition technology, which runs on Amazon’s cloud infrastructure, has been “supercharging” local police services in Oregon, where it was first adopted.
Given this information, at least some version of mass surveillance technology seems to be at work in the ongoing #BlackLivesMatter movement, which gained momentum with the rights abuses by unidentified federal forces in Portland and elsewhere. Whether it be Veritone’s facial recognition platform or some other surveillance tool yet to be revealed, the police are very likely watching and analyzing images of the people in the streets. These technologies operate beyond the reach of accountability (the code on which they are based is proprietary). Their algorithms have well-documented biases: because the systems were trained predominantly on white faces, they misidentify members of so-called visible minorities more often. This leads to more frequent errors affecting those already the targets of disproportionate levels of policing, arrest, incarceration, and injustice.
At the same time, the protesters are watching back. Cities everywhere are saturated with smartphones, and people are recording and uploading video to social media, some of which finds its way to major media outlets. Among the more compelling viral videos are those depicting the “moms” who, despite their obviously peaceful protests, can still be subjected to police brutality. Although reporters from flagship newspapers are present at the protests, most images come from ordinary people equipped with cell phones. Portland’s recent showdown with federal officers offers a good example of a global development: the raw data now being used to bring states to account is in the hands of ordinary people or “citizen witnesses.” Whereas in the past journalists vetted every cause that received public attention, these journalistic filters are now often bypassed: images travel from the streets directly to social media platforms and from there to the world.
Digital witnessing constitutes the newest frontier in an information technology arms race. People are taking countless images, supported by an array of tech-savvy analysts and developers dedicated to protecting them and their evidence. Meanwhile, those with the power to wield the tools of surveillance, censorship, and disinformation are, in fact, as much on the defensive as they are succeeding in their goals. This witnessing needs to be followed by justice measures. Few police, for example, are prosecuted for their well-documented crimes. That may change. But for now, we cannot afford to be complacent about this contest or assume that these technologies of witnessing are somehow morally neutral. Accountability has to extend to the states and corporations that are using new technologies of surveillance and control. The power that we give the already powerful will ultimately decide how new technologies will be used, whether to protect rights or violate them.