PARIS — It’s been a tumultuous few months for so-called “surveillance tech.”
Most recently, following pushback from Black Lives Matter activists, Amazon has suspended police use of its facial recognition software for one year. IBM followed suit, announcing it will stop offering its similar software for “mass surveillance or racial profiling.” The moves from the tech giants are a step, small as they may be, in the right direction. Yet they also come amid calls, during the pandemic, to turn to such technology to ensure public compliance with measures to stem the spread of the deadly COVID-19 virus.
Despite the potential medical benefits, the use of geolocation technology to curb the coronavirus has raised concerns over fundamental data-protection rights, especially in countries like China, South Korea and Israel, where tracking has been more intrusive: enlisting credit card records for purchase patterns, GPS data for travel patterns, and security-camera footage for verification.
In Russia, the pandemic proved a convenient excuse to test a nascent, China-inspired citizen-monitoring system, backed by a Moscow court ruling in early March that the city’s facial recognition system does not violate citizens’ privacy. Even places not particularly known for police-state tactics are pushing the limits: In Paris, cameras were installed at the busy Châtelet metro station to monitor mask use, as it is illegal to take public transportation without a mask.
Photo: Lianhao Qu
Similar, seemingly well-intentioned efforts, such as fast-tracked coronavirus data-collection apps, have raised fears of data breaches by both hackers and governments, including in the Netherlands and South Africa. In Germany, a country known for its hard line on privacy protection, new surveillance tools are being met with considerable defiance. An article in Die Welt asks, “How can you defend yourself against facial recognition?”, questioning not only the reliability of the recognition hardware and software but also its growing availability to private companies.
The heightened scrutiny of the pandemic months has only multiplied amid the social unrest that followed the police killing of George Floyd, an unarmed Black man, in Minneapolis. If companies and governments rushed to implement face-scanning systems to track the movements of COVID-19 patients, what prevents them from exploiting the same technology to gather data on Black Lives Matter protesters?
Big Brother, it turns out, has racist tendencies. But in a fight-fire-with-fire sort of way, technology itself may help steer us away from the slippery slope of profiling. At the Massachusetts Institute of Technology, researcher Joy Buolamwini, nicknamed the “poet of code,” founded the Algorithmic Justice League, which aims to produce more inclusive and ethical technology. Through her research, Buolamwini found that recognition software is significantly more likely to misidentify darker-skinned people than lighter-skinned ones, findings that could fuel calls to audit the technology for racial bias.
Meanwhile, the U.S.-based Data for Black Lives, a movement to counter historically racist uses of big data, posits that “Tools like statistical modeling, data visualization, and crowd-sourcing, in the right hands, are powerful instruments for fighting bias, building progressive movements, and promoting civic engagement.” Whether it’s the selective collection of data or what happens with the data once gathered, surveillance tech companies should know that they’re being watched too.