The ongoing pandemic has significantly changed the landscape of digital rights around the world. The collateral damage of the public health crisis has been a weakening of the immune system of democracy: freedom of expression and the free exchange of information. Digital rights are not just human rights in themselves; they also form the bedrock on which civil society stands to rally for other human rights. At this stage, it is critical to evaluate the direction in which digital rights are headed.
Digital rights face different challenges in different parts of the world, but three pressing issues stand out and urgently demand our attention:
- Online censorship on social media platforms caused by automated content filtering.
- The use of facial recognition technologies and mass surveillance by governments, which chills free speech.
- Illegal internet shutdowns to curb political dissent.
Automated content filtering: Algorithms as the arbiters of truth
An unintended consequence of the COVID-19 pandemic has been tech giants' increased reliance on content-filtering algorithms for online content moderation. With human moderators sidelined by social distancing and work-from-home arrangements, social media companies are leaning ever more heavily on algorithms that are incapable of understanding context. Automated content filtering is prone to false positives and to suppressing sensitive speech that poses no clear and present danger of inciting imminent physical harm. As a result, more content is being taken down than ever before, including legitimate content.
Algorithm-based content filtering is becoming the accepted way of preventing hate speech, copyright infringement, misinformation and other online harms.
While the goal of preventing hate speech or copyright infringement may be legitimate, algorithmic filtering as a means amounts to prior censorship and is disproportionate, highly restrictive and prone to false positives.
During the COVID-19 pandemic, both Twitter and Facebook took down posts by Brazilian President Jair Bolsonaro in which he falsely claimed that an anti-malarial drug cures the coronavirus. The takedowns contravened the platforms' generally hands-off approach to moderating statements by political leaders. While technology companies may be justified in acting against prominent figures who abuse their platforms during the pandemic, this is not a scalable strategy for combating misinformation and disinformation, during the pandemic or more generally.
Actors at both the national and international level are mandating modes of content moderation that lead to automated filtering and inevitably cause a chilling effect. One example is the European Union's recent Copyright Directive, which effectively forces content-sharing service providers to use upload filters to avoid even the slightest chance of copyright infringement. Germany's NetzDG likewise imposes fines running into millions of euros for failing to remove manifestly illegal content within 24 hours. India, which has the largest user base for Facebook, WhatsApp and Twitter, is also moving to make automated filtering mandatory for intermediaries.
As a result, social media platforms are likely to engage in over-broad monitoring and removal of content to avoid liability. Moreover, ample research shows that automated content filtering casts a wide net of censorship and has a documented history of false positives. For instance, YouTube's Content ID blocked a son from sharing a video of his father's funeral because of the songs playing in the background. In another incident, a white noise video attracted five copyright infringement notices. The damage from such false positives would be compounded significantly if a potentially infringing video could not be uploaded at all.
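The context-blindness behind incidents like these can be illustrated with a deliberately simplified sketch. The fingerprints and the filter below are hypothetical, not any platform's real system; the point is only that a matcher which flags an upload whenever any segment matches an index cannot distinguish wholesale piracy from incidental background audio.

```python
# Hypothetical sketch of a context-blind copyright filter.
# "Fingerprints" here are just string labels standing in for audio signatures.
COPYRIGHT_INDEX = {"song_xyz_fingerprint"}  # assumed index of protected works

def naive_filter(upload_fingerprints):
    """Flag the upload if ANY segment matches the index; no notion of context."""
    return bool(COPYRIGHT_INDEX & set(upload_fingerprints))

# A pirated upload and a home video of a funeral with the same song
# playing in the background produce identical results:
pirated = ["song_xyz_fingerprint"]
funeral_video = ["crowd_noise", "eulogy_speech", "song_xyz_fingerprint"]

print(naive_filter(pirated))        # flagged, as intended
print(naive_filter(funeral_video))  # also flagged: a false positive
```

Real systems are vastly more sophisticated, but the structural problem is the same: the signal they match on carries no information about fair use, incidental capture or commentary.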
Such content moderation requirements weaken dissent and discourse and strengthen Big Tech's control over ordinary people's free speech.
Digital rights scholars need to collaborate with intermediaries and request access to the typology of content that automated filtering flags and takes down. At least some intermediaries may cooperate: by their own admission, they are none too pleased with such burdensome laws.
Facial recognition: Self-censorship when the government is watching
Mass surveillance is moving past the age of fingerprints and call-record monitoring into the era of facial recognition. The absence of legislation has left biometric data vulnerable in the hands of both state and private parties, enabling disproportionate uses such as state surveillance of protest sites and voter manipulation by intermediaries, to name a few.
Recently, we witnessed Facebook's historic $550 million settlement in the USA over unlawful facial recognition. However, the lack of universal standards and of legislation in many countries has left a vacuum in the regulation of surveillance. States, along with Big Tech companies, are quick to capitalise on this vacuum for their own political and economic benefit. As a result, people across the world have suffered at the hands of facial recognition systems with poor accuracy.
Thus, there is an urgent need to examine whether facial recognition should be permitted, regulated or banned entirely, and to begin a process of citizen engagement, built on stakeholder consultations and strategic meetings, to determine where facial recognition goes.
Internet shutdowns: Cutting off society's lifeline to curb misinformation (and dissent)
A digitally interconnected world may be a myth for countries and regions that regularly witness targeted internet shutdowns.
Professor David Kaye, the UN Special Rapporteur on Freedom of Opinion and Expression, has described internet shutdowns as a "communications siege" and as "common tactics by the government to silence dissent."
Over the past year, South Asia has been the global hotbed of internet shutdowns. Unfortunately, each passing year sets a new record for the world's most protracted shutdown. The Indian region of Kashmir recently endured 137 days of complete internet shutdown and over 90 days of partial shutdown, without access to any news channel. Other parts of the world fare little better: Zimbabwe has suffered both economically and socially at the hands of a year-long internet shutdown.
Similarly, the Middle East and North Africa have taken the biggest economic hit from internet shutdowns in 2019, with costs exceeding $3 billion.
Governments often shut down the internet in times of societal unrest or during protests, on the pretext of curbing misinformation. However, a balancing act is crucial: there must be proportionate means of controlling misinformation that are not as extreme and as harmful to free speech as internet shutdowns. And if shutting down the internet is truly the only way forward, it ought to follow established due process of law, with procedural safeguards and judicial oversight.
History attests that democracies, which thrive on a culture of exchanging ideas and information, have performed far better in times of crisis.
Given the pandemic the world is currently struggling with, the protection and promotion of digital rights are vital to a vibrant democracy: one with a robust human rights framework, a free exchange of ideas, transparency in administration, and a system of checks and balances that ensures public accountability.