#FreedomNotAvailable: Voices of Artists, Journalists and Protesters Under Threat in the Digital Space

By ARTICLE 19 Mexico and Central America

In a context of already widespread violence to silence the press in the physical environment, Mexico’s journalists now face additional pressure, threats and intimidation forcing them to delete digitally-shared content. Content removal[1] has the potential to silence essential expressions in a democratic society. Voices of artists, journalists and protesters face the risk of being eliminated and illegally erased from the digital space without the guarantees of due process.

The “Defending Freedom of Expression on the Internet: Transparency and Due Process of Online Censorship through Content Removal” project supported by Indela has brought this reality to light and shown the negative effects it has on freedom of expression and the right to information.

The various activities carried out as part of this project have led some State actors to publicly commit to greater transparency regarding the removal requests that institutions of the Mexican government make to social media platforms. The project has also contributed to demands that social media platforms include in their transparency reports which State institutions are making requests, what type of information they are asking to have removed, and the reasons for these requests.

Through the #FreedomNotAvailable and #NeitherCensorshipNorPadlocks campaigns, we, along with other organizations, have managed to bring into the public agenda the need to defend freedom of expression from extrajudicial mechanisms like “notice and takedown” or copyright claims to remove content on social media platforms, web pages and web hosts.

Through a partnership with Harvard University’s Cyberlaw Clinic, we developed the white paper Access Denied: How Journalists and Civil Society Can Respond to Content Takedown Notices[2] which describes the impact of the DMCA (Digital Millennium Copyright Act) on journalism and the work of civil society organizations in Latin America. The Karisma Foundation (Colombia), Intervozes (Brazil) and Espacio Público (Venezuela) participated in the creation of this guide.

The results of this project have also allowed ARTICLE 19 to make tools available to civil society for responding to content removal on social media platforms, through a series of informative guides: (i) Content removal guide on Twitter policies; (ii) Content removal guide on Facebook community standards; (iii) Introduction to content removal; and (iv) Google content removal guide[3].

The impact of this project must be viewed in light of the #FreedomNotAvailable: Censorship and Content Removal in Mexico[4] report, which reviews the various mechanisms used in the country to remove online content and interfere in the right to freedom of expression and information of journalists and all users of technology.

The report explores how content removal undermines the press and the flow of information and manifests itself through: 1) content moderation policies on digital platforms, which are incompatible with the human right to freedom of expression; 2) threats and harassment of journalists to remove information from their spaces or digital profiles, and 3) content removal requests made to digital platforms—under ambiguous legal assumptions and without following due process or complying with judicial guarantees. It also describes the relationship between various institutions of the Mexican state and digital platforms to ask them to remove or restrict access to content. In this scenario, an information gap prevails, as does a lack of clarity about the legal basis giving the authorities the necessary power to request the removal of online content.

According to transparency reports from Twitter, Facebook and Google, between 2017 and 2020 Mexican authorities made 38,000 requests for content removal. However, in response to transparency requests, the authorities themselves reported only 1,697 removal requests during that same period. This reveals inconsistencies in the information provided by authorities: we know of only about 1 in 10 requests made by the Mexican State to digital platforms. In other words, for 95.6% of content removal requests we have no information, transparency, or accountability.

Indela’s support has been crucial for presenting the results of the investigation and sharing tools to address existing abuses in content removal. It has also been vital in getting the Mexican State, digital platforms and other actors to take on greater commitments to transparency and accountability, as well as to protecting the right to freedom of expression and access to information in the digital space.


[1] Content removal is the practice of deleting or restricting the circulation of information online, making use of legal frameworks and private mechanisms that limit its access. It is used illegally and irresponsibly to censor information of public interest that should circulate and remain accessible.

[2] Available at the following link: https://articulo19.org/reclamos-de-derechos-de-autor-son-utilizados-para-eliminar-contenidos-periodisticos-y-de-activistas-en-america-latina/

[3] The four guides are available at: https://seguridadintegral.articulo19.org

[4] Report available at: https://articulo19.org/libertadnodisponible/

Routes of Assistance for Online Gender-Based Violence, After the Law

By Hiperderecho

Since 2018, four types of online gender-based violence (OGBV) have been recognized as crimes in Peru under Legislative Decree 1410, including sexual harassment and the distribution of private images without consent. In this context, we set out to determine the available and suitable routes of assistance in the country’s justice system to deal with cases of OGBV and the conditions necessary for complaints to advance through these routes. In addition, we called on five people who, with great generosity and courage, allowed us to follow them through their reporting process and, along with them, discover what happens when a person seeks to report OGBV in Peru.

A year after implementing the project, we discovered that the process of seeking justice for cases of OGBV is not a straight line. Instead, it is a challenging, exhausting, and diverse process in which, despite the laws, routes usually lack transparency and justice is different for each person. Understanding this, we went beyond identifying the applicable regulations and focused on listening and acknowledging the stories of struggle, the perspectives, and the needs of those experiencing this violence. We discovered that women and LGBTQ people who have experienced OGBV face a series of gender, information, socioeconomic, racial, and digital barriers that prevent them from having equal access to an effective complaint process and justice[1].

One of the first achievements of the project was bringing to light the reality of limited access to justice. We carried out social media campaigns with strategies like the Mass Tweet using #IReportedOGBV, which trended in Peru thanks to the participation of feminist organizations, young students and people who have experienced OGBV in the country and the region. Through these dialogues, we confirmed that the impunity surrounding all forms of gender-based violence, including online violence, persists. Today, we know that OGBV is still normalized, that the justice system delays and fails to respond to the new challenges it presents, and that it is sisterhood and feminist support that acts as the pillar keeping us afloat. We are excited that the strategies being co-created in digital feminist spaces seek precisely to recognize OGBV as real violence and to take care of women and LGBTQ people who face a series of prejudices and obstacles when reporting their cases.

A second achievement was proposing and sharing methodologies to investigate OGBV from a feminist viewpoint and performing a critical analysis that lets us propose specific changes. We have put forth feminist legal methodology as a political and research tool to identify violence, negligence or missing components in the justice system that disproportionately affect women and LGBTQ people experiencing OGBV. This methodology also allows us to mainstream a gender-based approach in understanding the Law and the justice system in which the complaints are handled. By making this proposal visible, we also generated new partnerships with regional organizations, state operators, activist networks and attorney groups; that is, we are starting to collectively build a gender justice model to address cases of online violence that puts the needs of those who experience digital violence above the barriers and prejudices that exist in the justice systems of the country and the region.

Our third achievement was developing a support strategy for investigators and for people who have experienced cases of OGBV, which we call “strategic support.” Through this type of support, we provide legal advice, support in digital security and psycho-legal support to cope with the emotional burden of the complaint. In addition, we advocate for the co-creation of safe spaces for those who face violence to share doubts, feelings, desires, and expectations. As an organization today, we are backing and promoting this proposal as a practical and fundamental strategy in the search for gender justice. To do so, we have developed three practical informational workshops directed at attorneys, social workers, support persons and activists, at which we share all the logistical and emotional resources we use to provide empathetic support and discuss the challenges of providing this support. In addition, we have produced a collaborative self-care guide[2], from the stories of the people who participated in one of the workshops, to create more holistic recommendations from and for people who provide support.

Finally, we highlight a fourth achievement that sometimes goes unnoticed: we managed to bring ourselves together as a research and support team. At the beginning of the project, we did not anticipate or prepare ourselves for the emotional exhaustion of practices such as reading about violence on a daily basis, organizing testimonies discussing the cases, providing support during and outside of working hours, and generating bonds of trust in the team. These were all very emotional processes that changed us as a team and as an organization. Along the way, we questioned ourselves, we got overwhelmed and we hugged each other. Therefore, we also used project resources to receive group psychological support[3], receive holistic healing therapy and, above all, to document what we learned[4]. By sharing this experience with our colleagues in the region, we discovered that we were not the only team that had felt this way, and we reaffirmed that the resources we have must also be used to take care of ourselves. Only then will we be able to take care of others.

Our next step in the “After the Law: Seeking Gender-Sensitive Justice for Women and LGBTIQ+ People Facing Gender-Based Violence Online” project, supported by Indela and the Tinker Foundation, is to keep working from a feminist viewpoint to learn, collaborate and develop proposals, not alone but collectively, that bring us ever closer to gender justice for cases of OGBV. Our plan is to continue advocating for the effective application of the rules, but also for the urgent need to promote awareness, empathy, collective care and support for those who report OGBV. Only this way will we identify the best routes to take care of ourselves and of those who decide to undertake the arduous but courageous process of reporting.

New reality, new 2021 Indela call

2020 was a challenging year. Latin America, like many parts of the world, is facing grave and unprecedented threats to rights and to democracy. The use of information technology, for example, has come to affect aspects of our lives like never before, and responses to the Covid-19 pandemic have exacerbated the deep inequalities that already existed in our societies — from the digital divide in the region, which affected access to knowledge and information, to new and complex security and privacy challenges. 

In response to the pandemic, the public sector has introduced social controls and placed restrictions on rights. Many Latin American countries have adopted technological measures to prevent and reduce the spread of the virus, for example by implementing contact tracing apps. These apps, which have not been shown to reduce contagion, violate the right to privacy by collecting unnecessary personal data, offer little transparency as to how that data is – and will be – used, and present serious security concerns. They also increase both corporate and state power in the context of the current crisis and of our post-pandemic future.

In addition, during the first months of 2021, many governments in our region have taken public positions on the regulation of large digital platforms such as Facebook, Google, and Twitter. These positions are concerning because they undermine freedom of expression online and shape public discourse without consideration of the public’s broader interests.

While the future is uncertain, this significant and historic moment presents an opportunity to preserve and advance digital rights in Latin America. Given the current emergency we are facing, Indela (Iniciativa por los Derechos Digitales en Latinoamérica) is launching a new, more flexible open call intended to support projects that respond to the urgent needs of the digital rights ecosystem in the region.

In particular, Indela’s 2021 Open Call will support projects that protect and advance rights affected by digital technologies and that are submitted by organizations based in Latin America. We will consider proposals for public campaigns, applied research, and/or public policy advocacy, with funding of up to US$25,000, that can be implemented within a six-month period. In addition, each project will be eligible for specialized consultancies to strengthen both the impact of the project and the overall work of the applicant organization.

At Indela, we are reaffirming our commitment to strengthening digital rights in the region by supporting the organizations that defend them. We believe we must act now to address the specific and urgent challenges facing Latin Americans, by expanding and protecting our digital rights.

The 2021 call will be open from April 15 to May 15.

APPLY HERE

Uruguay: towards a population under surveillance with facial recognition

By DATYSOC

Uruguay is poised to develop a “Facial Identification” database for public safety purposes under the Ministry of the Interior. This system was approved using the National Budget Act as an “omnibus law,” thus preventing proper discussion about the issue due to the tight deadlines for approval of this type of law.

Development of this database will be under the responsibility of the Ministry of the Interior, using the database currently under the control of the National Directorate of Civil Identification, the organization in charge of issuing identification cards. The database will include facial images of adults, first and last names, sex, date of birth, nationality, and identification card number, as well as issue and expiration dates. The Ministry of the Interior has already purchased automated facial recognition software and currently has a system of 8433 cameras distributed in the country’s 19 departments, in addition to private surveillance systems. The national government has admitted that the intended use of this facial identification database is automated surveillance using facial recognition algorithms.

Particularly concerning is the broad discretion given to the Ministry of the Interior as to the possible uses of this facial identification database, since it includes any type of use for public safety purposes covered under the missions of the Organic Police Law. The concept of “public safety” is so broad that it does not define public authorities’ limitations in use of personal data.

What could go wrong?

Several recent studies [1], [2], [3] warn that most commercial facial recognition systems exhibit significant bias and remain an immature technology. Biased facial recognition technology is particularly problematic for uses related to public safety because errors could lead to false accusations and unjustified arrests.

But let’s suppose that the facial recognition algorithms work correctly, and the database is managed carefully from a technical point of view by the Ministry of the Interior. That would mean state surveillance systems could identify each individual perfectly. In that case, the question is: do we really want to go there?

The use of this technology entails great risks: it may be used to find and arrest protesters or protest organizers, or it may be used to track people remotely without their knowledge, among other concerning uses. In addition, living in a surveillance society affects people’s privacy and can also affect freedom of expression, movement and assembly, in ways we do not yet suspect. How will facial identification affect the behavior of Uruguayans? Has the impact of the possible social consequences of using biometrics in the public space been analyzed? Is it necessary and proportional?

In Uruguay the topic was included in a Budget Act (omnibus law) with no public discussion, but in other countries legislators have proposed or even approved laws prohibiting the use of facial recognition by the government to surveil its citizens [1] [2] [3] [4] [5] [6], in some cases also prohibiting the use of other biometric technologies such as voice recognition, gait recognition and recognition of other immutable physical characteristics. Several organizations working on human rights and technology issues in Latin America have highlighted problematic cases related to the use of facial recognition and have warned about the risks that this technology represents for the population.

In addition, it is important to emphasize that international human rights organizations have warned about the potential risks of abuse and recommend that countries regulate facial recognition by law, analyzing in detail the scope of its use as well as its necessity and proportionality. Along these lines, in 2020 renowned tech companies decided to place moratoriums on offering their facial recognition solutions to governments, asking that such use be regulated through parliamentary procedure.

Civil society’s warning

On October 13, 2020, the two articles of the Budget Act creating this system were approved by the Chamber of Representatives without debate of any kind. When the Budget Act passed to consideration by the Chamber of Senators (during October and November), the DATYSOC team warned about the potential dangers of using automated facial identification for public safety purposes and about the excessive discretion granted to the Ministry of the Interior, managing to place the issue on the media’s agenda [1] [2] [3] [4] [5] [6] [7]. In turn, along with over 20 organizations from Uruguay and the region, we sent a letter to the Chamber of Senators of Uruguay asking that these articles be removed from the draft of the Budget Act.

Based on these warnings from civil society, several senators took a position in favor of removing these articles or requested the inclusion of a requirement for a prior court order to authorize the use of facial identification data for public safety purposes. Unfortunately, given the limited time for discussion that a Budget Act allows, no agreement was reached. The articles, which give the Ministry of the Interior carte blanche, were approved without amendments by both chambers, preventing the in-depth parliamentary debate the issue requires.

Our strategy

Having exhausted the options to influence the parliamentary discussion process surrounding the Budget Act and seeking possible pathways to avoid greater harm, we at DATYSOC have decided to press for the inclusion of this issue in the 5th Open Government Action Plan 2021-2025. We are seeking a commitment from the Ministry of the Interior that allows, at minimum, the possibility of an informed debate with the participation of the many interested parties prior to its implementation.

Furthermore, we are closely analyzing the impact of these measures on human rights, seeking the greatest possible transparency in the process and its implementation, to keep the population informed.

Learn more about the issue

Indela is proud to support six new digital rights initiatives in the region

Selected Projects 2020

Digital and physical spaces are increasingly connected. Political and social tensions, the public’s relationship with the state and its use of technology are raising new and complex challenges to digital rights. The ongoing pandemic, and related state responses, are creating further cause for concern: throughout the region, we are seeing widespread misuse of personal data, limits on expression, a lack of information and knowledge reaching vulnerable communities, and many other alarming developments.

To support the advancement of digital rights in the region, Indela opened its second open call in 2020. We received 138 proposals from 15 Latin American countries.

Today, the Indela team is pleased to announce the six projects selected for its second funding cycle. We are very proud to support these innovative initiatives, which will work on free and fair copyright reform, reducing online gender-based violence, localizing public data protection policies, and user-centric cybersecurity laws, among other issues.

These six projects will receive funding for 12 to 18 months, as well as customized support to strengthen the impact of their work.

The final selections from Indela’s 2020 open call are as follows:

  1. “REMIX: discussing copyright and the Internet,” by Agência Lema and InternetLab, will foster a public conversation about copyright in Brazil and the need for progressive reforms.
  2. “Supporting Victims of Online Gendered Violence,” by the Cultivando Género Civil Association, will support women and girls in Aguascalientes, Mexico, who have been targeted by digital violence, to learn about the legal options available to them and make informed decisions in exercising their rights.
  3. “DATYSOC: towards a Comprehensive Digital Rights Agenda in Uruguay,” by DATA Uruguay, will strengthen Uruguay’s legislative digital rights agenda by advocating for public interest copyright regulation and internet intermediary liability policies.
  4. “Multicultural digital rights frameworks for indigenous and afro-descendent communities in Bolivia: comparative analysis and public policy advocacy,” by Asociación Aguayo and Fundación InternetBolivia.org, will work to develop contextualized regulatory frameworks for internet access and personal data protection in selected Bolivian municipalities.
  5. “A Multi-sector Initiative for Information Security and Fundamental Rights,” by Vía Libre Foundation, is a collaboration between public- and private-sector actors to develop policies that safeguard digital assets (including personal data and critical infrastructure) in Argentina.
  6. “Building bridges between Latin America’s digital rights and consumer defense communities,” by the Brazilian Institute for Consumer Defense (IDEC), will coordinate the digital rights work of consumer defense advocates with the strategies of the region’s digital rights community. In particular, the project will focus on personal data protection policies and their enforcement.

Congratulations to the organizations selected in Indela’s 2020 Open Call!

For more information about Indela and the projects we support, follow us on Facebook and Twitter.

Indirect tactics

Ramiro Alvarez Ugarte

The privacy movement has always waged battle on difficult terrain, marked by citizens’ demonstrated willingness to cede their data in exchange for benefits they perceive as useful. The COVID-19 pandemic poses a new challenge that must be viewed as part of this long history in order to be effectively understood, and I believe that the challenge has never been so great.

Let me begin by presenting the adversary in the best possible light. A massively adopted surveillance technology already exists in society, on top of which an additional layer is being proposed for deployment in exchange for two concrete benefits: slowing the spread of the virus and lifting quarantine measures more efficiently. The promised benefits are significant and should not be quickly cast aside. Fewer deaths and less sustained economic damage appear as desirable objectives in the context of a rather frightening situation that has literally all of us locked up in our homes.

The new layer of surveillance on offer could take different forms. In China, it has manifested as a digital passport accessible via a popular e-wallet system that classifies people according to nebulous criteria. For the West, large companies are promising solutions that are more respectful of citizens’ privacy. Meanwhile, several governments have looked for some kind of technological solution to the problem. All these scenarios have their disadvantages: the ones that are specific to Latin America have already been pointed out by local organizations. In this brief space, I would like to draw attention to a structural dimension that I believe should guide us in reacting to this challenge. To do so, I’ll employ the old concept of layers, from the “internet world.”

Indeed, I believe that the best way to approach the newly proposed surveillance is to understand it as “one more layer” that would be deployed over a series of underlying layers. The image is useful because the underlying layers largely define how the top layer operates; they determine what the top one can achieve and what it asks of us in return. That is to say, the new layer of surveillance is deployed on top of an existing infrastructure, on top of specific socioeconomic situations, and, crucially, on top of the operating patterns of democratic institutions, both in terms of decision-making and mechanisms for accountability.

It is this layer of “democratic governance” that I want to focus on, since I sense that the new layer of surveillance being offered to us is more or less inevitable, in part because it already exists: users of Google Maps around the world already allow access to their location data in return for a richer user experience. Who wouldn’t allow access to their location data in exchange for their neighbors’ survival? This focus therefore asks us to make a tactical pivot in order to reduce the damage or to influence the decision-making process that would lead (or not lead) to a new layer involving sensitive data, with the manifest intention of making that data “shareable” so that other people and/or health authorities can more closely monitor the population during times of quarantine.

The first step in this direction requires assuming that surveillance technologies indeed operate in one way or another in accordance with the function of the underlying layers. Thus, for example, an invasive system of mass surveillance of communications may be questionable in and of itself, but its consequences will simply be different according to whether it’s implemented by a dictatorship or by a democracy. Similarly, a CCTV system can have different consequences if the authority in charge of it is, for instance, accountable on a regular basis to a legislative oversight committee, versus if it has absolute discretion to expand, use, and enhance that system.

The second step is, then, to analyze and study the aspects of democratic governance that will determine the function of the new layer of surveillance and take direct actions specific to them.

•       The personal data infrastructure. What kind of protection exists in our country? Is it effective? Is the current legislation up to date? How do the bodies responsible for protection or enforcement work? In many countries, the answers to these questions are disappointing, but there is also special protection for “sensitive” data. Another issue that needs to be examined is related to existing mechanisms for accountability. Could the judicial branch, for example, exercise effective control over contagion tracking systems? What about the legislative branch?

•       The reach of the exception. It is fundamental that we assess the extent to which our institutions are capable of creating policies of “exception” and respecting that exceptional character. In my country, for example, rules for exceptional situations have often been created and then normalized. Can we prevent that from happening? On the other hand, it is also important to pay attention to the various instruments that can be used to define the limits of the state of emergency (time limits/expirations, objective criteria such as numbers of cases, etc.). Therefore, it is essential to evaluate the current constitutional mechanisms that allow the state of emergency to be legally “produced” and the controls that regulate these mechanisms.

•       Science. Another possible focus is to partner with epidemiologists, who know that an app could never be the sole tool used to manage a pandemic, a complex equation of which contact tracing is only a small part. For example, everything indicates that without a substantial increase in a society’s testing capacity, a tracking application could yield erroneous results (false positives and negatives). This is a typical tactical approach: focus on the underlying vulnerability of a proposal that is simpler and cheaper, and which surfaces as an alternative intervention when a more complex and effective proposal is too costly.

•       Constitutionality. There is no reason why we cannot demand the least possible invasion of privacy when confronted with the deployment of technological tools. Here, constitutional arguments demanding that measures restricting rights be limited, without sacrificing their capacity to meet their objective, should be helpful. The analysis of these limitations should take into account the amount of information collected, the guarantees established, the voluntary nature of monitoring programs, etc. Local constitutional law should be the basis of these indirect strategies, even more so, I believe, than international human rights standards.

•       Transparency. All actions to be taken by governments must be transparent; they should also be designed and implemented for the benefit of citizens, with sufficient political, legal, and social controls. It is likewise essential to consider the ways in which an additional layer of control can be set up, quickly and at no additional cost, and placed on top of the new layer of surveillance, e.g. via workplaces or traffic on public roads, where public authorities or private entities oversee the right of entry. Thus, if a digital passport system is established, what prevents a supermarket from requiring the credential to be displayed before entering? This possibility should lead us to a second and vital issue: the multiple discriminatory impacts that such technologies could have.

Those who have been working on digital rights in Latin America for years will see that what has been said up to this point looks like a hasty repackaging of old challenges. But this is because these challenges are persistent: despite advocacy efforts, increased public awareness, legal changes, and occasional scandals, invasive technologies seem to be advancing at a more rapid pace. COVID-19 presents an even more difficult scenario in which existing models intrude even deeper in exchange for concrete benefits. Never was the offer of those asking us to exchange our privacy for some possible benefit so tempting. It would be unacceptable political innocence not to see the issue from the point of view of the political leaders currently in office, who face an unexpected situation with an impact that, while still uncertain, will be massive. And it would be inept of us not to adjust tactics (and strategies) to rise to the new challenge.

Follow the discussion on our social media channels: Facebook and Twitter.


Ramiro Alvarez Ugarte is an associate professor of constitutional law at Universidad de Buenos Aires and a professor of law and social change at Universidad de Palermo (Buenos Aires). He is currently pursuing a JSD at Columbia Law School. Previously, he worked as a human rights lawyer at the Inter-American Commission on Human Rights (2009-2011) and at Asociación por los Derechos Civiles in Argentina (2011-2014), where he developed its privacy agenda. He holds an LLM from Columbia Law School (2009), where he was both a Harlan Fiske Stone Scholar and Fulbright Visiting Scholar.

Between technology and the pandemic

Paulina Gutiérrez

It is unquestionable that governments need information to respond to the pandemic, particularly to design evidence-based measures to control contagion and save lives. Likewise, it is undeniable that the use of technology to collect that information has major implications for individuals’ human rights. Any argument otherwise would overlook the requirement for governments to demonstrate that limitations imposed on our rights are legal, necessary and proportionate, especially restrictions on free movement, privacy, protection of personal data and freedom of expression.

But what does that requirement mean and what does it involve? And, more importantly, how do we make sure that during and after the pandemic the restrictions on our rights are limited only to the purpose for which they were adopted? How do we protect ourselves against any abuse and violation of our human rights?

The deployment of technology-based initiatives to address public problems is not new and has been a subject of controversy in the past. In the context of the pandemic, the difference lies in the invisibility of and lack of knowledge about a virus whose social, economic and health impact demands immediate and focused attention. Although there is a general understanding that technological means must go hand in hand with a variety of non-technological measures, the need to make the virus visible and preventable has made technology seem like the only means to control it.

Some of the technological responses range from “chatbots,” informative mobile apps that provide diagnoses based on user-identified symptoms, body temperature readers and symptom monitoring, to mechanisms that control and track movement, contact and isolation through mobile phone services, georeferencing and the use of drones.

Each of the technological measures and tools designed to control and prevent the spread of the virus involves a different degree of invasion of privacy. They record access, activities, interactions, symptoms and illnesses, and their technological and digital processing requires third parties to access personal information – information generated through the collection, storage, transmission, use, study and management of data. It is therefore not difficult to imagine a scenario in which the solution to the pandemic’s impact would be the product of information we provide, waiving our privacy and data protection.

However, the formula is much more complex, especially if we start by acknowledging that the restrictions are not confined to the State accessing our information; we must also consider the active participation of the private sector. We are thus facing privacy invasions in the name of public health. Although a health emergency can be invoked to impose restrictions on human rights, it will never be a legitimate justification when applied in isolation. In other words, combating the health crisis by accessing and using our individual, personal and collective information imposes an inadmissible surveillance system, unless the measures adopted comply with States’ mandatory observance of a set of conventional and legal protections, specifically those that safeguard the immunity afforded to privacy against arbitrary or abusive invasion by public authorities or non-public entities.

This means that States must strictly comply with the legitimacy test when adopting surveillance practices, especially those with the purpose of tracking and containing the spread of a virus. Any government that decides to opt for technological responses based on information generated by the activities of individuals must ensure that (a) the legally stipulated restriction is clear and precise and that there is a legal framework for protection of the affected rights, as well as effective legal remedies against abuse and violations of rights restricted by public and private parties – legality; (b) the restrictive measure is suitable and effective to achieve the intended purposes, demonstrating that it is the only available method – necessity; and (c) there are no other available measures and methods that would be less detrimental to the right to privacy and other rights, and that there are strict limits on the duration of the invasive measure – proportionality.

In Latin America, the legitimacy test for limitations over the right to privacy has never been more relevant, primarily due to the existing technological capacity prior to the pandemic and its abuse through targeted and mass surveillance practices, extensively documented in the last six years.
Argentina, Bolivia, Chile, Colombia, Ecuador, Guatemala, Mexico, Paraguay, and Peru are some of the countries in which governments have decided to adopt surveillance technology measures and collection methods to respond to the health crisis. There is still no evidence supporting the effectiveness of these measures; hence, public scrutiny alongside legal and judicial controls is critical to contain arbitrariness and abuses by governments and non-public entities.

Existing data protection frameworks in the region seem insufficient for the challenges posed by the pandemic. However, if we regard them as improvable, they will be invaluable for imposing controls on both governments and private entities. Even though their development and implementation in Latin America are asymmetrical and inconsistent – and in some cases non-existent – these special regulations may provide legal avenues to demand justification for any limits imposed on individuals’ power to know what information third parties have about them, to limit the duration and purpose of the processing of their information, and to decide whether to provide their most sensitive data – that is, information that may reveal their health status, race or sexual preference, among other things.

Although these legal frameworks include exceptions on the basis of public health, the principles of temporality, necessity and legality remain applicable. Above all, they are essential for knowing the destination, purpose and use of the information collected and processed, as well as for preventing these practices from becoming permanent once the pandemic is largely under control.

Therefore, we must be clearly informed about whether the use of technological tools and measures effectively contributes to containing the virus, whether these tools request or access more information than they claim to need to control its spread, and what legal framework or remedy we can resort to if our data is shared with government bodies or non-public entities not involved in containing the pandemic, to name just a few practical scenarios.

In Latin America and in other regions of the world, the legitimacy test and data protection legal frameworks have already prevented the invasive, unnecessary and disproportionate use of technology to control the health crisis:

  • In Brazil, Provisional Measure (Medida Provisória – MP) 954/2020 was suspended by the Federal Supreme Court on May 7, 2020, to prevent irreparable harm to the privacy of individuals. Measure 954/2020 was issued by the Executive Branch to order telecommunications companies to provide information on mobile users to the Brazilian Institute of Geography and Statistics (Instituto Brasileiro de Geografia e Estatística – IBGE), which would prepare statistics on the pandemic. Applying a necessity, suitability and proportionality analysis, a judge of the Supreme Court ruled that, without underestimating the severity of the health crisis and the need to develop public policies based on reliable data, the constitutional rights of individuals must not be violated, and that it was therefore necessary to suspend the measure, as it neither provided nor ensured suitable mechanisms to protect individuals’ data against unauthorized access or undue use of their information.
  • In Chile, a short-term or special temporary law was proposed to safeguard the data of individuals whose state of health is exposed and subject to processing by a variety of parties during the pandemic. The Transparency Council proposed this initiative in April, recognizing that health related data are not only constitutionally protected but also have a special protection due to the information they reveal. Something similar happened in the United Kingdom, where the Parliamentary Joint Committee on Human Rights proposed special legislation focused on precisely regulating the purpose and limits for obtaining and processing information collected through a contact monitoring app, requiring the government to delete the information collected after the end of the health crisis, and imposing measures against abuses by the government and third parties.
  • In India, the Kerala High Court admitted three petitions against the mandatory use of a contact monitoring app and the imposition of criminal sanctions for not using it. In addition, on April 24, 2020, the Kerala High Court issued an order instructing the state government to safeguard the confidentiality of data of patients at risk of coronavirus collected through a digital system operated by the government of Kerala and Sprinklr Inc. It also prohibited the company from committing any act that compromises the confidentiality of the data.
  • In Slovakia, the Constitutional Court suspended the special legislation that allowed authorities to access user data collected by telecommunications companies to monitor individuals infected with coronavirus. On May 13, 2020, the Court determined that the legislation was ambiguous and the purposes of the processing were not sufficiently clear; the legislation would allow processing of personal data without clear intentions and lacked the necessary safeguards against abuse of the information collected and processed.

The challenge is not simple, but the obligations of governments are very clear: our rights to privacy and the protection of personal data remain applicable during the health crisis. The burden of demonstrating that our rights may be subject to limitation must not be driven by technology-centered approaches whose benefits are presented as enabling the exercise of our human rights. Rather, States must observe their human rights obligations in terms of transparency and accountability, as well as their duty to prove the legitimacy of the measures; allowing those measures to be abused makes them arbitrary.

Latin American digital rights in 2020: a year of new opportunities and challenges   

Bookending 2019 were two major milestones for digital rights and civil society in Latin America. At the start of the year, we saw the #MeTooMX movement ignite in Mexico, amplifying the voices of sexual violence survivors nationwide. Later in the year, between October and December, we saw dozens of protests erupt across Latin America, especially in the Andean region, in response to deep-seated, longstanding socio-economic and political issues. Countries such as Bolivia, Colombia, Chile, Ecuador and Peru became the focal point for diverse, vibrant, decentralized movements that resulted in several state leaders leaving power.

While the full impact of 2019’s social justice uprisings is still being understood, we can already be certain that there is a fundamental relationship between digital rights and civic participation. In other words, digital and physical spaces are increasingly interconnected. For example, the #MeTooMX movement demonstrated the transformative power that citizen participation can have online, but it also illustrated the threats it can pose. While the movement brought much-needed attention to gender equality, it also led to increased attacks and harassment of women both online and off. Similarly, the protests throughout the Andes undoubtedly made important progress for social, political and economic rights, yet they were met by online censorship and internet shutdowns, and an increase in intrusive state surveillance without adequate justification or oversight. As several organizations from the region expressed in a public statement from December 20th, 2019, there is wide concern about the “global trend of persecuting people who defend human rights using digital media and platforms, including those who conduct research and provide safety training to protect and promote these rights.”

Unfortunately, not just in the Andes, but throughout Latin America, social justice protests are increasingly met by disproportionate state surveillance. Amidst the region’s crises of legitimacy and weakening institutions, many governments are now spying on their own citizens at unprecedented levels. For example, we are seeing the broad use of facial recognition and other biometric surveillance technologies, the use of targeted spyware deployed against activists, and disproportionate access to personal records – all of which are contrary to international human rights standards. As a result, we are seeing now more than ever that the connection between political and social tensions and the State’s use of information technology are central to civil society’s agenda. Just as it has never been so important to understand the digital environment to understand what’s going on in the streets, it has never been so important to understand what’s going on in the streets, to understand the digital environment.

The state’s use of information technology to consolidate its power would not, of course, be possible without the private sector, which is, in its own right, interested in amassing profit. This convergence of commercial and political interests has led digital rights organizations to develop sophisticated and comprehensive agendas centered on the principles of social justice and civic empowerment. To this end, we see new campaigns emerging on net neutrality, content moderation, the use of AI by intelligence agencies, electronic voting, and technical control over copyright issues.

2019 has shown us that, given rapid ongoing developments in information technology and ever-evolving socio-political situations throughout Latin America, it would be naïve to say there is a fixed digital rights agenda for the year ahead. But to ensure robust responses to whatever threats do emerge online in 2020, Indela is launching its second open call to support the organizations fighting to protect digital rights in the region. Our aim at Indela is to support organizations in building capacity and resilience, so that they can rise to meet the demands of protecting empowered, informed, participatory – and connected – spaces online. To talk about digital rights today is, now more than ever, to talk about human rights.

To learn more about Indela, visit our website and follow us on Twitter.

Interview with Carlos Cortés: You cannot talk about rights in the physical space without digital rights

Carlos Cortés is the founder of Linterna Verde, a non-profit think tank, and a consultant on internet and society issues. Carlos was Twitter’s Public Policy manager for Latin America. He has advised international cooperation organizations on freedom of expression and internet policy. He has a law degree from Los Andes University (Colombia) and a Master’s in Communications and Media Governance from the London School of Economics. He is a researcher on internet policy issues at the Center for Freedom of Expression of Palermo University, Argentina. He currently directs the video blog La Mesa de Centro.

Why are digital rights important?

When we talk about digital rights, we are not referring to an isolated and limited exercise in the online environment. Nowadays, the exercise of most rights in physical – or analog – spaces depends on and feeds off the possibilities for development in the online context.

In other words: without digital rights there are no analog rights. Think, for example, of the right to protest, freedom of expression, privacy, or political participation. Without digital guarantees, we can hardly talk about the existence of active citizens.

From your perspective, what are the main challenges facing the digital rights ecosystem in Latin America?

There are as many challenges as there are issues, but if I had to situate it within the regional ecosystem, the most important challenge arises from the tension between the role expected of the State and the distrust of the State. For example: we are concerned about the accumulation of data by private intermediaries. Should we then give surveillance tools to governments that have also abused their powers of inspection and control?

In the same way, we face the question of which problems we must resolve through the regulatory channel and which should be channeled through private or self-regulatory solutions.

Why is the digital rights context in Latin America unique?

Unlike other regions, and as is usual in our part of the world, Latin America tries to build all the floors of the house simultaneously – and often we start one without finishing the other.

We face questions about digital rights when we still have enormous challenges in terms of infrastructure, connectivity and digital literacy. Think, for example, of network neutrality. When we were still trying to guarantee this principle of public policy in fixed-line connections, the mobile Internet began to develop—and with it the consequent problem of ‘zero rating’. Our context is unique because we coexist and promote changes amid deep contrasts.

To learn more about Indela, visit our website and follow us on Facebook and Twitter.

Announcing the projects selected for Indela’s first Open Call

The Initiative for Digital Rights in Latin America (Indela) gladly announces the eight projects that have been selected in our first Open Call. Indela is a partnership strategically directed by Fundación Avina, Luminate and Open Society Foundations and supported by the Ford Foundation and the International Development Research Centre (IDRC).

We are very proud to support these innovative projects that will foster and protect digital rights in Latin America—such as freedom of expression, privacy, and access to knowledge— through public campaigns, impact work, applied research, and litigation.

Eight projects were selected out of 163 proposals from 20 Latin American countries. These projects will receive funding for 12 to 18 months (with possibility of extension), as well as specialized support to strengthen capacities associated with their projects.

Here are the selected projects and organizations:

  • “Defending freedom of expression on the internet: online transparency and due process in view of censorship by content removal” by Article 19 Mexico and Central America is focused on Mexico, Central America, and the Caribbean Islands. Its purpose is to stop the State from removing content as a censorship practice and to have intermediaries align their policies and practices with human rights standards.
  • “Electoral transparency: technology, safety, and regulation for speech in the electoral process” by the Karisma Foundation will focus on political parties, civil society and the media to make electoral processes in Colombia more transparent and safe. The project is based on an approach that acknowledges the importance of human rights and the responsible use of technology.
  • “Filling the gap in digital rights for vulnerable populations in Peru” by Hiperderecho will identify and create collaboration strategies to help reduce online gender violence.
  • “Incorporating safeguards, due process, and human rights standards into the use of biometric technologies for mass surveillance in Brazil” by InternetLab aims to discuss and qualify public safety policies by involving the legal community in the debate regarding due process and human rights standards in mass surveillance practices with biometric technologies in Brazil.
  • “Building capacities based on a multi-stakeholder perspective in the internet ecosystem in Central America” by IPANDETEC will foster conversations about the intersection of human rights and technology in order to improve public policies and legislation related to privacy and the freedom of expression.
  • “Access to justice for women surviving digital violence in Mexico”. With this project, Luchadoras will create tools to fight online violence in order to support women, policy makers, authorities, and decision makers.
  • “Strategic litigation for digital rights in Latin America”, coordinated by R3D: Red en Defensa de los Derechos Digitales and the Center for Studies on Freedom of Expression (CELE), aims to generate favorable precedents for defending digital rights in Latin America through litigation in national courts and international bodies.
  • “Strengthening protection of Personal Data in Paraguay among civil society: a multifaceted strategy” by TEDIC will incubate strategic litigation cases on the national and regional levels. The project will involve universities, capacity building, and ultimately foster public policies to support a strong digital rights system in Paraguay.

To learn more about Indela, our first Open Call, and the selected projects, visit our website, or follow us on Facebook and Twitter.
