Uruguay is poised to develop a “Facial Identification” database for public safety purposes under the Ministry of the Interior. This system was approved through the National Budget Act acting as an “omnibus law,” thus preventing proper discussion of the issue given the tight deadlines for approval of this type of law.
Development of this database will be the responsibility of the Ministry of the Interior, using the database currently under the control of the National Directorate of Civil Identification, the organization in charge of issuing identification cards. The database will include facial images of adults, first and last names, sex, date of birth, nationality, and identification card number, as well as issue and expiration dates. The Ministry of the Interior has already purchased automated facial recognition software and currently operates a system of 8,433 cameras distributed across the country’s 19 departments, in addition to private surveillance systems. The national government has admitted that the intended use of this facial identification database is automated surveillance using facial recognition algorithms.
Particularly concerning is the broad discretion given to the Ministry of the Interior as to the possible uses of this facial identification database, since it covers any type of use for public safety purposes falling under the missions of the Organic Police Law. The concept of “public safety” is so broad that it fails to define any limits on public authorities’ use of personal data.
What could go wrong?
Several recent studies warn that most commercial facial recognition systems exhibit significant bias and that the technology is still immature. Biased facial recognition technology is particularly problematic for uses related to public safety because errors could lead to false accusations and unjustified arrests.
But let’s suppose that the facial recognition algorithms work correctly, and the database is managed carefully from a technical point of view by the Ministry of the Interior. That would mean state surveillance systems could identify each individual perfectly. In that case, the question is: do we really want to go there?
The use of this technology entails great risks: it may be used to find and arrest protesters or protest organizers, or it may be used to track people remotely without their knowledge, among other concerning uses. In addition, living in a surveillance society affects people’s privacy and can also affect freedom of expression, movement and assembly, in ways we do not yet suspect. How will facial identification affect the behavior of Uruguayans? Has the impact of the possible social consequences of using biometrics in the public space been analyzed? Is it necessary and proportional?
In Uruguay the topic was included in a Budget Act (omnibus law) with no public discussion, but in other countries legislators have proposed or even approved laws prohibiting the use of facial recognition by the government to surveil its citizens, including prohibiting the use of other biometric technologies such as voice recognition, gait recognition and recognition of other immutable physical characteristics. Several organizations working on human rights and technology issues in Latin America have highlighted problematic cases related to the use of facial recognition and have warned about the risks that this technology represents for the population.
In addition, it is important to emphasize that international human rights organizations have warned about the potential risks of abuse and recommend that countries regulate facial recognition by law, analyzing in detail the scope of its use and its necessity and proportionality. Along these lines, in 2020 renowned tech companies decided to place moratoriums on offering their facial recognition solutions to governments, requesting that such use be regulated through parliamentary procedure.
Civil society’s warning
On October 13, 2020, these two articles were approved by the Chamber of Representatives without debate of any kind. When the Budget Act passed to the Chamber of Senators for consideration (during October and November), the DATYSOC team warned about the potential dangers of using automated facial identification for public safety purposes and about the excessive discretion granted to the Ministry of the Interior, managing to place the issue on the media’s agenda. Together with over 20 organizations from Uruguay and the region, we also sent a letter to the Chamber of Senators of Uruguay asking that these articles be removed from the draft Budget Act.
Based on these warnings from civil society, several senators took positions in favor of removing these articles or requested the inclusion of a requirement for a prior court order to authorize the use of facial identification data for public safety purposes. Unfortunately, given the limited discussion time a Budget Act allows, no agreement was reached. These articles, which amounted to a “carte blanche” for the Ministry of the Interior, were approved without amendment by both chambers, preventing the in-depth parliamentary debate the issue requires.
Having exhausted the options to influence the parliamentary discussion process surrounding the Budget Act and seeking possible pathways to avoid greater harm, we at DATYSOC have decided to press for the inclusion of this issue in the 5th Open Government Action Plan 2021-2025. We are seeking a commitment from the Ministry of the Interior that allows, at minimum, the possibility of an informed debate with the participation of the many interested parties prior to its implementation.
Furthermore, we are closely analyzing the impact of these measures on human rights, seeking the greatest possible transparency in the process and its implementation, to keep the population informed.
Learn more about the issue
Selected Projects 2020
Digital and physical spaces are increasingly connected. Political and social tensions, and the public’s relationship with the state and its use of technology, are raising new and complex challenges to digital rights. The ongoing pandemic, and related state responses, are creating further cause for concern: throughout the region, we are seeing widespread misuse of personal data, limits on expression, a lack of information and knowledge reaching vulnerable communities, and many other alarming developments.
To support the advancement of digital rights in the region, Indela opened its second open call in 2020. We received 138 proposals from 15 Latin American countries.
Today, the Indela team is pleased to announce the six projects selected for its second funding cycle. We are very proud to support these innovative initiatives, which will work on free and fair copyright reform, reducing online gender-based violence, localizing public data protection policies, and user-centric cybersecurity laws, among other issues.
These six projects will receive funding for 12 to 18 months, as well as customized support to strengthen the impact of their work.
The final selections from Indela’s 2020 open call are as follows:
- “REMIX: discussing copyright and the Internet,” by Agência Lema and InternetLab, will foster a public conversation about copyright in Brazil, and the need for progressive reforms.
- “Supporting Victims of Online Gendered Violence” by the Cultivando Género Civil Association, will support women and girls in Aguascalientes, Mexico, who have been targeted by digital violence, to learn about the legal options available to them, and make informed decisions in exercising their rights.
- “DATYSOC: towards a Comprehensive Digital Rights Agenda in Uruguay,” by DATA Uruguay, will strengthen Uruguay’s legislative digital rights agenda by advocating for public interest copyright regulation and internet intermediary liability policies.
- “Multicultural digital rights frameworks for indigenous and afro-descendent communities in Bolivia: comparative analysis and public policy advocacy,” by Asociación Aguayo and Fundación InternetBolivia.org, will work to develop contextualized regulatory frameworks for internet access and personal data protection in selected Bolivian municipalities.
- “A Multi-sector Initiative for Information Security and Fundamental Rights,” by Vía Libre Foundation, is a collaboration between public- and private-sector actors to develop policies that safeguard digital assets (including personal data and critical infrastructure) in Argentina.
- “Building bridges between Latin America’s digital rights and consumer defense communities,” by the Brazilian Institute for Consumer Defense (IDEC), will coordinate the digital rights work of consumer defense advocates with the strategies of the region’s digital rights community. In particular, the project will focus on personal data protection policies and their enforcement.
Congratulations to the organizations selected in Indela’s 2020 Open Call!
Ramiro Alvarez Ugarte
The privacy movement has always waged battle in difficult terrain, marked by the objective willingness of citizens to cede their data in exchange for benefits that they perceive as useful. The COVID-19 pandemic poses a new challenge that must be viewed as part of this long history in order to be effectively understood, and I believe that the challenge has never been so great.
Let me begin by presenting the adversary in the best possible light. A massively adopted surveillance technology already exists in society, on top of which an additional layer is being proposed for deployment in exchange for two concrete benefits: slowing the spread of the virus and lifting quarantine measures more efficiently. The promised benefits are significant and should not be quickly cast aside. Fewer deaths and less sustained economic damage appear as desirable objectives in the context of a rather frightening situation that has literally all of us locked up in our homes.
The new layer of surveillance on offer could take different forms. In China, it has manifested as a digital passport accessible via a popular e-wallet system that classifies people according to nebulous criteria. For the West, large companies are promising solutions that are more respectful of citizens’ privacy. Meanwhile, several governments have looked for some kind of technological solution to the problem. All these scenarios have their disadvantages: the ones that are specific to Latin America have already been pointed out by local organizations. In this brief space, I would like to draw attention to a structural dimension that I believe should guide us in reacting to this challenge. To do so, I’ll employ the old concept of layers, from the “internet world.”
Indeed, I believe that the best way to approach the newly proposed surveillance is to understand it as “one more layer” that would be deployed over a series of underlying layers. The image is useful because the underlying layers largely define how the top layer operates; they determine what the top one can achieve and what it asks of us in return. That is to say, the new layer of surveillance is deployed on top of an existing infrastructure, on top of specific socioeconomic situations, and, crucially, on top of the operating patterns of democratic institutions, both in terms of decision-making and mechanisms for accountability.
It is this layer of “democratic governance” that I want to focus on, seeing as I sense that the new layer of surveillance being offered to us is more or less inevitable, in part because it already exists: global users of Google Maps already allow access to their location data in return for a richer user experience. Who wouldn’t allow access to their location data in exchange for their neighbors’ survival? This focus asks us, therefore, to make a tactical pivot in order to reduce the damage or influence the decision-making process that would lead (or not lead) to a new layer involving sensitive data and the manifest intention to make it “shareable.” By doing so, we enable other people and/or health authorities to more closely monitor the population during times of quarantine.
The first step in this direction requires assuming that surveillance technologies indeed operate in one way or another in accordance with the function of the underlying layers. Thus, for example, an invasive system of mass surveillance of communications may be questionable in and of itself, but its consequences will simply be different according to whether it’s implemented by a dictatorship or by a democracy. Similarly, a CCTV system can have different consequences if the authority in charge of it is, for instance, accountable on a regular basis to a legislative oversight committee, versus if it has absolute discretion to expand, use, and enhance that system.
The second step is, then, to analyze and study the aspects of democratic governance that will determine the function of the new layer of surveillance and take direct actions specific to them.
• The personal data infrastructure. What kind of protection exists in our country? Is it effective? Is the current legislation up to date? How do the bodies responsible for protection and enforcement work? In many countries, the answers to these questions are disappointing, though there is often special protection for “sensitive” data. Another issue that needs to be examined is the existing mechanisms for accountability. Could the judicial branch, for example, exercise effective control over contagion-tracking systems? What about the legislative branch?
• The reach of the exception. It is fundamental that we assess the extent to which our institutions are capable of creating policies of “exception” and respecting that exceptional character. In my country, for example, rules for exceptional situations have often been created and then normalized. Can we prevent that from happening? On the other hand, it is also important to pay attention to the various instruments that can be used to define the limits of the state of emergency (time limits/expirations, objective criteria such as numbers of cases, etc.). Therefore, it is essential to evaluate the current constitutional mechanisms that allow the state of emergency to be legally “produced” and the controls that regulate these mechanisms.
• Science. Another possible focus is to partner with epidemiologists, who know that an app could never be the sole tool used to manage a pandemic, a complex equation of which contact tracing is only a small part. For example, everything indicates that without a substantial increase in a society’s testing capacity, a tracking application could yield erroneous results (false positives and negatives). This is a typical tactical approach: focus on the underlying vulnerability of a proposal that is simpler and cheaper, and which surfaces as an alternative intervention when a more complex and effective proposal is too costly.
• Constitutionality. There is no reason why, upon being confronted with the deployment of technological tools, the least possible invasion of privacy cannot be demanded. In this case, the constitutional arguments that demand there should be limitations on measures that restrict rights, without sacrificing their capacity to meet their objective, should be helpful. The analysis of these limitations should take into account the amount of information collected, the guarantees established, the voluntary nature of monitoring programs, etc. Local constitutional law should be the basis of these indirect strategies, even more so, I believe, than international human rights standards.
• Transparency. All actions to be taken by governments must be transparent; they should also be designed and implemented for the benefit of citizens, with sufficient political, legal, and social controls. It is likewise essential to consider the ways in which an additional layer of control can be set up, quickly and at no additional cost, and placed on top of the new layer of surveillance, e.g. via workplaces or traffic on public roads, where public authorities or private entities oversee the right of entry. Thus, if a digital passport system is established, what prevents a supermarket from requiring the credential to be displayed before entering? This possibility should lead us to a second and vital issue: the multiple discriminatory impacts that such technologies could have.
Those who have been working on digital rights in Latin America for years will see that what has been said up to this point looks like a hasty repackaging of old challenges. But this is because these challenges are persistent: despite advocacy efforts, increased public awareness, legal changes, and occasional scandals, invasive technologies seem to be advancing at a more rapid pace. COVID-19 presents an even more difficult scenario in which existing models intrude even deeper in exchange for concrete benefits. Never was the offer of those asking us to exchange our privacy for some possible benefit so tempting. It would be unacceptable political innocence not to see the issue from the point of view of the political leaders currently in office, who face an unexpected situation with an impact that, while still uncertain, will be massive. And it would be inept of us not to adjust tactics (and strategies) to rise to the new challenge.
Ramiro Alvarez Ugarte is an associate professor of constitutional law at Universidad de Buenos Aires and a professor of law and social change at Universidad de Palermo (Buenos Aires). He is currently pursuing a JSD at Columbia Law School. Previously, he worked as a human rights lawyer at the Inter-American Commission on Human Rights (2009-2011) and at Asociación por los Derechos Civiles in Argentina (2011-2014), where he developed its privacy agenda. He holds an LLM from Columbia Law School (2009), where he was both a Harlan Fiske Stone Scholar and Fulbright Visiting Scholar.
It is unquestionable that governments need information to respond to the pandemic, particularly to design evidence-based measures to control the contagion and save lives. Likewise, it is undeniable that the use of technology to collect that information has major implications for individuals’ human rights. Any argument otherwise would overlook the requirement for governments to demonstrate that limitations imposed on our rights are legal, necessary and proportionate, especially restrictions on free movement, privacy, protection of personal data and freedom of expression.
But what does that requirement mean and what does it involve? And, more importantly, how do we make sure that during and after the pandemic the restrictions on our rights are limited only to the purpose for which they were adopted? How do we protect ourselves against any abuse and violation of our human rights?
The deployment of technology-based initiatives to address public problems is not new and has been a subject of controversy in the past. In the context of the pandemic, the difference lies in the invisibility of, and lack of knowledge about, a virus whose social, economic and health impact demands immediate and focused attention. Although there is a general understanding that technological means must go hand-in-hand with a variety of non-technological measures, the need to make the virus visible and preventable has made technology seem like the only means to control it.
Some of the technological responses range from “chatbots,” informative mobile apps that provide diagnoses based on user-identified symptoms, body temperature readers and symptom monitoring, to mechanisms that control and track movement, contact and isolation through mobile phone services, georeferencing and the use of drones.
Each of the technological measures and tools designed to control and prevent the spread of the virus entails a different degree of privacy invasion. They record access, activities, interactions, symptoms and illnesses, and their technological and digital processing requires third-party access to personal information – information generated through the collection, storage, transmission, use, study and management of data. It is therefore not difficult to imagine a scenario in which the solution to the pandemic’s impact is built on information we provide, waiving our privacy and data protection. However, the formula is much more complex, especially if we start by acknowledging that the restrictions are not confined to the State accessing our information; we must also consider the active participation of the private sector. We are, then, facing privacy invasions in the name of public health. Although a health emergency can be invoked to impose restrictions on human rights, it will never be a legitimate justification when applied in isolation. In other words, combating the health crisis by accessing and using our individual, personal and collective information imposes an inadmissible surveillance system, unless the measures adopted comply with States’ mandatory observance of a set of conventional and legal protections – specifically, those that safeguard privacy against arbitrary or abusive invasion by public authorities or non-public entities.
This means that States must strictly comply with the legitimacy test when adopting surveillance practices, especially those intended to track and contain the spread of a virus. Any government that opts for technological responses based on information generated by individuals’ activities must ensure that (a) the legally stipulated restriction is clear and precise, and that there is a legal framework protecting the affected rights, as well as effective legal remedies against abuses and violations by public and private parties – legality; (b) the restrictive measure is suitable and effective for achieving the intended purposes, demonstrating that it is the only available method – necessity; and (c) there are no other available measures and methods that would be less detrimental to the right to privacy and other rights, and there are strict limits on the duration of the invasive measure – proportionality.
In Latin America, the legitimacy test for limitations over the right to privacy has never been more relevant, primarily due to the existing technological capacity prior to the pandemic and its abuse through targeted and mass surveillance practices, extensively documented in the last six years.
Argentina, Bolivia, Chile, Colombia, Ecuador, Guatemala, Mexico, Paraguay, and Peru are some of the countries in which governments have decided to adopt surveillance technology measures and collection methods to respond to the health crisis. There is still no evidence supporting the effectiveness of these measures, hence, public scrutiny alongside legal and judicial controls are critical to contain arbitrariness and abuses from governments and non-public entities.
Existing data protection frameworks in the region seem insufficient for the challenges posed by the pandemic. Even so, imperfect as they are, they remain invaluable for imposing controls on both governments and private entities. Although their development and implementation in Latin America are asymmetrical and inconsistent – and in some cases non-existent – these special regulations may provide legal avenues to demand justification of the limits imposed on individuals’ power to know what information third parties hold about them, to limit the duration and purpose of the processing of their information, and to decide whether to provide their most sensitive data – that is, information that may reveal their health, race or sexual preference, among other attributes.
Although these legal frameworks include exceptions on the basis of public health, the principles of temporality, necessity and legality remain applicable. Above all, they are essential for knowing the destination, purpose and use of the information collected and processed, as well as for preventing these practices from becoming permanent once the pandemic is largely under control.
Therefore, we must be clearly informed on whether the use of technological tools and measures contributes to effectively contain the virus, whether these tools request or access more information than they claim to need to control the spread of the virus, and what legal framework or remedy we can resort to if our data is shared with government bodies or non-public entities not involved in containing the pandemic, just to name a few practical scenarios.
In Latin America and in other regions of the world, the legitimacy test and data protection legal frameworks have already prevented the invasive, unnecessary and disproportionate use of technology to control the health crisis:
- In Brazil, Provisional Measure (Medida Provisória – MP) 954/2020 was suspended by the Federal Supreme Court on May 7, 2020, to prevent irreparable harm to the privacy of individuals. Measure 954/2020 had been issued by the Executive Branch to order telecommunications companies to provide mobile users’ information to the Brazilian Institute of Geography and Statistics (Instituto Brasileiro de Geografia e Estatística – IBGE), which would prepare statistics on the pandemic. Applying a necessity, suitability and proportionality analysis, a judge of the Supreme Court ruled that, without underestimating the severity of the health crisis and the need to develop public policies based on reliable data, the constitutional rights of individuals must not be violated; it was therefore necessary to suspend the measure, as it neither provided nor ensured suitable mechanisms to protect individuals’ data against unauthorized access or undue use of their information.
- In Chile, a short-term or special temporary law was proposed to safeguard the data of individuals whose state of health is exposed and subject to processing by a variety of parties during the pandemic. The Transparency Council proposed this initiative in April, recognizing that health related data are not only constitutionally protected but also have a special protection due to the information they reveal. Something similar happened in the United Kingdom, where the Parliamentary Joint Committee on Human Rights proposed special legislation focused on precisely regulating the purpose and limits for obtaining and processing information collected through a contact monitoring app, requiring the government to delete the information collected after the end of the health crisis, and imposing measures against abuses by the government and third parties.
- In India, the Kerala High Court admitted three petitions against the mandatory use of a contact monitoring app and the imposition of criminal sanctions for not using it. In addition, on April 24, 2020, the Kerala High Court issued an order requiring that the confidentiality of data of patients at risk of coronavirus, collected through a digital system operated by the government of Kerala and Sprinklr Inc., be safeguarded. It also prohibited the company from committing any act that would compromise the confidentiality of the data.
- In Slovakia, the Constitutional Court suspended the special legislation that allowed authorities to access user data collected by telecommunications companies to monitor individuals infected with coronavirus. On May 13, 2020, the Court determined that the legislation was ambiguous and the purposes of the processing were not sufficiently clear; the legislation would allow processing of personal data without clear intentions and lacked the necessary safeguards against abuse of the information collected and processed.
The challenge is not simple, but governments’ obligations are very clear: our rights to privacy and protection of personal data remain applicable during the health crisis. The burden of demonstrating that our rights may be subject to limitation must not be driven by technology-centered approaches; it is the exercise of our human rights that any benefits must ultimately enable. Rather, States must observe their human rights obligations of transparency and accountability, as well as their duty to prove the legitimacy of the measures; allowing their abuse makes them arbitrary.
Bookending 2019 were two major milestones for digital rights and civil society in Latin America. At the start of the year, we saw the #MeTooMX movement ignite in Mexico, amplifying the voices of sexual violence survivors nationwide. Later in the year, between October and December, dozens of protests erupted across Latin America, especially in the Andean region, in response to deep-seated, longstanding socio-economic and political issues. Countries such as Bolivia, Colombia, Chile, Ecuador and Peru became the focal point of diverse, vibrant, decentralized movements that resulted in several state leaders leaving power.
While the full impact of 2019’s social justice uprisings is still being understood, we can already be certain that there is a fundamental relationship between digital rights and civic participation. In other words, digital and physical spaces are increasingly interconnected. For example, the #MeTooMX movement demonstrated the transformative power that citizen participation can have online, but it also illustrated the threats it can pose. While the movement brought much-needed attention to gender equality, it also led to increased attacks on and harassment of women both online and off. Similarly, the protests throughout the Andes undoubtedly made important progress for social, political and economic rights, yet they were met with online censorship and internet shutdowns, and an increase in intrusive state surveillance without adequate justification or oversight. As several organizations from the region expressed in a public statement of December 20, 2019, there is wide concern about the “global trend of persecuting people who defend human rights using digital media and platforms, including those who conduct research and provide safety training to protect and promote these rights.”
Unfortunately, not just in the Andes, but throughout Latin America, social justice protests are increasingly met by disproportionate state surveillance. Amidst the region’s crises of legitimacy and weakening institutions, many governments are now spying on their own citizens at unprecedented levels. For example, we are seeing the broad use of facial recognition and other biometric surveillance technologies, the use of targeted spyware deployed against activists, and disproportionate access to personal records – all of which are contrary to international human rights standards. As a result, we are seeing now more than ever that the connection between political and social tensions and the State’s use of information technology are central to civil society’s agenda. Just as it has never been so important to understand the digital environment to understand what’s going on in the streets, it has never been so important to understand what’s going on in the streets, to understand the digital environment.
The state’s use of information technology to consolidate its power would not, of course, be possible without the private sector, which is, in its own right, interested in amassing profit. This convergence of commercial and political interests has led digital rights organizations to develop sophisticated and comprehensive agendas centered on the principles of social justice and civic empowerment. To this end, we see new campaigns emerging on net neutrality, content moderation, the use of AI by intelligence agencies, electronic voting, and technical control over copyright issues.
2019 has shown us that, given rapid ongoing developments in information technology and ever-evolving socio-political situations throughout Latin America, it would be naïve to say there is a fixed digital rights agenda for the year ahead. But to ensure robust responses to whatever threats emerge online in 2020, Indela is launching its second open call to support the organizations fighting to protect digital rights in the region. Our aim at Indela is to support organizations in building capacity and resilience, so that they can rise to meet the demands of protecting empowered, informed, participatory – and connected – spaces online. To talk about digital rights today is, now more than ever, to talk about human rights.
To learn more about Indela, visit our website and follow us on Twitter.
Interview with Carlos Cortés: You cannot talk about rights in the physical space without digital rights
Carlos Cortés is the founder of Linterna Verde, a non-profit think tank, and a consultant on internet and society issues. Carlos was Twitter's Public Policy manager for Latin America. He has advised international cooperation organizations on freedom of expression and internet policy. He has a law degree from Los Andes University (Colombia) and a Master's in Communications and Media Governance from the London School of Economics. He is a researcher on internet policy issues at the Center for Studies on Freedom of Expression of Palermo University, Argentina. He currently directs the video blog La Mesa de Centro.
Why are digital rights important?
When we talk about digital rights, we are not referring to an isolated exercise limited to the online environment. Nowadays, the exercise of most rights in physical (or analog) spaces depends on, and is fed by, the possibilities for development in the online context.
In other words: without digital rights there are no analog rights. Think, for example, of the right to protest, freedom of expression, privacy, or political participation. Without digital guarantees, we can hardly speak of the existence of active citizens.
From your perspective, what are the main challenges facing the digital rights ecosystem in Latin America?
There are as many challenges as there are issues, but if I had to frame it within the regional ecosystem, the most important challenge arises from the tension between the role expected of the State and distrust of the State. For example: we are concerned about the accumulation of data by private intermediaries. Should we then give surveillance tools to governments that have also abused their powers of inspection and control?
In the same way, we face the question of which problems we must resolve through regulation and which should be channeled through private or self-regulatory solutions.
Why is the digital rights context in Latin America unique?
Unlike other regions, and as is usual in our part of the world, Latin America tries to build all the floors of the house simultaneously, and often we start one without finishing the other.
We face questions about digital rights while we still have enormous challenges in terms of infrastructure, connectivity, and digital literacy. Think, for example, of net neutrality. While we were still trying to guarantee this public policy principle for fixed-line connections, the mobile internet began to develop, and with it the consequent problem of "zero rating". Our context is unique because we coexist with, and promote change amid, deep contrasts.
The Initiative for Digital Rights in Latin America (Indela) is pleased to announce the eight projects selected in our first Open Call. Indela is a partnership strategically directed by Fundación Avina, Luminate, and Open Society Foundations, and supported by the Ford Foundation and the International Development Research Centre (IDRC).
We are very proud to support these innovative projects that will foster and protect digital rights in Latin America—such as freedom of expression, privacy, and access to knowledge— through public campaigns, impact work, applied research, and litigation.
Eight projects were selected out of 163 proposals from 20 Latin American countries. These projects will receive funding for 12 to 18 months (with the possibility of extension), as well as specialized support to strengthen capacities associated with their projects.
Here are the selected projects and organizations:
- “Defending freedom of expression on the internet: online transparency and due process in view of censorship by content removal” by Article 19 Mexico and Central America is focused on Mexico, Central America, and the Caribbean Islands. Its purpose is to stop the State from removing content as a censorship practice and to have intermediaries link their policies and practices to human rights standards.
- “Electoral transparency: technology, safety, and regulation for speech in the electoral process” by the Karisma Foundation will focus on political parties, civil society and the media to make electoral processes in Colombia more transparent and safe. The project is based on an approach that acknowledges the importance of human rights and the responsible use of technology.
- “Filling the gap in digital rights for vulnerable populations in Peru” by Hiperderecho will identify and create collaboration strategies to help reduce online gender violence.
- “Incorporating safeguards, due process, and human rights standards into the use of biometric technologies for mass surveillance in Brazil” by InternetLab aims to discuss and qualify public safety policies by involving the legal community in the debate regarding due process and human rights standards in mass surveillance practices with biometric technologies in Brazil.
- “Building capacities based on a multi-stakeholder perspective in the internet ecosystem in Central America” by IPANDETEC will foster conversations about the intersection of human rights and technology in order to improve public policies and legislation related to privacy and the freedom of expression.
- “Access to justice for women surviving digital violence in Mexico”. With this project, Luchadoras will create tools to fight online violence in order to support women, policy makers, authorities, and decision makers.
- “Strategic litigation for digital rights in Latin America”, coordinated by R3D: Red en Defensa de los Derechos Digitales and the Center for Studies on Freedom of Expression (CELE), aims to generate favorable precedents for defending digital rights in Latin America through litigation in national courts and international bodies.
- “Strengthening protection of Personal Data in Paraguay among civil society: a multifaceted strategy” by TEDIC will incubate strategic litigation cases on the national and regional levels. The project will involve universities, capacity building, and ultimately foster public policies to support a strong digital rights system in Paraguay.