Interview with Rafael Zanatta
In October 2022, after months of campaigning, Brazilians cast their votes in a run-off election for the next president. Thanks to the diligence of activists, like Rafael Zanatta from Data Privacy Brasil, the country now has legal avenues and representatives of collective rights when issues related to digital influence arise during an election. Data Privacy Brasil works to ensure that civil society, technologists and the government are working together to consider the potential gains and losses associated with big tech. In this conversation, Rafael Zanatta and Varoon Bashyakarla discuss how popular media often misrepresents techno-authoritarianism, the legal reaction to data breaches in Brazil and why data protection is for everyone.
About the Speaker:
Rafael Zanatta is the executive director of Data Privacy Brasil Research Association, a civil society organization focused on data protection and fundamental rights based in São Paulo. Rafael holds an LLM in Law & Political Economy from the University of Turin and a Master of Science from the University of São Paulo Faculty of Law. He is a PhD candidate at the University of São Paulo, an alumnus of the Institute for Information Law (IViR) at the University of Amsterdam and a former research fellow of the Institute for the Cooperative Digital Economy at The New School. Connect with Rafael on Twitter.
Please note that this interview has been edited for clarity and brevity.
We are here chatting with Rafael Zanatta on Friday October 28th, 2022, which is notable because the second round of the Brazilian presidential election is being held in two days. This is the second round because in the first round of voting, no candidate won an outright majority, thus triggering a runoff.
The two candidates are Jair Bolsonaro – the current president and a far right former army captain – and Luiz Inácio Lula da Silva, who goes by Lula and was Brazil's president from 2003 to 2010. He was a former union leader and Brazil's first working class president. Amidst the war in Ukraine and inflation around the world, Brazil, like many other countries, has seen rising prices for food and fuel. The economy seems to be quite a big issue for a lot of voters.
During Lula’s presidency, the economy was relatively strong, and he left office with an 83% approval rating. Afterward, however, he was embroiled in a corruption scandal involving money laundering, bribery and a state-owned oil company, though he was later acquitted. He campaigned in the 2018 presidential election, but he had not yet been acquitted, and Bolsonaro strode to victory.
Bolsonaro is an ardent admirer of former US president Donald Trump, but critics say that he has mismanaged the pandemic. He has a terrible environmental record, and deforestation in the Amazon has accelerated during his presidency. He has denied that Brazil has a special role to play in averting climate change and his critics say that he has undermined state institutions to the benefit of the private sector, which has backed him from the beginning. Supporters, on the other hand, say that he is a man of faith and he's been good for business.
Each candidate has quite a firm base, but much like we're seeing in a lot of other countries with first-past-the-post systems, this election seems like it's going to be decided by a small fraction of undecided voters, and political tensions are high.
Rafael, is there anything you want to add to this quick snapshot?
I think you did a great job. I would add that Bolsonaro is a working class man and also connects with the lower classes in Brazil. So his populist style is different from Trump's, in the sense that Trump at least appeared to be really rich. Bolsonaro was never a rich man. He was also always involved with the middle class and with militias, which are those parallel security groups in Brazil.
I didn't realize that Bolsonaro also has this working class appeal.
Let's shift from the election itself to your work: you are the director of Data Privacy Brasil. Tell us the back-story of the organization.
So our NGO is quite new. We were created two years ago, in the beginning of the pandemic. We are a group of activists that were already involved with many struggles for digital rights in Brazil and who have worked in different organizations before. I worked for the Brazilian Institute of Consumer Defense, which is a 35-year-old NGO.
I then worked for Internet Lab. I also worked for Getulio Vargas Foundation as a researcher. My colleague Bruno also worked for the research group on digital rights and access to information at the University of São Paulo. He also worked for the Brazilian Internet Steering Committee. And other folks came from Article 19 and the Alana Institute. So we had already known each other for a long time because we had many struggles together in the process of approving Marco Civil da Internet, the internet bill of rights, and the campaigns for the enactment of the data protection law, which was like a nine-year process to enact data protection legislation.
We realized that building a strong data protection culture based on social justice and asymmetries of power was just the beginning of a process that is much more complex and cultural. And one that involves making sure that this does not become something for the elites. Data protection should not be for white and upper class folks in Brazil - it should be for everyone. So we started the NGO believing that we could have meaningful partnerships with the public defenders’ offices in Brazil, and with other NGOs that are more grassroots. We always thought that we could bridge this technical knowledge on digital rights while also reshaping our own visions with the engagement of other partners.
When we deal with public defenders who are doing such great work in the periphery of Rio de Janeiro and we can engage with the community and know their perceptions on digital rights, it's also an opportunity for us to rethink what we believe is important. We try to take advantage of these engagement opportunities to also challenge our own beliefs.
Most of our projects really try to expand the conversation on digital rights to include the social justice vision. We want to go to other places and meet other people and make sure that those strong social movements in Brazil can grab these legal frameworks and this knowledge of how to do activism and take it to their communities. We hope that one day digital rights will be everywhere.
I think that privacy shouldn't be something that's only available to those who can afford it. I also don't want to live in a world in which privacy is only available to rich people. So I think grounding all of this in a spirit of social justice is really nice.
Ten years ago, when I was a data protection researcher, we could count on our hands the number of people that identified with data protection. But what happened in Brazil is truly amazing. Today you can count thousands and thousands of privacy professionals that didn't exist five years ago, who are getting really good salaries at tech firms.
There is strong competition, especially in the financial sector, because Brazil has strong experience in open finance and open banking, and the Brazilian Central Bank is doing a lot of work creating clear incentives for innovation in data protection in those sectors. So you see something fascinating: people being trained and doing great work, competing for those positions in the private sector, becoming leaders in their fields and trying to implement privacy-by-design solutions.
Of course there are limitations, especially at big tech firms like Meta and others, because probably the big decisions will be made in San Francisco. But in other markets, especially digital ID markets or fin techs or credit scoring, where you have a lot of Brazilian tech firms, is that those Brazilian privacy experts are really making the decisions and they are making the difference. It's a process of market building in terms of data protection experts and privacy technology officers. Which didn't exist before and I think that can have really profound impacts on society in the next years.
Data Privacy Brasil has three main areas of research: one on government and regulation, one focused on asymmetries of power, and a third on digital markets and platforms. When we first formulated the Influence Industry Project at Tactical Tech back in 2017, one of the questions we asked ourselves was why this datafication of politics around the world really matters.
One of the democratic consequences of datafication that we often came back to was how this mass collection of personal data—whether by the state or by private companies—really exacerbates existing power asymmetries. And that's quite similar to the framing of Data Privacy Brazil’s research on elections, disinformation and violations of data, with the goal of ending abusive tech practices and the misuse of data that advance undemocratic norms.
Can you tell us a little bit about what happened in this capacity in the most recent general elections back in 2018?
In 2018, I was the leader of the digital rights program at IDEC, the Brazilian Institute of Consumer Defense. We already had the coalition Direitos na Rede, and we were doing a lot of work on elections and digital rights. But there was a failure in the analysis and the diagnosis: I think the discussion centered on the US and Cambridge Analytica, and there was too much focus on Facebook.
At that time, we didn't yet have a data protection authority and we didn't have a data protection law enacted. And we were completely missing WhatsApp and TikTok and Huawei and other services. When Patrícia Campos Mello, a famous journalist, published the initial reports on a whole group of firms using WhatsApp to spread disinformation and content produced by the Bolsonaro family, it was too late; the justice system was not prepared to deal with a case that was mostly about getting personal data from sources illegally.
You have someone inside a telecommunications firm or someone inside a bank who can sell you a database, and then you can use that database to make some basic categories of types of people, or identify someone inside a WhatsApp group who can spread the messages. This kind of really basic profiling—who are the message senders inside WhatsApp—was organically involved with the campaign.
But there was no legal remedy and no strategy from civil society to mount a counter-movement. So that was in my head in 2020 when I began the organization. When we were focusing on what we could do for this election in 2022, we first partnered with Internet Lab and did a lot of work building legal knowledge on the intersection between elections and data protection. So we set up a study group. We invited lawyers from the field of electoral law. We engaged with some justices of the electoral court and their staff. We did a report together, and then we began changing the resolutions of the electoral court so there could be a strong dialogue with the data protection legislation.
Through those resolutions, the electoral court created a lot of norms prohibiting the automated sending of messages through WhatsApp. It also helped define a set of obligations for political parties and candidates to demonstrate that they had a legal basis for the processing of personal data, and obligations during the campaign so that a citizen is able to ask the data protection officer of a campaign, “Where did you get my data from? Who are you sending my data to?” The electoral court did brilliant work in listening to civil society, holding public hearings and setting norms on the intersection between elections and data protection.
It had a real effect on norm-making: resolutions focused on data protection and elections, with new rules about it. So what was left uncovered in legal terms in 2020 became covered in 2022. But there remained the problem of what to do if, during the elections, we got a report from a journalist saying he had observed an illegal use of data, with messages being sent on WhatsApp using that information.
So we did a partnership with a group of journalists and Transparency International to uncover those stories. First of all, we needed to deal with the lack of resources for journalists. So one part of the project was about getting the money and incentives for journalists to work on that subject so we could get evidence.
And the other part of the project, which I'm coordinating, is the legal one: once we have evidence that they are using your data illegally to send you messages on WhatsApp or target you on TikTok, we can quickly react through a legal action filed at the electoral court. It's not about being compensated but about removing the illicit act. So you can get an order from a judge saying that the firm needs to stop using the data and delete it, and that it will be fined if it uses the data again.
We filed a legal action against the governor of the state of São Paulo because he was using data from public policies to send illicit content about himself using a Google system – a Google pop-up on Android phones.
The second one was about the state of Paraná, which has a software program called Paraná Artificial Intelligence, through which citizens can get their driver's license, schedule visits to a public hospital, or pay taxes. When you want to authenticate your phone number, you receive an SMS from one specific number—5 2 8 3 7—which is the number the government of Paraná uses to confirm that you have an appointment.
The state-owned company Celepar hired a big telecommunications company called Algar to be responsible for sending the SMS messages. What happened was that a Bolsonaro supporter inside Algar [Algar Telecommunications] managed to get inside the system, change the admin's powers and create a new account. Once he created a new account, he was able to start sending messages to the entire database of the citizens of Paraná, starting almost at midnight, saying that if Bolsonaro did not win, citizens should invade the supreme court.
This happened two weeks ago. I started receiving calls at 6am from journalists and friends saying something really problematic is going on in the state of Paraná – they are using the government database to send illegal SMS messages.
We partnered with the federal prosecutor's office and started providing evidence to them for a legal case at the electoral court. And they discovered that indeed it was an employee—a radicalized employee—who had that access. This was an extremely big violation of the data protection law, because this is of course a data breach.
But it raised a big alarm, because once again the community was talking about WhatsApp, and then you have someone trying to influence everyone using SMS. For instance, when my mom received this message, she thought it was from the government of her state, because it came from the official number. But then she realized it was impossible. Why would the government send this type of message?
But maybe for those folks who are already radicalized, if you get this kind of message saying that something went wrong and you need to act, people will try to act. So they're trying to use these kinds of techniques to put the country in flames. And of course they will try to use personal data to do that. And we are trying to do our best to quickly stop that from happening.
What we need is a social structure in which we can deal with it quickly, to stop harms occurring and also make sure that those folks are liable and that this is something that will not be accepted by society.
One of the things you mentioned is that you felt that in 2018 there was too much focus on the US, too much focus on Cambridge Analytica, and we felt very similarly. There was such a mass influx of funding towards monitoring Facebook, yet there's a whole influence industry, as we've called it, beyond Facebook itself that's trying to leverage personal data for political purposes. And this story is interesting because it's a case of what we see time and time again: technologies circumventing the law.
To go back to this topic of the asymmetry of power, the datafication of politics really changes the ballgame when it comes to how we think about acquiring power and how we think about maintaining power, and when access to data is being abused as a means of maintaining power, suddenly all these democratic foundations are just inherently weakened.
So what happened in this case after it was found that a radicalized employee exploited their access to this data?
Based on the quick investigation that occurred the next day and the legal case at the electoral court, there was a court order to stop new SMS messages being sent from the number.
So they could start a data breach investigation and provide information on why it occurred. What happened then was that we partnered with the federal prosecutor's office and the coalition Direitos na Rede, and we published an open letter about the kinds of harms that occurred in this case. And we supported the federal prosecutor's office in filing a class action against Algar, because in our opinion it's also important to understand that this is a collective harm. This is destabilizing society, and if we are too attached to legal concepts based on concrete harm or financial losses, this will not work at all.
In Brazil we have a very strong tradition of class actions since the 80s. We have a very strong constitutional tradition of collective rights and the role of the federal prosecutor's office. And we argued to the federal prosecutor's office that this could be an important legal decision in terms of providing clear incentives for other firms to avoid this.
What is important in Brazil for this case is that the compensation goes to a fund managed by the federal government, which can be accessed to develop projects for social purposes. So imagine you have a fine of $100,000,000 against Algar. These resources can be used next year to develop campaigns on privacy and ethics within those companies. Or you can have other types of social uses of the resources, which can also be directed to society itself. It's a very interesting model of punishment, so to speak.
I often liken the way digital technologies threaten and even undermine our democracies to the way an oil company polluting the environment gets away with it: they internalize the gains and the cost is externalized onto the public. And if we just think about these problems in strictly legal ways, we will overlook this large collective loss.
There are some people who ask: what incentive do the people in power have to change these systems, when they themselves are perhaps the beneficiaries of how those systems work?
But this is a conversation about ethics as well. When the workers’ party lost the election in 2018, we had many meetings with the union leaders, the workers’ party and folks from the left, and they were like, “Should we also do what Bolsonaro is doing? Should we also invest a lot of money in micro-targeting and WhatsApp?”
And we were like, “No, you should work with us and improve laws to prevent that harm from happening and set up a social mechanism through which we can all observe and react quickly. But don't go on this race to the bottom.” And I'm glad that many leaders of the left movement had an ethical discussion about it. This is a profoundly ethical debate.
I think that's why these questions matter so much: we have elections to choose what kind of world we want to live in, and if we're going to run a campaign that's completely at odds with the sort of governance, ethical systems and social principles we share with one another, that's a problem.
I think something we are all perceiving this year, and one of the outcomes of the project, is that we thought we would find many more cases of illegal uses of data by the right, like big movements on WhatsApp or other software and platforms. But what is really occurring is a profound industry of digital influence that is mostly about selling a lifestyle, selling morality in terms of your deep values in life, like abortion, homosexuality and so on, which are always polemic, and people engage with that.
What we saw from the right wing was not so many cases of “let's get the databases from the telecommunications firms and just send messages.” What is really mainstream is this whole network of influencers in churches and right-wing think tanks using Huawei and TikTok extremely well. And they are really, really professional in technical terms. They can position the camera correctly. They have good sound behind it. And they have mastered the technique of engaging attention, and I think this has really been a game changer in this election.
Absolutely. There's been a shift from, “Let me get a celebrity endorsement” to “Let me recruit this eighteen-year-old micro-influencer who has a massive online audience.” I think these forms of influence are winning hearts and minds and are much softer, and in a sense trickier, for those of us who are trying to navigate this in a way that's protective of democratic principles.
Working on this topic can be very heavy, especially writing about how digital technologies are undermining our democratic goals and principles. But this example you just gave, of a punishment that is monetized into a fund that is then used to learn from or ameliorate the problem itself, makes me think about some of the successes, some of the ways in which this trend towards techno-authoritarianism in Brazil can be pushed back against. That is a term Data Privacy Brasil has defined very thoughtfully, not so much as an overnight switch to dictatorship, but as a slow, gradual loss of freedoms.
And on the topic of this radicalized government employee, it made me think about the decree Bolsonaro signed requiring telecoms to hand over the data of 226,000,000 Brazilians to the state’s statistical agency. But the supreme court ended up striking it down as unconstitutional.
And then something you just mentioned: class action lawsuits. There was one in 2018 that stopped the metro from conducting facial analysis on the emotions of passengers, and one earlier this year barring the use of facial recognition for public security purposes. I think that case cited the fact that there was absolutely no data protection assessment.
What other avenues or forms of resistance do you see as promising against this trend towards techno-authoritarianism in Brazil?
It's been almost three years since we started this project, and there has been a lot of learning along the way. One avenue that was extremely important for us, and had great consequences, was moving beyond federal data protection legislation towards a very strong discourse on constitutional law.
I know that this is almost impossible in the US, because you have a very rigid and old constitution with few amendments. And now you have a composition of the Supreme Court with mostly conservatives and traditionalists. But Brazil has a very European culture of constitutional law in terms of being inspired by the Portuguese constitution and the German constitution and also German legal theory, in which the constitution can be perceived as a living legal text and can be expanded because of technological changes.
So in this case of Cadastro Base do Cidadão, which was the federal system of interoperability of personal data, we partnered with the Brazilian Bar Association, which filed a constitutional case that we joined as amicus curiae, and we began producing research on some key concepts that could be used by the court. One of those concepts is the informational separation of powers, a German legal concept which holds that it is unconstitutional to strengthen the capacity of one specific unit of government to decide many things without due process and without slowing down to evaluate the purposes of the use of the data, whether it is truly needed, whether it is reasonable or fair to use, and whether there are other ways to avoid using that data because it is not needed.
This is interesting because it goes against the discourse of efficiency that Bolsonaro was trying to impose. Bolsonaro is really proud of the way that Brazil is digitizing the public sector. He is always claiming that Brazil is ranked in the top 5 of the Americas in terms of digitization processes.
So there's this very strong discourse on mixing and sharing all types of data on just one platform for the citizen, which might be great. Indeed, I have some examples from my own life, like how easy it was to get my COVID results or send my driver's license to another state. But obviously this is a problem when you have the capacity to merge such a large number of databases from public policies, which are very rich in terms of the types of information.
What's happening now with the citizen registration database?
This is the public policy that we were attacking, so to speak. The Supreme Court issued a good decision, drawing on the amicus curiae brief that we filed, because the governance system of the federal registry databases was closed. There was no participation from civil society and no possibility for civil society to propose public interest tests that one unit of the government could run to see whether it could fairly use the data of another unit of the government.
The Supreme Court recognized that the government cannot do this kind of data sharing and interoperability without democratic institutions. So this was very important because it was not a “my data, my privacy” thing; it was a constitutional discussion about institutions and procedures that would make the decision-making process more participatory, which I think is truly important in terms of fighting techno-authoritarianism.
We need institutions that have spaces for diversity of opinion, that might slow down some processes, and that might be able to impose some things that are boring and costly, because if you're only concerned with what is cheaper and more efficient, that is a path straight towards authoritarianism.
Because of course technologies have certain capabilities to improve efficiency and decision-making processes. I really like the philosophy of Professor Mireille Hildebrandt from the Netherlands, who says, “Democracy and the rule of law must be slow, it is a feature by design, and must be somehow boring and open to diversity of opinion.”
We need to be careful about this techno-solutionist discourse on the efficiencies of new technologies and how they might improve everything, because we also need counter-movements precisely there.
And we've grown so accustomed to the culture of efficiency that tech products and services give us.
And I think there is another issue we learned about that was interesting. We began an investigation into one use of OSINT by the government, involving Arpia Tech, a company founded by a former member of Brazil's intelligence agency. He was complaining that Brazil did not have a good solution for open source intelligence to fight cybercrime, and that all the solutions were sold by Italian and Israeli firms.
So he decided to create a firm and sell this technology, with which they can navigate the web and the deep web and cross databases from Reddit groups, IRC groups, Telegram groups, etc. And they can do profiling on potential threats in terms of illegal uses of databases, illegal selling of databases, and so on. But we were deeply concerned, because when the government said it wanted to use the technology, one of the elements it put in the public procurement was that it wanted an open source intelligence tool capable of accessing Telegram messages. And we were like, “What do you mean, you want to...”
And that's very invasive.
...And you want to have a software that can do profiling on researchers? And it was written in the procurement. So we supported five other NGOs to take this case to one of the federal courts that deals with public procurements. We entered into a big fight against the firm itself because we became the vocal leaders against it.
And then the CEO of the firm invited us to have a series of conversations, and it was really interesting, because when we had the conversations he did some workshops showing the solutions. There were a lot of communication problems as well: they were communicating really badly about some of the functioning of the software. For instance, when he was talking about profiling researchers, I asked, “Are you capable of profiling me? Are you capable of profiling a political science professor who is publishing papers against Bolsonaro?” And he was like, “No, absolutely not. We are looking at researchers who have evidence of databases for a black market of personal data. We want to track some of that research and the folks researching it, to fill our database so we can think better about where to look. We are just targeting based on one specific methodology used by the CIA.”
I was saying to him, you must agree with us that it's not our fault that we were campaigning against you, because the way it was communicated, and the way we believed it worked, was awful. And he was like, “Yeah,” and I think he was proposing that civil society should have more space and more opportunities to engage with those firms, especially around open source intelligence products. Of course, in Brazil we're not against open source intelligence, because it can be used tremendously well for research, for journalists, for activism. But one of the issues we were fighting in this project is the widespread use of open source intelligence without due process and without justification for specific matters.
We were discussing this with Steven Feldstein, who wrote the book “The Rise of Digital Repression.” When you talk about Pegasus and those types of techniques to exploit devices, we know a lot because of the research that has been done. We know how badly it can be used to target leaders and to kill people running campaigns, for example against the taxation of Coca-Cola in South America, and so on. But when we're talking about open source intelligence software like Arpia's, which has certain profiling capabilities, we do not yet have the knowledge, and we do not have the whole map of the potential uses of that kind of technology and how it can harm society.
And this is a big challenge for us, because in the techno-authoritarianism project in Brazil you do not have one big Pegasus-style case; you have cases that are more subtle. There are OSINT technologies that might be used for something really bad, but the government claims it is not using them for something really bad: it says it needs to fight organized crime in Foz do Iguaçu, or organized crime orchestrated through financial scams and financial crime, which of course is a massive problem in Brazil.
So I think Brazil has a specific complexity regarding techno-authoritarianism. It's more about potential risks, and how to tackle those potential risks in the open source intelligence software used by the Ministry of Justice and so on.
One quite universal phenomenon among digital rights organizations around the world is a general concern about the ways in which they may be subjects or targets of their governments, or the ways in which they might have to censor themselves or adapt what they say to please the powers that be, whether those are the tech companies funding them in some cases, or a government they cannot be too critical of.
I think this general slide towards techno-authoritarianism in Brazil is in a lot of ways a global phenomenon. It feels quite familiar to me as an American living in Europe, and when I think about what's happened in the Philippines, or what's happening now in India, it seems quite universal.
What lessons have you learned generally from talking to your counterparts working in other places about how they are navigating this slide towards techno-authoritarianism in their own context?
That's a great question, because when we started the project and began using the concept of techno-authoritarianism, we instantly perceived a lot of interest from Indian colleagues, from Bangladeshi colleagues, from Colombia, in engaging in meaningful conversations about it. One thing we were careful about at the beginning of the project is that we do not use the term “techno-authoritarianism” to sort countries into democratic and authoritarian.
If you read a piece in Foreign Affairs about techno-authoritarianism from last year, they are using the concept to describe China and Russia. And from the beginning we stated that techno-authoritarianism is about describing techno-authoritarian practices within democracies. It is mostly linked to this slow process of erosion of democracy, and to certain capabilities of technologies that might enhance authoritarian practices. So this is a problem of democracies as well.
We were trying not to be confused with those folks doing international relations, who are mostly concerned with labeling China and Russia as techno-authoritarian societies. Because the point is learning, also from the experience of our democracies in the Global South. This happens a lot in India: there are real conflicts in public policy when a developing society is trying to implement a welfare system based on redistribution, while also trying to foster certain types of economic markets and capabilities within its own country.
When Brazil pursues this digital transformation, it is not only concerned with providing services more easily and better for citizens. The country is also trying to foster national tech firms that will occupy those markets. We can see a lot of that in India as well, where they're implementing biometric systems and projects while also trying to foster Indian tech firms and certain types of methodologies and knowledge that can be applied in government projects. But many bad things can happen in terms of secondary use of data, or a more authoritarian approach that might come from Modi and others. The legacy is a profound infrastructure and set of capabilities resting on fragile democratic pacts. When those pacts fail and an authoritarian leader comes up and can take advantage of those infrastructure and technology capabilities, what can happen?
But I wouldn't say that techno-authoritarianism or these problems of abuse exist only in Brazil or India or Mexico, because we have many examples from the Netherlands of completely abusing these systems and technologies for assessments of the population, and doing really bad things in terms of labeling and discriminating against migrants and others. That could also be understood as a problem of techno-authoritarianism, in that it is conduct by the government that takes advantage of personal data and technology, and it limits the fundamental rights of migrants in the territory of the Netherlands. That's why the Netherlands is also having many discussions on the problem of automated decision-making, and is developing human rights impact assessments and due diligence in order to counter that movement.
Can we, as a global civil society, formulate meaningful tools for impact assessment and participation that we can use in our communities? I think this is an interesting and fruitful opportunity for us, because I'm eager to know how activists in Bangladesh are trying to define high risks to fundamental rights in their society, and how they are trying to apply mitigation mechanisms to those risks together with their government. And we can also use that in Brazil, for instance. So this is the kind of exchange that techno-authoritarianism also opens up.
I think you're right that techno-authoritarianism is ripe for a kind of coalition-building. Even though the solutions in Bangladesh might not apply to Brazil or the US, I think there are still learnings and relationships to be built between these causes.
Let me give you one example. Last year, we were investigating problems of digital IDs in Brazil. We learned from Kenyan partners that they were doing great work on activism in challenging the digital ID system in Kenya because of the lack of impact assessment done there.
We set up a closed meeting with the Ministry of Economy in Brazil and invited activists from Kenya to talk, and they listened. So we need to bridge those policy communities. This is the kind of work that we want to do in the coming years, to make sure that local governments can engage in meaningful learning through these kinds of conversations about fundamental rights and datafication, and avoid the same mistakes.
Exactly - we should not be making the same mistakes everywhere. Especially when those mistakes often mean learning the hard way, like compromising some people's fundamental rights.
That's why we also need to think about a global prize for the staff and people within governments who are doing the right thing, because they also need incentives to do the right thing.
That's a great point, because I can imagine the people holding these government positions probably also feel a sense of doom and gloom. You know, frustration from private companies, frustration from civil society. I think there has to be a bit of carrot and stick in these cases.
Yeah, and we're always pressuring them to do very difficult things, right? Because civil society is pushing on their door.
But I feel a lot of solidarity and compassion for them. It's hard to be in their shoes.
Yeah, the work they're doing is definitely not easy, and it feels as if they're stuck in a web of stakeholders. And you're right, incentivizing them with some kind of recognition or distinction for engaging with and learning from what's happening elsewhere, that's a win for us and it's a win for them.
The Influence Industry Project has been led since 2016 by Tactical Tech's Data and Politics team, addressing the pervasive data-driven technologies used by political groups within elections and political campaigns.
This interview was edited by Cassiane Cladis.