Professor Ruth Wodak is Emerita Distinguished Professor of Discourse Studies at Lancaster University, affiliated with the University of Vienna, and a member of the ECPS Advisory Board.

Professor Wodak: Autocracy Has Become a Global Economic Corporation Backed by Oligarchs and Social Media Power

In this powerful interview with ECPS, Professor Ruth Wodak warns that “autocracy has become a global economic corporation”—a transnational network where oligarchs, libertarians, and tech barons control discourse, distort truth, and undermine democracy. From Trump’s incitement of violence to Orbán’s fear-based migrant scapegoating, Professor Wodak outlines how authoritarian populists weaponize crises and social media to legitimize regressive policies. Yet she also defends the vital role of public intellectuals, urging them not to give in to “preemptive fear.” With deep insight into the politics of fear, techno-fascism, and discursive normalization, Professor Wodak’s reflections serve as both an alarm and a call to resistance in our increasingly volatile democratic landscape. A must-read for anyone grappling with today’s authoritarian turn.

Interview by Selcuk Gultasli

In a time when liberal democracies are increasingly challenged by authoritarian populism, far-right extremism, disinformation, and escalating political violence, the voice of critical scholars has never been more urgent. In this in-depth interview with the European Center for Populism Studies (ECPS), Professor Ruth Wodak—Emerita Distinguished Professor of Discourse Studies at Lancaster University, affiliated with the University of Vienna, and a member of the ECPS Advisory Board—provides a sobering assessment of our contemporary moment. With decades of pioneering work on discourse, racism, and the far right, Professor Wodak, who is also one of the signatories of the “International Declaration Against Fascism,” published on June 13, 2025, alongside Nobel laureates, public intellectuals, and leading scholars of democracy and authoritarianism, brings both scholarly rigor and moral clarity to an increasingly fraught public debate.

At the heart of this conversation lies a stark warning: “We are facing a kind of global kleptocracy and oligarchy that owns social media and is, in some cases, part of governments,” Professor Wodak says. Drawing on Anne Applebaum’s recent book Autocracy, Inc., she argues that autocracy has evolved into a global economic corporation—one where power, capital, and algorithmic control are intertwined and weaponized against democratic norms. This nexus, she explains, enables “very powerful individuals, libertarians, and oligarchs—supported by governments—to wield enormous influence.”

Professor Wodak also elaborates on what she calls the “politics of fear,” a strategy used by populist and authoritarian actors to exploit or fabricate crises in order to manufacture scapegoats and position themselves as national saviors. “It’s a very simple narrative,” she explains. “There is danger, someone is to blame, I am the savior, and I will eliminate the threat.” From Donald Trump’s MAGA slogan to Orbán’s anti-migrant rhetoric, such narratives are not only emotionally charged but “discursively effective in obscuring regressive agendas while appearing to restore order.”

The interview further explores how fascist traits—particularly state-sponsored or paramilitary violence—are resurfacing even in democratic societies. Professor Wodak points to cases in the United States, Germany, Turkey, and Greece as troubling examples. “We do see that the government in the US is taking very violent actions,” she warns, referring to ICE raids and militia-linked violence under Trump. Similarly, she notes how “Golden Dawn in Greece only became scandalized after the murder of a pop singer—despite its long history of violent attacks on migrants.”

Yet amid these challenges, Professor Wodak emphasizes the indispensable role of public intellectuals. Despite increasing hostility, she insists, “one shouldn’t be afraid to speak out.” Indeed, she urges scholars and citizens alike not to succumb to what she calls “preemptive fear,” which “leads you to accommodate to some kind of danger which you envision—but which is actually not there.”

In this urgent and wide-ranging dialogue, Professor Wodak offers a powerful analysis of how authoritarianism is being normalized—and how it can still be resisted.

Here is the transcript of our interview with Professor Ruth Wodak, edited lightly for readability.

Fascist Rhetoric and Violence Are Reemerging Across Democracies

Border Patrol agents monitor an anti-ICE protest in downtown Los Angeles, June 8, 2025. Demonstrators rallied against expanded ICE operations and in support of immigrant rights. Photo: Dreamstime.

Professor Ruth Wodak, thank you so very much for joining our interview series. Let me start right away with the first question: How do you interpret the contemporary resurgence of fascist traits in democratic societies, especially in light of the anti-fascism declaration you co-signed on June 13, 2025? In your view, what are the key discursive markers we should be most vigilant of, both conceptually and in concrete political communication? Could you provide some recent illustrative examples—from campaign speeches, media discourse, or policy debates—that exemplify these traits in action?

Professor Ruth Wodak: I think that’s a huge question. There are, unfortunately, many examples of what you explained just now and asked about. First, I would like to say that we should be careful when using the term fascism, because it always leads us to associate it with the 1930s, National Socialism, Mussolini, etc. So, we should be aware of what the main characteristics of fascism are, and one important point to mention is the existence of violence and paramilitary movements that support a fascist movement or government.

What we can observe right now is an increasing level of violence. For example, in the US, quite recently, there was violence in Los Angeles, where Immigration and Customs Enforcement (ICE), along with the National Guard, were called in by President Trump to apprehend so-called illegal migrants and deport them to camps in El Salvador and other South American countries. These camps, in some ways, resemble concentration camps. Most surprisingly and disturbingly, Donald Trump had photos taken of himself at one such camp, appearing to be proud of these actions.

Now, when we think back just a few years—if you remember Charlottesville and the riots that took place there due to the attempt to remove the statue of a so-called hero of the Confederacy—the Proud Boys, a truly fascist movement of young, mainly male supporters, killed a young woman. Trump then said, “Well, actually, both the protesters and the Proud Boys were to be seen in an equal way.” So, we do see that currently the government in the US is taking very violent actions. These are still visible as snapshots—yes, they are localized in places like Los Angeles or elsewhere; they’re not yet covering the entire country. But of course, this could be a sign of what is to come. I think it’s very dangerous. And if you look back, you asked me about speeches and rallies—there was a speech by Trump where a protester entered the rally, and Trump just said, “Beat him up.” So, you can really also observe a rhetoric that orders or supports people to implement violence.

But this is not only the case in the US—it’s just a case we are all very aware of. If you look at Turkey, for example, where the Mayor of Istanbul was taken into prison, we again see violence enacted by the government. It’s not as if he was taken to court, there was a trial, and then democratic procedures were followed. No—this mayor was simply taken to prison, and as far as I know, nobody knows how long he will remain there. I depend on the media—you know much more about this.

We also saw violence—though again, very localized—in Germany, where there is no significant fascist mass movement that we can observe, except for very small groups of neo-Nazis and identitarians. But we do see assassinations and attacks on prominent politicians. There was an attack on a Social Democratic politician before the election. There are attacks on Green politicians. A mayor was actually shot. So, this is all very disconcerting.

Moreover, if we look back a little further—if you remember Golden Dawn, which was clearly a paramilitary fascist movement that was very strong in Greece around 2010 and a bit later—they enacted a great deal of violence against migrants. In fact, this only became widely scandalized when they killed a well-known Greek pop singer. Then, suddenly, it was talked about. But Golden Dawn had long used symbols of fascism, and so forth.

So, there is a trend that is leading up to the violence we see enacted today. And of course, I don’t even want to talk about Russia, because there, violence against protesters or opposition politicians has been ongoing for decades.

We’re Witnessing the Rise of Techno-Fascist Capitalism

Elon Musk speaks at the Conservative Political Action Conference (CPAC) at the Gaylord National Resort & Convention Center on February 20, 2025, in Oxon Hill, Maryland. Photo: Andrew Harnik.

The declaration highlights “techno-fascist enthusiasts” among media barons. How do you conceptualize the role of digital platforms and algorithmic governance in sustaining what you call the “politics of fear”?

Professor Ruth Wodak: First, let me explain what the politics of fear is about. It refers to how political groups or parties instrumentalize existing crises—or exaggerate them—and sometimes even create crises artificially through fake news and disinformation. They use these crises to construct scenarios of threat and fear—very dystopian visions of decay, collapse, and terrible events looming ahead.

Then, the leader of the party or group—because there are also women who do this—presents him or herself as the savior. So, there is a kind of link: on the one hand, creating a fearful scenario, and on the other, projecting a vision or utopia where the savior will rescue the country and eliminate those deemed responsible for the crisis.

This strategy also involves the creation of scapegoats, because someone must be blamed—someone must be guilty of the major problems that exist. The identity of these scapegoats depends on the context. Sometimes they are Turks and migrants, sometimes Jews and Roma. It all depends on who is available to be constructed as the scapegoat.

In this way, the narrative becomes very simple: there is danger, someone is to blame, I am the savior, and I will eliminate the threat—then everything will be fine.

It’s a very simple narrative and a very simple argument. But many people who are currently very insecure—because of the polycrisis we are all experiencing—seem to be easily manipulated into believing such a narrative.

And now we come to social media, which plays a very big role in this manipulation and in this propaganda. If we think of the big social media networks—for example, X—and Elon Musk, who is obviously the richest man on earth, we have someone who owns such a vast platform and who can actually manipulate the content.

In this way, dangerous content and disinformation are widely distributed, while evidence and factual counter-narratives are either deleted or not distributed—or at least distributed far less. Beyond that, there are also trolls and bots who amplify this content even further. So the whole—I would say—secondary discourse world of social media is saturated with disinformation.

There isn’t enough counter-information. We do now have Bluesky, for example, which tries to counter X—quite successfully in some ways. Many people have switched from X to Bluesky as a form of protest. But still, X remains more powerful because it is backed by an enormous amount of money.

In that way, I would say, power and money are going hand in hand right now in a really unpredictable way. We haven’t experienced something like this for a long time. I would point to the Russian oligarchs after 1989—but that was more localized. Now we are facing a kind of global kleptocracy and oligarchy that owns social media and is, in some cases, part of governments.

I would also mention a recent and very interesting book by Anne Applebaum, Autocracy, Inc.—yes, “the corporation.” So autocracy has become a big economic corporation, because power is now linked to money and to specific groups of libertarians, very powerful individuals, and oligarchs who are supported by governments and who wield enormous power.

Fear of Losing Control and Status Fuels the Far Right’s Rise

White nationalists and counter-protesters clash during a rally that turned violent, resulting in one death and multiple injuries, in Charlottesville, VA on August 12, 2017. Photo: Kim Kelley-Wagner.

The declaration refers to fabricated enemies and the weaponization of security. How have right-wing populist actors used crisis narratives (e.g., migration, pandemics) to justify authoritarian measures?

Professor Ruth Wodak: We see that happening all the time. I mean, migration has become such an important agenda in this construction of fear. And looking at recent Eurobarometer surveys, it’s actually quite interesting that other topics also generate a lot of fear but are not instrumentalized in the same way. The statistics show us that, for example, the fear of the cost-of-living crisis, the energy crisis, the climate crisis, and the fear of wars—yes, we have, for the first time since the 1990s and the Yugoslavian wars, a war very close to or even within Europe, namely Ukraine—all of that could also be talked about extensively and used to create fear. But it seems to be migration that is, for the far right, the so-called best agenda to be instrumentalized. And that is the case across the board. I mean, I cover especially the Austrian and German debates, but I also follow the French and British debates.

I just read The Guardian yesterday, where there were reports of anti-foreigner riots in Manchester and another city in the north of England, and I was really disturbed—because, as you know, I’ve lived in the UK for 12 years in the North, and I had never encountered anything like that. I mean, there’s xenophobia everywhere, yes, but to have these riots, which were triggered by far-right groups—this is really very scary in the UK.

And again, if you look at Austria, the extreme right, far-right party—the Austrian Freedom Party—has been leading the polls since 2022 and won the last national election. They are not in government, but right now they are still leading the polls. Their main agenda is constructing the fear of migrants, and it’s really a paradox because, on the one hand, it’s obvious that in all European countries—or all countries of the European Union—specialist workers and people with expertise in various professions are needed. There are special ways of allowing them to enter the various countries—special permits and so on—and, on the other hand, this fear of migration still seems to resonate strongly.

We have to ask why the construction of this scapegoat is so successful. And it’s especially—and again, not only—targeted at Muslim migrants. Because this fear of migrants has already been a huge manipulative device, so to speak, since as early as 1989. If you recall the fall of the Iron Curtain in 1989—the end of the so-called Eastern Bloc—you’ll remember that many people from Czechoslovakia, Hungary, Romania, and Poland entered the West—yes, the so-called West. The Freedom Party in Austria, and especially Jörg Haider—who was quite a charismatic and very clever rhetorician—constructed his entire agenda against foreigners and became very successful.

And that was not during a time of crisis. There was no economic crisis at that time. We did a big study back then, and we found that the discursive patterns used at that time are very similar to those used now—except that at the time, the migrants were white Christian people, and now we have Muslims coming from Syria, Iraq, or elsewhere as refugees. But basically, the discourse of exclusion is very similar.

And if you ask yourself what triggers this enormous fear, I think there are basically two—possibly many more—but two really important points. One is the fear of losing control, which has become salient in the context of the polycrisis, but was also very visible during the Brexit campaign. So, the fear of losing control—because so many people are coming—and then you don’t know what’s happening anymore. The slogan at that time in the UK, “Take back control,” was very successful.

The second big issue is the fear of losing your social welfare—all the benefits, your jobs—they will take things away. So you haven’t lost them yet, but you might lose out. It’s not just the people who have already lost out, as is often discussed. It’s the fear among the middle class and the lower middle class of losing their status, their benefits, their way of living. That also explains why, for example, in very rich countries like Austria, Denmark, Switzerland, and Sweden, the far right is so prominent.

Authoritarianism Thrives on Silence; Intellectuals Must Refuse It

The open letter evokes the historical memory of anti-fascist intellectuals in 1925. How do you see the role of public intellectuals and discourse scholars today in resisting what you have termed “shameless normalization”?

Professor Ruth Wodak: I think it’s a hard job, and it really is difficult to summarize—or even observe—what impact public intellectuals might have, because they are, of course, part of the elite—the so-called elite—that the far right is fundamentally campaigning against. So public intellectuals form a group that is not wanted by the far right.

That said, it’s really important that people speak out. And the more people do so—and are listened to, and their voices are heard in social media, newspapers, and so forth—the more others become aware that there is a different position, a counter-discourse. I believe that to be very important, even if it isn’t widely distributed by platforms like X or other major channels.

So the more people speak out, the better it is—and one shouldn’t be afraid of doing so. Of course, this really depends on where you live. If you are in a dictatorship or a classically authoritarian state, public intellectuals may have a very hard time—they might be imprisoned, as has happened, or even killed, as we see in countries like Russia or China. And if we look at Turkey, they are imprisoned—just like many journalists—so they are forced into exile and speak out from abroad.

But if you live in a country that still allows freedom of opinion and supports human rights and the Human Rights Charter, then it is even more important to speak out—because you have the right to do so. And you shouldn’t be afraid.

Personally, I’ve never been afraid to speak out. Of course, I’ve encountered a lot of opposition. I’m not liked by everyone—but I tell myself, I don’t have to be loved by everybody. I also see many colleagues in the US or in Germany who speak out—not only at conferences and in academic settings, but who also leave the ivory tower and engage with the public, speak in schools or wherever they’re invited.

And I believe that it’s very important not to be frightened preemptively, especially in countries where freedom of opinion exists, where you don’t have to fear imprisonment or worse. There’s no reason to silence yourself out of imagined fear. Preemptive fear is dangerous, because it makes you accommodate to a threat that you envision—but which may not actually be there.

So in that way, I encourage scholars and intellectuals who are able to speak out—to do so.

Slogans Like MAGA Obscure Regressive Agendas Through Nostalgia

A Trump supporter holds up a “Make America Great Again” sign at presidential candidate Donald Trump’s rally in the convention center in Sioux City, Iowa, on November 6, 2016. Photo: Mark Reinstein.

How would you analyze the role of Trump’s “Make America Great Again” narrative and the narratives of Erdoğan, Putin, Netanyahu, Modi, Orbán, etc.—not just rhetorically but also in terms of their affective and mobilizing power? What makes such slogans so resonant across diverse audiences, and how do they function discursively to both obscure and legitimate regressive political agendas?

Professor Ruth Wodak: These slogans—and I would say these are really slogans—MAGA, for example, resonate because, as I already said, many people are afraid and feel insecure—legitimately so—because there are existential crises right now. And these slogans construct a past that seems to have been much better. I say “seems” because it never was much better. There were always problems, always crises, etc.

We once conducted a study that looked at all the crises the European Union had experienced up until 2009, and it clearly showed a continuity of crises. There were always crises, so you could say the EU was essentially moving from one crisis to the next.

It’s basically what Bauman calls “retrotopia”—a fantasy, an imaginary past that is perceived as better. Now, we can think about what Trump actually—or what he might—mean when pointing to such a past. And it’s quite obvious that the past being invoked might be the period before the civil rights movement—a time when traditional gender roles were still enforced, when there was no political correctness, and so forth.

So, a past that some people would really like to return to, or at least evoke again. But of course, this is impossible. We cannot turn the clock back, and in that sense, it remains a complete fantasy or imaginary. Yet it resonates—because there is so much nostalgia. There is nostalgia, there is a lot of anger, and there is also, as Eva Illouz puts it, a lot of love and patriotism. This imaginary—where “we all were together” in some kind of imagined white community in the US, where all these values were still upheld—resonates strongly.

The same applies, of course, in other contexts, where one has to look at the specific historical elements that are being invoked.

Meloni’s Soft Fascism Balances Between Brussels and Trump

Italian Prime Minister Giorgia Meloni and EU Commission President Ursula von der Leyen meet in Brussels, Belgium on November 03, 2022. Photo: Alexandros Michailidis.

And lastly, Professor Wodak, you’ve researched the discourse of the European far right extensively. How would you compare the current discourse strategies of far-right and populist actors in Austria or Hungary with those in the United States, Turkey, or India?

Professor Ruth Wodak: I have not researched Turkey and India extensively, because I don’t speak the languages. And it is, of course, for us as linguists and discourse analysts, always important to look at the original texts—visual, written, oral—because we need to understand all the nuances, the intonation, the latent meanings, and so forth. So translation is not enough.

But if I now speak about what is reported, what I can read about, and if I look—as I already cited examples from the US—there is a difference between these authoritarian or neo-authoritarian countries and the still liberal democracies.

So, for example, if we look at Italy, where Giorgia Meloni is leading the government and comes from a fascist movement which she claims to have left completely, we see an example of soft fascism. She balances between the EU—she is still also a friend of Trump—but she wants the EU funds to continue, so she negotiates in a nice way with von der Leyen and with the European People’s Party. She is for Ukraine and against Russia, and so forth. So there are many interesting positions. But in the actual domestic policies in Italy, her party attacks journalists. There are attacks on press freedom, freedom of opinion, freedom of assembly, and so forth.

However, civil society in Italy is very strong, so this is also being resisted. And this marks a difference—at least in some ways—from Hungary, where Orbán has really implemented an authoritarian state. But there too, civil society and the opposition are now growing. So it’s not clear what will happen in the elections next year, because there is a conservative opposition party led by Magyar, which has been leading the polls for several months.

And if we look at Austria and Germany again, this kind of explicit, violent speech would not be possible—or at least, when it occurs, it is scandalized. Certain politicians might say such things, but they are often suspended from their parties, especially if they make statements that invoke the fascist past. There are strict laws against that, and those laws are enforced. You cannot use these symbols or rhetoric freely.

Whereas—and this marks a major difference from the US—Trump openly violates such laws, human rights norms, and taboos, and yet there is comparatively little opposition—not the kind we see here. So I think the difference lies in EU legislation and national contexts, where violence and the breaking of taboos are still scandalized, prohibited, and prosecuted—unlike in countries where the government can break these taboos and act unlawfully, and it seems everything goes.

Chloé Ridel, Member of the European Parliament from the Socialist Group and Rapporteur for transnational repression, during her interview with ECPS’s Selcuk Gultasli. Photo: Umit Vurel.

EP Rapporteur Ridel: EU Should Expand Sanctions Regime to Effectively Target Transnational Repression

In an exclusive interview with ECPS, MEP Chloé Ridel, rapporteur for the European Parliament’s forthcoming report on transnational repression, underscores the urgent need for the EU to confront transnational repression—state-organized efforts by authoritarian regimes such as Russia, China, Turkey, and Iran to silence critics abroad. Ridel calls for expanding the EU’s Global Human Rights Sanctions Regime to explicitly include transnational repression and highlights the procedural challenge posed by unanimity voting: “The only people we manage to sanction are mostly Russian… we will have difficulties applying the values we believe in.” She stresses that this is a human rights, security, and democratic issue requiring coordination, oversight of enablers, and stronger protection for vulnerable groups.

Interview by Selcuk Gultasli

In a context of intensifying authoritarian encroachment beyond national borders, transnational repression has emerged as a growing threat to Europe’s democratic integrity, sovereignty, and human rights commitments. Authoritarian regimes—including Russia, China, Turkey, and Iran—have refined techniques of intimidation and control targeting exiles, dissidents, and diaspora communities residing in democratic states, employing legal tools such as Interpol Red Notices, coercion-by-proxy against relatives, and increasingly sophisticated forms of digital harassment. In her capacity as rapporteur for the European Parliament’s forthcoming report on transnational repression, MEP Chloé Ridel of the Socialists and Democrats Group has foregrounded the urgency of a robust, coordinated European response.

In this interview with the European Center for Populism Studies (ECPS), MEP Ridel makes a compelling case for expanding the EU’s Global Human Rights Sanctions Regime to address transnational repression explicitly. She explains that “there is already an EU sanctions regime that exists, and we want this regime to also apply to states that commit transnational repression.” MEP Ridel’s recommendation is clear: the EU must recognize transnational repression as a distinct pattern of authoritarian interference, codify it in sanctions policy, and ensure it can be enforced consistently across Member States.

MEP Ridel is also critical of the procedural obstacles that blunt the effectiveness of EU sanctions, pointing to the unanimity requirement that has resulted in skewed enforcement patterns: “The only people we manage to sanction are mostly Russian; 70% of those sanctioned under the EU sanctions regime are from Russia.” Without reforms enabling qualified majority voting for sanctions decisions, she warns, “we will have difficulties applying the values we believe in on human rights.”

This approach, MEP Ridel emphasizes, is inseparable from broader efforts to coordinate intelligence, protect vulnerable groups such as women human rights defenders, and hold enablers—particularly social media platforms—accountable. “States rely on enablers such as social media platforms and spyware businesses, and these enablers must also be held accountable,” she argues. In advocating for expert focal points on transnational repression in both EU delegations and national administrations, Ridel calls for the EU to develop institutional expertise to “help victims of transnational repression” who often “don’t even know they are victims” until attacked.

This interview provides an incisive analysis of the tools and frameworks required to confront transnational repression effectively. EP rapporteur Ridel’s proposals offer a principled roadmap for embedding human rights and democratic sovereignty at the heart of EU foreign and security policy.

Chloé Ridel, Member of the European Parliament from the Socialist Group and EP Rapporteur for transnational repression.

Here is the transcript of our interview with MEP Chloé Ridel, edited lightly for readability.

Transnational Repression Must Be Defined Properly

Chloé Ridel, thank you very much for joining our interview series. First of all, can you please tell us about the fate of the report? You submitted it to the subcommittee on human rights. What will happen next?

MEP Chloé Ridel: I submitted my draft report in June to the Committee on Human Rights, which is a subcommittee of the Foreign Affairs Committee here in the European Parliament. Time was then allowed for other political groups to table amendments, which will be discussed throughout September. We will have a vote in the Foreign Affairs Committee in October, followed by the final vote in the plenary session of the European Parliament at the end of November.

Your draft report acknowledges the lack of a universally accepted definition of transnational repression. How should the EU conceptualize this phenomenon in legal and policy terms, especially considering the practices of regimes like Turkey, Iran, China, and Russia, to ensure both legal precision and operational flexibility?

MEP Chloé Ridel: Yes, you’re absolutely right. There is no definition of transnational repression in EU law or international law. Recently, the UN adopted a definition for transnational repression because it’s a growing phenomenon, as I tried to describe in my report. So, I suggest an EU definition for transnational repression because if we don’t know what it is, we cannot fight it properly. We define transnational repression as “state-organized actions that cross borders to coerce, control, or silence individuals through physical, legal, or digital means.”

This is a growing and quite concentrated phenomenon: 80% of all transnational repression actions are committed by just 10 states. Among these states are China, Turkey, Iran, Tajikistan, Uzbekistan, Belarus, and Russia. It is committed by authoritarian regimes that seek to silence members of their diaspora, political opponents, or journalists. For the EU, it’s a significant challenge because it constitutes foreign interference, is a security matter, and targets human rights defenders whom we have an interest in protecting — and we are not doing enough to protect them.

Transnational Repression Is a Security Issue, a Human Rights Issue, and a Democratic Issue

Given that transnational repression by authoritarian and repressive regimes blurs the lines between external authoritarian influence and internal security threats, should the EU frame this challenge primarily as a human rights issue, a security concern, or a hybrid phenomenon demanding an integrated policy response?

MEP Chloé Ridel: I think this is all of it at once. It is a human rights issue, of course, because it targets human rights defenders, and I will return to that. It is also a security and sovereignty issue because we cannot accept that foreign authoritarian regimes come to our streets to threaten people who are legal residents and under our protection. It becomes a threat to us as well. Unfortunately, transnational repression continues to occur in Europe. It includes physical threats, poisoning, and digital surveillance. Take the example of the Russian diaspora: after the invasion of Ukraine in 2022, more than 90 Russian journalist agencies came to Europe to continue their work freely. It is in our interest to protect these journalists because they are the last free Russian journalists in the world, and they still speak to the Russian people. We know that Putin expands his power by controlling people’s minds, and if we want to fight this kind of war with Putin, we need free journalism that can still speak to the Russian people back home. So yes, it is a security issue, a human rights issue, and also a democratic issue.

EU’s Digital Services Act Must Hold Platforms Accountable

Authoritarian regimes including China, Russia, Turkey and Iran have weaponized digital platforms to target exiles. How can the EU ensure that the Digital Services Act is effectively enforced to mitigate these risks, particularly protecting vulnerable groups like women human rights defenders from online harassment orchestrated by authoritarian actors?

MEP Chloé Ridel: As you pointed out very well, digital transnational repression is growing, and authoritarian regimes use social media to harass opponents, often targeting women, sometimes through the circulation of sexualized content. This is a specific and growing form of violence, and social media platforms are enabling it. They are not doing enough to prevent transnational repression online, and they should, because these platforms now constitute major public spaces where public debate happens.

We need rules because we cannot have such impactful public spaces controlled by private companies without oversight. In Europe, we voted for a strong legislative framework, the Digital Services Act (DSA), but we are still waiting for it to be effectively enforced. For example, an investigation was opened against X (formerly Twitter) two years ago, but there are still no conclusions or sanctions, despite clear violations of the DSA: inadequate content moderation, widespread disinformation, and the manipulation of algorithms to boost certain types of content. We need effective enforcement of the DSA to hold these big companies accountable. While transnational repression is state-organized, states rely on enablers, such as social media platforms and spyware businesses, and these enablers must also be held accountable for that repression.

Iran, Egypt, Turkey and Tajikistan are notorious for coercion-by-proxy, targeting relatives of exiles to silence dissent abroad. What practical measures can the EU adopt to recognize, document, and respond to this diffuse and intimate form of repression?

MEP Chloé Ridel: You are right, transnational repression can occur when authoritarian regimes target family members who remain in the home country while someone goes abroad to seek exile or refuge. And it’s very difficult. Currently, the EU does not protect family members who may be threatened by authoritarian regimes simply because they are related to a prominent human rights defender or similar figure. So, I think we should enable the EU program called ProtectDefenders.eu to also protect family members of a defender, not just the defender themselves, because we know that authoritarian regimes use threats against family to repress human rights defenders.

A Coordinated EU Response Needed to Stop Abuse of Red Notices

Given that Turkey, Russia, and China systematically abuse Interpol Red Notices and extradition treaties to pursue political exiles, what reforms should the EU promote within its judicial cooperation frameworks and at Interpol to prevent instrumentalization while safeguarding legitimate law enforcement cooperation?

MEP Chloé Ridel: The abuse of Interpol Red Notices is a very important matter for me, and it is a key part of my report because, as you mentioned, even though Interpol is aware of abuses, the problem persists. Authoritarian regimes continue to request Red Notices against human rights defenders, even though these notices are supposedly intended to target terrorists or very serious criminals. For example, there are currently more than 200 Red Notices from Tajikistan targeting human rights defenders living in the EU. So, we need to raise awareness among member states not to execute these notices or arrest the individuals they target, and to develop a coordinated EU response on this issue.

In my report, I suggest that transnational repression be included in Europol’s mandate, so that Europol can assess the relevance of Red Notices when they target human rights defenders and provide assessments to member states, exerting pressure on national governments not to execute abusive Red Notices. 

For example, there was the case of an Iranian activist in Italy in 2017 who was arrested based on a Red Notice from Iran and later freed. There was also the case of Paul Watson, an environmental activist defending whales, who was targeted by a Red Notice from Japan. He could live freely in France and Germany but was ultimately arrested in Denmark on the basis of this Red Notice. What kind of coordination is this? He was eventually freed, but only after months in jail, and the Red Notice against him was clearly abusive.

We need to stop this abuse, and one way to do so is to involve the EU—not by giving the EU the power to execute Red Notices, which remains a national competence—but by enabling it to assess and declare when a Red Notice is abusive and should not be executed.

“We Must Coordinate at EU Level to Tackle Transnational Repression”

Chloé Ridel, Member of the European Parliament from the Socialist Group and EP Rapporteur for transnational repression.

How can the EU promote harmonization of national legal frameworks to ensure that no Member State becomes a permissive jurisdiction or “safe haven” for authoritarian actors from regimes such as Belarus or Egypt, while respecting national sovereignty and legal diversity?

MEP Chloé Ridel: We need more coordination at the EU level to tackle transnational repression. Transnational repression should be more widely discussed among ministers of internal affairs, security, and foreign affairs, and among heads of state as well. It was discussed recently at the G7 forum, and it is a matter for all democracies, because we can see a kind of authoritarian internationalism taking shape, notably through transnational repression, where authoritarian regimes help each other control, coerce, and silence their political opposition. We have an interest in protecting this political opposition because they are sometimes the last free voices of civil society in some countries.

We need to do more to coordinate at the European level and to raise awareness at the European level. Sometimes I have noticed during my work on this report that security services have difficulty assessing and recognizing transnational repression. So, I suggested in my report having an expert on transnational repression in each security administration in each member state—a contact point or something like that. For instance, this exists in Canada, Australia, and the US, where they have teams specifically responsible for transnational repression involving many different ministries. It is important that we build expertise within each nation on transnational repression and that all of this be coordinated at the European level.

Oppressive states like Turkey and China often use religious, cultural, and educational institutions abroad as instruments of covert surveillance and influence. How should EU policy distinguish and regulate these activities to protect democratic norms without stigmatizing legitimate diaspora engagement?

MEP Chloé Ridel: Sure. I think this is indeed a problem, and we need to control funds that go to religious institutions, for instance. We cannot allow authoritarian or hostile regimes to fund NGOs or religious institutions on European soil without oversight of how those funds are used. This is something we must address because it is a growing and concerning phenomenon.

Prevent Discrimination and Promote Integration to Counter Radicalization

Given that authoritarian regimes actively manipulate divisions within diasporas—for example, Turkey’s polarization of Turkish communities in Europe—how should EU integration and anti-radicalization strategies respond to these fractures to avoid inadvertently amplifying authoritarian influence?

MEP Chloé Ridel: This is another topic—it is not transnational repression per se, but rather the manipulation of diasporas to harm a country or create conflict. We can see it in my country, France, where there was a Turkish association called the Grey Wolves, a very dangerous group that was ultimately banned and dissolved. It is a telling example: there were violent demonstrations by members of this group, and they exerted a kind of control over the Turkish diaspora in France, dictating how its members should behave, which also hindered the integration of Turkish immigrants into French society.

We need to fight back and have state solutions against such extremist associations. We must also ensure that public services and integration services—through work, language learning, and civic values—are available so that we can prevent radicalization. Radicalization happens when there is discrimination; extremists target marginalized people and say, “France is discriminating against you, it doesn’t want you here, so you should abide by this ideology instead.” To prevent that, we must prevent discrimination and ensure that these individuals feel part of the national community in Europe. It’s a matter of integration to fight radicalization, and also a security matter: to be able to identify and prohibit such associations and groups when they form, if they are dangerous.

Coordinated EU Action Key to Protecting Rights Defenders

Your report recommends focal points on human rights defenders within EU delegations. What skills, mandates, and resources will these officers need to respond effectively to transnational repression, particularly from aggressive regimes like Russia and China, in high-risk environments?

MEP Chloé Ridel: It’s important that we have contact points in every EU delegation throughout the world that can gather information on how authoritarian regimes exert transnational repression. Coordination is key to fighting this growing phenomenon. We need contact points both in EU delegations and in each national administration, and through the exchange of information we can tackle it. We are stronger together in Europe; if we gather information and experts across different countries and Europe plays a coordinating role, we can collect valuable intelligence and help victims of transnational repression. Sometimes they don’t even know they are victims—there are people being followed or surveilled until the day they are attacked; people who have spyware on their phones and don’t know it because they are not trained in cybersecurity.

In my report, I want to raise awareness about spyware. First, the EU should ban the export of spyware technologies produced in Europe to authoritarian regimes, because we know they will be used against us and against human rights defenders on our soil. Second, when we know that a human rights defender is at risk of transnational repression, we should say: come to our office, and we will explain a few security rules to you as a matter of prevention, so that you can regularly check your phones and computers for spyware—because nowadays it is very easy to hide spyware on a phone or computer.

And lastly, do you see a role for the EU in spearheading an international legal instrument specifically addressing transnational repression, modeled on Magnitsky-style sanctions, to confront regimes such as Belarus, Saudi Arabia, China and Turkey? How might this enhance global accountability and norm-setting?

MEP Chloé Ridel: Actually, an EU sanctions regime already exists, and we want it to also apply to states that commit transnational repression. So, I call in my report for enlarging the EU sanctions regime so that it effectively targets transnational repression. There is also a longstanding demand from our group, the Socialists and Democrats, that EU sanctions be decided by qualified majority rather than unanimity, because with unanimity, you often get nowhere. The only people we manage to sanction are mostly Russian: 70% of those sanctioned under the EU sanctions regime are from Russia. I am sure many more of the countries I have described to you could be sanctioned in the name of human rights under the EU sanctions regime. So, if we do not move toward qualified majority, we will have difficulty applying the human rights values we believe in.

Alina Utrata, Murat Aktaş, Luana Mathias Souto and Matilde Bufano explore how artificial intelligence, digital infrastructures, and Big Tech influence democratic participation, redefine 'the people,' and challenge gender rights and state foundations in the digital age.

ECPS Conference 2025 / Panel 2 — “The People” in the Age of AI and Algorithms

Please cite as:
ECPS Staff. (2025). “ECPS Conference 2025 / Panel 2 — “The People” in the Age of AI and Algorithms.” European Center for Populism Studies (ECPS). July 8, 2025. https://doi.org/10.55271/rp00104


Panel II: “‘The People’ in the Age of AI and Algorithms” explored how digital technologies and algorithmic infrastructures are reshaping democratic life. Co-chaired by Dr. Alina Utrata and Professor Murat Aktaş, the session tackled questions of power, exclusion, and political agency in the digital age. Together, their framing set the stage for two timely papers examining how algorithmic filtering, platform capitalism, and gendered data practices increasingly mediate who is counted—and who is excluded—from “the people.” With insight and urgency, the session called for renewed civic, academic, and regulatory engagement with the democratic challenges posed by artificial intelligence and transnational tech governance.

Reported by ECPS Staff

As our technological age accelerates, democracy finds itself in an increasingly precarious position—buffeted not only by illiberal politics but also by opaque digital infrastructures that quietly shape how “the people” see themselves and others. Panel II, titled “The People in the Age of AI and Algorithms,” explored how artificial intelligence, social media, and digital governance are reconfiguring the foundations of democratic life. Far from being neutral tools, these technologies actively structure political subjectivity, reshape the boundaries of inclusion and exclusion, and deepen existing inequalities—often with little accountability.

This timely and incisive session of the ECPS Conference at the University of Oxford, held under the title “‘We, the People’ and the Future of Democracy: Interdisciplinary Approaches” on July 1–3, 2025, was co-chaired by Dr. Alina Utrata, Career Development Research Fellow at the Rothermere American Institute and St John’s College, Oxford University, and Professor Murat Aktaş from the Department of Political Science at Muş Alparslan University, Turkey. Together, they provided complementary perspectives that grounded the panel in both international political theory and real-world geopolitical shifts.

Dr. Alina Utrata opened the session by noting how technology corporations—many based in the United States and particularly in Silicon Valley—play a crucial role in shaping today’s political landscape. Referencing recent headlines such as Jeff Bezos’s wedding, she pointed to the growing entanglement between cloud computing, satellite systems, and global power dynamics. She emphasized the importance of discussing AI in this context, particularly given the intense debates currently taking place in academia and beyond. Her remarks framed the session as an opportunity to critically engage with timely questions about artificial intelligence and digital sovereignty, and she welcomed the speakers’ contributions to what she described as “these thorny questions.”

Professor Murat Aktaş, in his opening remarks, thanked the ECPS team and contributors, describing the panel topic as seemingly narrow but in fact deeply relevant. He observed that humanity is undergoing profound changes and challenges, particularly through digitalization, automation, and artificial intelligence. These developments, he suggested, are reshaping not only our daily lives but also the future of society. By underlining the transformative impact of these technologies, Aktaş stressed the importance of discussing them seriously in this panel.

The panel brought together two compelling papers that tackled these questions from interdisciplinary and intersectional perspectives. Dr. Luana Mathias Souto examined how digital infrastructures exacerbate gender exclusion under the guise of neutrality, while Matilde Bufano explored the political dangers of AI-powered filter bubbles and the rise of the “Broliarchy”—a new digital oligarchy with profound implications for democratic governance.

Together, the co-chairs and presenters animated a rich discussion about how emerging technologies are not only transforming democratic participation but also reshaping the very concept of “the people.”

Dr. Luana Mathias Souto: Navigating Digital Disruptions — The Ambiguous Role of Digital Technologies, State Foundations and Gender Rights

In her powerful presentation, Dr. Luana Mathias Souto (Marie Skłodowska-Curie Postdoctoral Fellow, GenTIC, Universitat Oberta de Catalunya) analyzed how digital technologies, often portrayed as neutral and empowering, are increasingly used as instruments of exclusion, surveillance, and patriarchal control—especially targeting women.

In her compelling presentation, Dr. Luana Mathias Souto, a Marie Skłodowska-Curie Postdoctoral Fellow at the GenTIC Research Group, Universitat Oberta de Catalunya, examined how digital technologies—often framed as neutral tools of empowerment—are increasingly functioning as mechanisms for exclusion, surveillance, and patriarchal reinforcement, particularly against women. Her ongoing research critically interrogates how the foundational elements of statehood—sovereignty, territory, and people—are being redefined by the digital age in ways that intersect with illiberal ideologies and gender-based exclusion.

Dr. Souto opened by historicizing the exclusion of women from the category of “the people,” a structural pattern dating back centuries, and argued that this exclusion is not alleviated but rather exacerbated in the digital era. Drawing from feminist critiques and Global South scholarship, she explored how data flows and digital infrastructures decouple sovereignty from territoriality, complicating legal protections for individuals across borders. The concept of “digital sovereignty,” she noted, allows powerful private actors—particularly US-based tech giants—to co-govern people’s lives without accountability or democratic oversight. This dynamic renders traditional state functions increasingly porous and contested, especially in terms of enforcing regulations like the EU’s GDPR against surveillance practices rooted in the US legal and security regime.

Central to Dr. Souto’s argument is the idea that digital fragmentation not only challenges state sovereignty but also disrupts the cohesion of the political subject—the “people.” This fragmentation is manifested in what she called “divisible individuals,” where digital identities are reduced to segmented data profiles, often shaped by discriminatory algorithms. Despite the proclaimed neutrality of data, these systems encode longstanding social biases, particularly around gender. Dr. Souto emphasized how digital infrastructures—designed predominantly by male, white technocrats—perpetuate sexist norms and deepen women’s exclusion from political recognition.

She devoted particular attention to FemTech (female technology), highlighting apps that track menstruation, ovulation, and sexual activity. While marketed as tools of empowerment, Dr. Souto argued these technologies facilitate new forms of surveillance and control over women’s bodies. With the overturning of Roe v. Wade in the US, data from such apps have reportedly been used in criminal investigations against women seeking abortions. Similar practices have emerged in the UK, where antiquated laws are invoked to justify digital searches of women’s phones. Beyond legal threats, FemTech data has also been exploited in employment contexts, where employers potentially use reproductive data to make discriminatory decisions about hiring or promotions.

Dr. Souto linked these practices to broader alliances between tech elites and anti-gender, illiberal movements. By promoting patriarchal values under the guise of neutrality and innovation, tech companies offer a platform for regressive gender ideologies to take root. This fusion of technological governance with far-right agendas—exemplified by calls for “masculine energy” in Silicon Valley—is not incidental but part of a broader effort to rebrand traditional hierarchies within supposedly apolitical spaces.

In conclusion, Dr. Souto called for a fundamental challenge to the presumed neutrality of digital technologies. She argued that reclaiming democratic space requires recognizing how digital infrastructures actively shape who is counted as part of “the people”—and who is excluded. Without such critical engagement, the digital revolution risks reinforcing the very forms of patriarchal and illiberal governance it once promised to transcend.

Matilde Bufano: The Role of AI in Shaping the People — Big Tech and the Broliarchy’s Influence on Modern Democracy

In a thought-provoking presentation, Matilde Bufano (MSc, International Security Studies, Sant’Anna School of Advanced Studies / University of Trento) explored the complex interplay between AI, social media infrastructures, and the weakening of democratic norms in the era of Big Tech.

In a sobering and richly analytical presentation, Matilde Bufano, MSc in International Security Studies at the Sant’Anna School of Advanced Studies and the University of Trento, examined the deeply intertwined relationship between artificial intelligence (AI), social media infrastructures, and the erosion of democratic norms in the age of Big Tech. Her paper, “The Role of AI in Shaping the People: Big Tech and the Broliarchy’s Influence on Modern Democracy,” offered a timely, practice-oriented reflection on how algorithmic technologies—far from being neutral tools—play a crucial role in shaping public consciousness, manipulating democratic engagement, and amplifying societal polarization. Drawing from her dual background in international law and digital politics, Bufano delivered a cross-disciplinary critique that challenged both policy complacency and academic detachment in the face of AI-driven democratic disruption.

At the heart of Bufano’s analysis lies a powerful assertion: democracy is not only threatened from outside by illiberal regimes or authoritarian populism, but also from within, through the algorithmic architecture of digital platforms that increasingly mediate how citizens engage with one another and with politics. The COVID-19 pandemic, according to Bufano, marked an inflection point. As physical interaction gave way to a digital public sphere, citizens became more dependent than ever on technology for information, identity, and even emotional validation. This shift coincided with an intensification of algorithmic curation, wherein AI systems selectively filter, promote, or suppress information based on user behavior and platform profitability.

Bufano focused on two key mechanisms underpinning this dynamic: algorithmic filtering and algorithmic moderation. Algorithmic filtering sorts through vast quantities of online content using coded preferences—ostensibly for user relevance, but in practice to optimize engagement and advertising revenue. This results in the formation of “filter bubbles,” echo chambers where users are continually exposed to like-minded content, reinforcing existing beliefs and psychological biases. Bufano distinguished between collaborative filtering—which groups users based on shared demographics or behavioral traits—and content-based filtering, which recommends material similar to what a user has previously interacted with. Both reinforce a feedback loop of ideological reinforcement, generating a form of identity-based gratification that discourages critical engagement and cross-cutting dialogue.

Crucially, this personalization is not politically neutral. Bufano demonstrated how algorithmic design often prioritizes sensationalist and polarizing content—particularly disinformation—because of its virality and ability to prolong user attention. Ninety percent of disinformation, she argued, is constructed around out-group hatred. In this context, algorithmically curated media environments deepen societal cleavages, producing a form of affective polarization that goes beyond ideological disagreement and encourages personal animosity and even dehumanization of political opponents. This is especially visible in contexts of crisis, such as during the pandemic, when scapegoating of Asian communities proliferated through local Facebook groups, or in the use of conspiracy theories and “phantom mastermind” narratives to channel social discontent toward imagined enemies.

The political consequences of this trend are severe. Filter bubbles inhibit democratic deliberation and increase susceptibility to manipulation by foreign and domestic actors. Bufano cited examples such as Russian disinformation campaigns in Romania, illustrating how AI-driven social media platforms can serve as conduits for election interference, especially when publics are already fragmented and mistrustful of institutions. These risks are magnified by a dramatic rollback in fact-checking infrastructures—most notably in the United States, where 80% of such systems were dismantled after Trump’s presidency, and mirrored in countries like Spain.

Bufano introduced the concept of the Broliarchy—a portmanteau of “bro” and “oligarchy”—to describe the growing political influence of a narrow cadre of male tech billionaires who control the infrastructure of digital discourse. No longer confined to private enterprise, these actors now exert direct influence on public policy and regulation, blurring the boundary between democratic governance and corporate interest. She illustrated this with the example of Elon Musk’s acquisition of Twitter (now X), which led to a 50% increase in hate speech within weeks due to weakened content moderation policies. Such developments, Bufano warned, compromise democratic accountability and entrench anti-democratic values under the guise of free expression and innovation.

While Bufano acknowledged the European Union’s recent steps toward regulation—especially the Digital Services Act (DSA), which seeks to promote transparency and safety in content recommendation systems—she emphasized the limitations of regional legislation in a global digital ecosystem. AI remains a “black box,” inaccessible to users and regulators alike. Without global accountability frameworks, national or regional efforts risk being outpaced by platform evolution and cross-border data flows.

In conclusion, Bufano made a dual appeal. First, for institutional and legal reforms capable of subjecting algorithmic systems to democratic oversight, including mandatory transparency in how recommender systems operate. Second, for renewed civic engagement and media literacy among citizens themselves. Democracy, she reminded the audience, cannot be fully outsourced to algorithms or regulators. It requires a culture of critical reflection and active participation—both online and offline. Reclaiming this space from the Broliarchy, she argued, means not only resisting disinformation and polarization, but reimagining democratic communication in ways that are inclusive, pluralistic, and resistant to both technological and ideological capture.

Bufano’s presentation, blending empirical insight with normative urgency, underscored the need for interdisciplinary collaboration in addressing one of the most urgent challenges of our time: how to ensure that digital technologies serve, rather than subvert, the democratic ideal.

Conclusion

Panel II of the ECPS Conference 2025, “The People in the Age of AI and Algorithms,” offered a powerful and urgent exploration of how digital infrastructures are reshaping the foundations of democratic life. As the presenters compellingly demonstrated, artificial intelligence, algorithmic governance, and platform capitalism are not passive tools but active agents that shape political subjectivities, influence public opinion, and determine who is included in or excluded from the category of “the people.” Across both presentations, a clear throughline emerged: digital technologies, while often framed in terms of neutrality and innovation, are in fact deeply embedded in structures of inequality, bias, and elite power.

Dr. Luana Mathias Souto illuminated how digital technologies intersect with patriarchal norms to undermine gender rights and state sovereignty, showing how the global tech ecosystem facilitates new forms of surveillance and control over women. Matilde Bufano, in turn, unpacked the algorithmic logic behind political polarization and democratic backsliding, naming the emergence of the “Broliarchy” as a key actor in this process. Together, their insights revealed a troubling paradox: while democracy should enable broad participation and dissent, the very platforms that now mediate political life often amplify exclusion and entrench concentrated power.

Rather than offering despair, the panel ended on a call to action. Both speakers urged the need for democratic oversight, global regulation, and enhanced digital literacy to reclaim public space and political agency in the algorithmic age. As AI technologies continue to evolve, so too must our frameworks for accountability, inclusion, and democratic resilience.


Note: To experience the panel’s dynamic and thought-provoking Q&A session, we encourage you to watch the full video recording above.


The Transnational Diffusion of Digital Authoritarianism: From Moscow and Beijing to Ankara


Please cite as:
Yilmaz, Ihsan; Mamouri, Ali; Morieson, Nicholas & Omer, Muhammad. (2025). “The Transnational Diffusion of Digital Authoritarianism: From Moscow and Beijing to Ankara.” European Center for Populism Studies (ECPS). May 12, 2025. https://doi.org/10.55271/rp0098



This report examines how Turkey has become a paradigmatic case of digital authoritarian convergence through the mechanisms of learning, emulation, and cooperative interdependence. Drawing on Chinese and Russian models—and facilitated by Western and Chinese tech companies—Turkey has adopted sophisticated digital control strategies across legal, surveillance, and information domains. The study identifies how strategic partnerships, infrastructure agreements (e.g., Huawei’s 5G and smart city projects), and shared authoritarian logics have enabled the Erdoğan regime to suppress dissent and reshape the digital public sphere. Through legal reforms, deep packet inspection (DPI) technologies, and coordinated digital propaganda, Turkey exemplifies how authoritarian digital governance diffuses globally. The findings highlight an urgent need for international accountability, cyber norms, and ethical tech governance to contain the expanding influence of digital repression.

By Ihsan Yilmaz, Ali Mamouri*, Nicholas Morieson & Muhammad Omer**

Executive Summary

This research explores the diffusion of digital authoritarian practices in Turkey, a prominent case in the Muslim world, focusing on the three mechanisms of learning, emulation, and cooperative interdependence across four main domains: legal frameworks, internet censorship, urban surveillance, and Strategic Digital Information Operations (SDIOs). The study covers both internal and external diffusion, drawing on a wide range of sources. These include domestic precedents, the examples set by authoritarian regimes like China and Russia, and the role of Western companies in spreading digital authoritarian practices.

The key findings are detailed below:

Learning: Turkey, like other countries in the region that have experienced public unrest, has learned from past experience how to use various digital capabilities to impose power and control over its population. Countries like China and Russia played significant roles in this learning process across the region, including in Turkey. The research highlights the importance of both internal learning from past protest movements and external influences from state and non-state actors.

Emulation: Authoritarian regimes in Turkey and across the Muslim world have emulated China’s and Russia’s internet governance models in all four aforementioned domains. The Turkish government has developed its own surveillance and censorship techniques, influenced by the experiences of authoritarian states and bolstered by training and technology transfers from China, Russia, and certain Western companies.

Cooperative interdependence: Turkey’s economic challenges have led it to forge closer ties with China, particularly through the Belt and Road Initiative (BRI). This cooperation often comes with financial incentives, promoting the adoption of China’s digital governance practices, including urban surveillance systems and censorship technologies.

Role of private technology companies: Western companies have played a significant role in facilitating the spread of digital authoritarianism, often operating independently of their governments’ policies. Companies like Sandvine and NSO Group have provided tools that support the Turkish government’s digital control strategies, contributing to a complex landscape of censorship and surveillance.

Diffusion of SDIOs: The diffusion of digital authoritarian practice is not limited to importing and using digital technologies. It also includes the spread of legal frameworks that restrict digital freedom and the running of Strategic Digital Information Operations (SDIOs), including state propaganda and conspiracy theories in which China and Russia have played a significant role.

Based on these findings, the study proposes several recommendations to counteract the spread of digital authoritarian practices:

– Strengthening international cyber norms and regulations to define and regulate digital governance, particularly in countries with strong ties to the West.

– Enhancing support for digital rights and privacy protections by advocating for comprehensive laws and supporting civil society organizations in Turkey.

– Encouraging responsible corporate behavior among technology firms to ensure compliance with human rights standards.

– Fostering regional and global cooperation on digital freedom to counter digital authoritarianism through joint initiatives and technical assistance.

– Leveraging economic incentives to promote ethical technology use and partnerships with human rights-aligned providers.

– Using strategic diplomatic channels to encourage Turkey to adopt responsible surveillance practices and align with global digital governance norms.

The research illustrates the dynamics of digital authoritarianism in Turkey, revealing a complex interplay of emulation, learning, and economic incentives that facilitate the spread of censorship and surveillance practices. The findings underscore the need for international cooperation and proactive measures to safeguard digital freedoms in an increasingly authoritarian digital landscape.

Photo: Hannu Viitanen.

Introduction

Research suggests that a significant number of countries in the Muslim world, specifically those in the Middle East, are often characterized by authoritarian governance (Durac & Cavatorta, 2022; Yenigun, 2021; Stepan et al., 2018; Yilmaz, 2021; 2025). The rise of the internet and social media during the late 2000s provided immense capacities to civil society and individual activists in the Muslim world. This development erupted into political action in the late 2000s and early 2010s, as seen in the Gezi protests in Turkey, the Green Movement in Iran, and the Arab Spring protests across the Arab world (Iosifidis & Wheeler, 2015; Demirhan, 2014; Lynch, 2011; Gheytanchi, 2016).

The fact that protesters in all these cases made extensive use of the internet and associated technologies (e.g., social media, digital messaging, and navigation) led many observers to declare these ‘liberation technology’ for their role in facilitating anti-government movements across non-democratic countries (Diamond & Plattner, 2012; Ziccardi, 2012). Advocates of the internet as a liberation tool have also pointed to its enhanced capacity for mobilization and organization through the spread of dramatic videos and images, its ability to instigate attitudinal change, and its countering of the government monopoly over the production and dissemination of information (Breuer, 2012; Ruijgrok, 2017). These qualities have been seen as giving the internet an equalizing power between state and society. In the early 2000s, when the internet and social media were spreading across the developing world, authoritarian governments were generally unable to control the digital sphere; lacking the technical expertise and the digital infrastructure to curb the internet, they typically relied on shutting it down completely (Cattle, 2015; Gunitsky, 2020).

However, authoritarian regimes gradually learned how to use the digital space to tighten their control over society and have even started using it for transnational repression and sharp power (Yilmaz, 2025; Yilmaz et al., 2024; Yilmaz, Akbarzadeh & Bashirov, 2023; Yilmaz, Morieson & Shakil, 2025; Yilmaz & Shakil, 2024). Scholars such as Sunstein (2009) and Negroponte (1996) have warned of the internet’s capacity to fragment the public sphere into separate echo chambers and thus fundamentally impede ‘deliberative democracy,’ which is supposed to rest on the debate of ideas and the exchange of views.

Furthermore, breakthroughs in deep learning, neural networks, and machine learning, together with the widespread use of the internet, have accelerated the growth of artificial intelligence (AI), giving authoritarian regimes still greater capacity to impose control on their populations. In a Pew poll, almost half of respondents believed that the ‘use of [modern] technology will mostly weaken core aspects of democracy and democratic representation in the next decade’ (Anderson & Rainie, 2020). This pessimism is driven by the unprecedented degree of surveillance and digital control enabled by digital technologies, undermining the notions of freedom, individuality, autonomy, and rationality at the center of deliberative democracy (Radavoi, 2019; Stone et al., 2016; Bostrom, 2014; Helbing et al., 2019; Damnjanović, 2015). Governments’ tools for digitally repressing democracy include smart surveillance using facial recognition applications, targeted censorship, disinformation and misinformation campaigns, and cyber-attacks and hacking (Feldstein, 2019).

Research on how digital technologies such as high-speed internet, social media, AI, and big data affect, enable, or disable democracy, human rights, freedom, and electoral processes is in its infancy (Gardels & Berggruen, 2019; Margetts, 2013; Papacharissi, 2009). Moreover, most of this scant literature focuses on Western democracies, while the existing literature on Muslim-majority countries is mostly concerned with traditional social media (Jenzen et al., 2021; Wheeler, 2017; Tusa, 2013). This is despite the fact that extensive digital capabilities, especially AI and big data, give the governments of these countries the means to exert control over their citizens, with disastrous consequences for democracy. Indeed, we may be facing the rise of a new type of authoritarian rule: digital authoritarianism, that is, ‘the use of digital information technology by authoritarian regimes to surveil, repress, and manipulate domestic and foreign populations’ (Polyakova & Meserole, 2019; see also Ahmed et al., 2024; Akbarzadeh et al., 2024; 2025).

With the expansion of the internet in developing countries, authoritarian governments derive a similar benefit from technological leapfrogging: the capacity to selectively implement new surveillance and control mechanisms from the burgeoning supply of market-ready, advanced AI- and big-data-enabled applications. As one internet pioneer foreshadowed to Pew, “by 2030, as much as 75% of the world’s population will be enslaved by AI-based surveillance systems developed in China and exported around the world” (Anderson & Rainie, 2020). Developing countries often experience technological leapfrogging: they shift directly to advanced technologies, skipping the intermediate, more expensive and less efficient stages, because modern technologies have, by the time of their implementation in those countries, become more economical and effective than the initial technology. This leapfrogging is illustrated by the adoption lifecycle of mobile phones compared to that of landlines: it took less than 17 years, from the early 2000s to 2017, for mobile phone adoption in Turkey to rise from 25% to 96% (Our World in Data, 2021).

After the crises of the early 2010s, both democratic and authoritarian regimes worldwide started to invest heavily in sophisticated equipment and expertise to monitor, analyze, and ultimately crack down on online and offline dissent (Aziz & Beydoun, 2020; Feldstein, 2021). In addition to curtailing independent speech and activism online, authoritarian regimes have sought to deceive and manipulate digital environments in order to shape their citizens’ views. They have flooded the digital realm with propaganda narratives using trolls, bots, and influencers under their control (Tan, 2020). 

More importantly, thanks to authoritarian diffusion, governments in developing countries are learning from and emulating the surveillance-technology experience of peers such as China and Russia. However, there has been limited research on the political mechanisms through which such digital authoritarian practices spread. Against this backdrop, this report examines the mechanisms through which digital authoritarian practices diffuse in Turkey, as an example of authoritarian regimes in the Middle East. We ask: What kind of authoritarian practices have governments enacted in the digital realm? How have these practices diffused across the region? To address these questions systematically, we develop an analytical framework that examines the mechanisms of diffusion of digital authoritarian practices. Our framework identifies three mechanisms of diffusion: emulation, learning, and cooperative interdependence. We focus on four groups of digital authoritarian practices: legal frameworks, internet censorship, urban surveillance, and Strategic Digital Information Operations (SDIOs). We aim to show how emulation, learning, and cooperative interdependence take place in each of these four practices. In addition, the report explores the international dimension of this phenomenon, showing how Western companies, alongside authoritarian systems like Russia and China, played a role in empowering the Turkish government to claim the digital space.

We first discuss our analytical framework, which integrates the scholarship on digital authoritarian practices and authoritarian diffusion, and explain the concepts of learning, emulation, and cooperative interdependence as prominent diffusion mechanisms. We then move to the empirical section, where we first identify convergent outcomes that are comparable between earlier and later adopters, and then elucidate the mechanisms through which the diffusion process occurred by showing contact points and plausible channels through which decision-makers were able to adopt from one another.

Analytical Framework

To explore the phenomenon of diffusion, we follow best practices laid out in the literature (see Ambrosio, 2010; Ambrosio & Tolstrup, 2019; Bank & Weyland, 2020). We begin by identifying convergent outcomes that are comparable between earlier and later adopters. As part of this, we will also establish feasible connections between the two parties, which may take the form of physical proximity, trade linkages, membership in international organizations, bilateral arrangements, historical ties, cultural similarities, or shared language. Then, we will elucidate the mechanisms through which the diffusion process occurred by identifying contact points and plausible channels through which decision-makers were able to adopt from one another. 

We follow three good practices that have been advised by scholars (e.g., Ambrosio & Tolstrup, 2019; Strang & Soule, 1998; Gilardi, 2010; 2012). First, we adopt a comparative design that involves four middle powers (see Strang & Soule, 1998). There are important similarities and differences among the four cases that make comparison a useful exercise. Second, we provide extensive data to showcase the workings of diffusion mechanisms despite the challenge of working in authoritarian settings. As Ambrosio and Tolstrup (2019: 2752) noted, “the relevant evidence needed can be hard to acquire in authoritarian settings.” It is much more likely that strong evidence will be accessible in liberal democratic settings, where much of the current diffusion research has accumulated. Our report contributes to the literature on diffusion in authoritarian settings with Turkey as a prominent example. Finally, we provide smoking-gun evidence based on several leaked documents to support our assertions.

In the empirical section, we follow the convention (see Ambrosio & Tolstrup, 2019) and start with identifying convergent outcomes among the major political actors in regard to the practices of restrictive legal frameworks, Internet censorship, urban surveillance and SDIOs. This section involves demonstrating the items that have been diffused between earlier and later adopters. Not only is there a substantial amount of similarity between the practices among these political systems, but also, we show a temporal sequence between earlier and later adopters that point at convergence. 

We then move on to explain plausible mechanisms of diffusion, following the model provided by Bashirov et al. (2025): learning, emulation, and cooperative interdependence. It is important to highlight from the outset that these three mechanisms functioned together in the Turkish setting. As observed in other settings (see Sharman, 2008), it is not feasible to examine the impact of these mechanisms independently. Rather than existing as separate entities or operating in a simple additive manner, these mechanisms are inherently interconnected and overlapping. We follow this understanding in our empirical analysis and discuss how each mechanism worked in tandem with the others.

Types of Digital Authoritarianism

Illustration: Shutterstock / Skorzewiak.

We identified four main domains of digital authoritarianism, examples of each of which can be found in Turkey’s case.

Restrictive Legal Frameworks

The legal framework includes a variety of practices. We identified the following:

1- Laws that mandate internet service providers to establish systems allowing real-time monitoring and recording of traffic on their networks (Privacy International, 2019). Moreover, all censorship laws invoke national security and terrorism as vague criteria for enforcing widespread censorship of undesirable content. In Turkey, a presidential decree (No. 671) in 2016 granted the government extensive power to restrict internet access, block websites, and censor media (IHD, 2017). Under the decree, telecommunications companies are required to comply with any government order within two hours of receiving it. In recent years, the Turkish government has also prosecuted thousands of people for criticizing President Erdoğan or his government in print or on social media (Freedom House, 2021).

2- Laws that have converged around the penalization of online speech, invoking concepts such as national identity, culture, and defamation. It is hard to miss the similarities between the laws enacted in Turkey and other regional countries and those enacted earlier in China. In 2013, China’s Supreme People’s Court issued a legal interpretation that expanded the scope of the crime of defamation to include information shared on the internet (Human Rights Watch, 2013). In 2022, the Turkish Parliament passed new legislation that criminalized “disseminating false information,” punishable by one to three years in prison, and increased government control over online news websites. Article 23 of the law was particularly controversial, stating that “Any person who publicly disseminates untrue information concerning the internal and external security, public order and public health of the country with the sole intention of creating anxiety, fear or panic among the public, and in a manner likely to disturb public peace, shall be sentenced to imprisonment from one year to three years” (Human Rights Watch, 2022). This clearly shows the pattern of diffusion from China and Russia: vague and broad provisions of what constitutes “national security,” “peace,” and “order” (Weber, 2021: 170-171; Yilmaz, Caman & Bashirov, 2020; Yilmaz, Shipoli & Demir, 2023; Yilmaz & Shipoli, 2022).

3- Laws that ban or restrict the use of VPNs, following China and Russia’s lead. In Turkey, VPNs are legal, but many of their servers and websites are blocked. China banned unauthorized VPN use in 2017 under a new Cybersecurity Law; Russia introduced a similar ban the same year. The Information and Communication Technologies Authority (BTK), Turkey’s national telecommunications regulatory and inspection authority, issued a blocking order targeting 16 Virtual Private Networks (VPNs). These VPNs, including TunnelBear, Proton, and Psiphon, are popular tools for audiences seeking to access news websites critical of the government.

While entirely banning VPN access remains a challenge, governments can employ Deep Packet Inspection (DPI) technology to identify and throttle VPN traffic; countries like Iran, China, and Russia engage in such practices. Users in Iran and Turkey, for example, have reported extensive blocking of VPN apps and websites since 2021, and attempting to access blocked content through a VPN can potentially result in imprisonment (Danao & Venz, 2023). Simon Migliano, head of research at Top10VPN.com, notes that blocking VPN websites in Turkey makes it harder to download and sign up for new services. Individual VPN providers such as Hide.me, SecureVPN, and Surfshark confirm technical difficulties for their users in Turkey; Proton, on the other hand, maintains that its services have not been completely blocked.
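The DPI-based VPN throttling described above rests on protocol fingerprinting: a middlebox reads the first bytes of each packet’s payload and matches them against known handshake patterns. The Python sketch below is a simplified illustration of the idea, not any vendor’s actual implementation; the byte signatures are rough approximations of the OpenVPN and WireGuard handshake formats, and real systems combine many additional heuristics (ports, packet sizes, timing, entropy).

```python
# Illustrative sketch: fingerprinting VPN protocols from the first
# bytes of a UDP payload so that matching flows can be throttled.
# Signatures are simplified approximations for illustration only.

# OpenVPN/UDP handshakes carry an opcode in the top 5 bits of byte 0;
# with key_id 0, a client hard-reset appears as 0x38 (v2) or 0x50 (v3).
OPENVPN_RESET_BYTES = (0x38, 0x50)

def looks_like_openvpn(payload: bytes) -> bool:
    return len(payload) >= 14 and payload[0] in OPENVPN_RESET_BYTES

def looks_like_wireguard(payload: bytes) -> bool:
    # WireGuard messages start with a 1-byte type (1-4) followed by
    # three reserved zero bytes.
    return (len(payload) >= 4
            and payload[0] in (1, 2, 3, 4)
            and payload[1:4] == b"\x00\x00\x00")

def classify(payload: bytes) -> str:
    """Return the action a throttling middlebox might take."""
    if looks_like_openvpn(payload) or looks_like_wireguard(payload):
        return "throttle"  # suspected VPN: degrade or drop the flow
    return "forward"
```

This also suggests why the arms race continues: obfuscation layers re-wrap VPN traffic so that its first bytes no longer match any known signature, which helps explain why some providers remain partially reachable while others are blocked.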

The report “Freedom on the Net 2023” by Freedom House (2023) reflects this harsh reality, ranking Turkey as “not free” in terms of internet access and freedom of expression. However, the Turkish government’s censorship efforts are met with a determined citizenry. Audiences, even young schoolchildren according to Ozturan (2023), have become adept at using VPNs to access banned content, and media outlets themselves sometimes promote VPNs to help their audiences bypass restrictions. Examples abound: VOA Turkish and Deutsche Welle (DW), upon being blocked, directed their audiences towards Psiphon, Proton, and nthLink to access their broadcasts, and Diken, a prominent news website, even maintains a dedicated “VPN News” section offering access to censored content dating back to 2014.

4- Laws that tighten control over social media companies. While Western social media platforms remain accessible in Turkey, in recent years the government has introduced laws and regulations that increase its grip over the content shared on these platforms, threatening social media companies with bandwidth restrictions and outright bans if they fail to comply with government requests. In 2020, the Turkish Parliament passed a new law requiring tech giants such as Facebook and Twitter (now X) to appoint representatives in Turkey to handle complaints related to content on their platforms. Companies that decline to assign an official representative have been subject to fines, advertising prohibitions, and bandwidth restrictions that would render their networks unusable due to slow internet speeds. Facebook complied with the law in 2021 and assigned a legal entity in Turkey after refusing to do so the previous year (Bilginsoy, 2021).

Since the early 2010s, many countries in the region, including Turkey, have enacted a series of legal reforms that converged around similar concepts and restrictions. As Table 1 shows, these laws follow the Chinese and Russian laws in temporal order. The table also compares some other countries in the region, in order to situate Turkey’s position in this field.

Internet Shutdown

All governments in the region have resorted to shutting down the internet as a blunt solution over the past 20 years, mostly during mass protests, social unrest, or military operations. In Turkey in 2015, access to Facebook, Twitter, and YouTube, as well as 166 other websites, was blocked when an image of a Turkish prosecutor held at gunpoint circulated online. The internet was also cut off multiple times during the July 15, 2016 coup attempt, as well as during the Turkish military’s operations in the country’s southeastern regions. In many instances, the government has used bandwidth throttling to deny its citizens access to the internet. However, an internet shutdown is costly, as it affects the delivery of essential public and private services, a problem dubbed the “dictator’s digital dilemma.” Therefore, even when practiced, a shutdown is limited to a certain location, mostly a city or a region, and typically lasts only a few days. According to Access Now (2022), an internet rights organization, no internet shutdown took place in Turkey in 2021.

Given the high cost of switching off the internet, and thanks to the rise of sophisticated technologies to filter, manipulate, and redirect internet content, censorship has become a more widely used digital authoritarian practice over the last decade. Countries have converged on the use of DPI technology. DPI is “a type of data processing that looks in detail at the contents of the data being sent, and re-routes it accordingly” (Geere, 2012). DPI inspects the data being sent over a network and may take various actions, such as logging the content and alerting, as well as blocking or re-routing the traffic. DPI allows comprehensive network analysis. While it can be used for innocuous purposes, such as checking content for viruses and ensuring the correct delivery of content, it can also be used for digital eavesdropping, internet censorship, and even stealing sensitive information (Bendrath & Mueller, 2011).
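As a concrete illustration of the log/block/re-route decision in the quoted definition, a DPI engine can be thought of as a rule table applied to packet payloads rather than just headers. The Python sketch below is a minimal, hypothetical model; the rule patterns are invented placeholders, not real censored terms or any vendor’s rule set.

```python
# Minimal model of a DPI decision step: unlike header-only filtering
# (by IP address or port), the payload content itself is inspected
# before an action is chosen. Patterns are invented placeholders.
from dataclasses import dataclass

@dataclass
class Verdict:
    action: str  # "allow", "log", "block", or "reroute"
    reason: str = ""

BLOCK_PATTERNS = [b"blocked-site.example"]  # hypothetical censored host
LOG_PATTERNS = [b"keyword-of-interest"]     # hypothetical watchlist term

def inspect(payload: bytes) -> Verdict:
    """Scan a payload against block rules first, then watch rules."""
    for pattern in BLOCK_PATTERNS:
        if pattern in payload:
            return Verdict("block", "payload matched block rule")
    for pattern in LOG_PATTERNS:
        if pattern in payload:
            return Verdict("log", "payload matched watch rule")
    return Verdict("allow")
```

Note that such content matching only works when the payload is readable; when traffic is encrypted with HTTPS, the content is opaque to the middlebox, which is why DPI-based censorship is most effective against unencrypted HTTP traffic.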

Countries across the Muslim world, including Turkey, began in the mid-2010s to acquire DPI technology from Western and Chinese companies, which have become important sources of diffusion. The US-Canadian company Sandvine/Procera has provided DPI surveillance equipment to national networks operating in Turkey (Turk Telekom). This system operates over connections between an internet site and the target user and allows the government to tamper with data sent over an unencrypted connection (HTTP rather than HTTPS). Sandvine and its parent company, Francisco Partners, emerged at the center of the diffusion of DPI technology in the Middle East. Recent revelations show that the company has played a significant role in facilitating the spread of ideas between countries; through its information campaigns, Sandvine contributed to governments’ learning. Sandvine’s and Netsweeper’s prominent engagement in the provision of spying technology shows that it is not merely Chinese companies that enable digital authoritarianism: Western companies have been just as active.

Turkey made its first purchase from Sandvine (then Procera) in 2014, after the Gezi protests and corruption investigations rocked the AKP government the previous year. The government later used these devices to block websites, including Wikipedia and those belonging to unwanted entities such as independent news outlets and certain opposition groups. Governments in the region, including Turkey’s, have also acquired widespread spying and phishing capabilities, sourced mostly from Western companies. In Turkey, for example, FinFisher’s FinSpy spyware was deployed in 2017 through a website disguised as the campaign website of a Turkish opposition movement, enabling the surveillance of political activists and journalists. FinSpy allowed the MIT to locate people, monitor phone calls and chats, and access mobile phone and computer data (ECCHR, 2023). The episode again underscores the role of private companies as key actors in the emulation process (Marczak et al., 2018).

Urban Surveillance

Three high-definition video surveillance cameras operated by the city police. Photo: Dreamstime.

With advances in CCTV and AI technology, urban surveillance capabilities have grown exponentially over the past ten years. Dubbed “safe” or “smart” cities, these urban surveillance projects are “mainly concerned with automating the policing of society using video cameras and other digital technologies to monitor and diagnose ‘suspicious behavior’” (Kynge et al., 2021). The concept of the smart city captures an entire range of ICT capabilities implemented in an urban area. This might start with the simple goal of bringing internet connectivity and providing electronic payment solutions for basic services, and evolve into AI-controlled surveillance systems, as seen in many Chinese cities (Zeng, 2020). Smart cities deploy a host of ICT, including high-speed communication networks, sensors, and mobile phone apps, to boost mobility and connectivity, supercharge the digital economy, increase energy efficiency, improve the delivery of services, and generally raise the level of their residents’ welfare (Hong, 2022). The “smart” concept generally involves gathering large amounts of data to enhance various city functions. This can include optimizing the use of utilities and other services, reducing traffic congestion and pollution, and ultimately empowering both public authorities and residents.

The rapid development of smart city infrastructures across the world has led to controversy, as critics argue that the surveillance technology enables pervasive collection, retention, and misuse of personal data by everyone from law enforcement agencies to private companies. Moreover, in recent years China has been a major promoter of the “safe city” concept, which focuses on surveillance-driven policing of urban environments, a practice that has been perfected in most Chinese cities (Triolo, 2020). Several Chinese companies have been at the forefront of China’s effort to export its safe city model: Huawei, ZTE Corporation, Hangzhou Hikvision Digital Technology, Zhejiang Dahua Technology, Alibaba, and Tiandy (Yan, 2019).

China has been a significant exporter of surveillance technology worldwide, including to countries like Turkey. Chinese firms such as Hikvision and Dahua have supplied surveillance equipment, including facial recognition systems, to various nations. Reports indicate that Turkey has utilized facial recognition software to monitor and identify individuals during protests (Radu, 2019; Bozkurt, 2021). 

Overall, the global expansion of China’s urban surveillance model raises significant concerns, particularly regarding its potential to intensify authoritarian practices in adopting countries. In the absence of robust counter-mechanisms, the adoption of the Chinese surveillance model by authoritarian states is only likely to grow.

Strategic Digital Information Operations (SDIOs)

Another notable feature of authoritarian regimes is the use of digital technologies to create and spread pro-regime propaganda and conspiracy narratives that benefit the regimes. This happens extensively across the region, including in Turkey, as part of the manipulation of the public in order to impose control and silence the opposition. The pro-regime propaganda machine uses conspiracy theories in a dual strategy, defensive and offensive, to shape public perception of the regime. Defensively, it seeks to portray the regime as a legitimate national authority, emphasizing its adherence to the nation’s interests and well-being in such a way that no legitimate alternative is imaginable. In these narratives, leaders are portrayed as heroic figures with exceptional qualities, and the system is presented as flawless and well-suited to the country’s needs. On the offensive front, the propaganda machine works to discredit any alternative to the current regime: opposition figures are assassinated, arrested, or labeled as traitors, criminals, or foreign agents so that they can be eliminated politically. To this end, conspiracy theories link opposition figures to nefarious plots or foreign intervention, thus undermining the credibility of opposition narratives.

In recent years, propaganda and conspiracy theories have played a significant role in Turkey’s political landscape, influencing political narratives and public opinion. The Turkish government, particularly under President Erdoğan and his ruling party (AKP), has been known for using state-controlled or pro-government media to push certain narratives. The government’s media strategy includes promoting nationalistic themes, highlighting Turkey’s achievements under AKP rule, and portraying the government as the protector of national interests against both internal and external threats. The government often emphasizes Turkey’s sovereignty and positions itself against perceived Western interference, such as criticism from the European Union or the United States. By doing so, it strengthens a nationalist image, resonating with citizens who view Turkey as being unfairly targeted by foreign powers. Propaganda often incorporates Islamic and conservative values to appeal to the AKP’s core voter base. Erdoğan’s speeches and media outlets supportive of the government emphasize the defense of Islamic culture and values, framing the AKP as a protector of both religion and national identity. Government narratives frequently depict opposition groups as threats to national stability. This includes not only political rivals but also groups like the Kurdish population, the Gülen movement (which the Erdoğan regime accuses of being behind the 2016 coup attempt), and the pro-Kurdish HDP party, which are often associated with terrorism or disloyalty.

Additionally, conspiracy theories have been pervasive in Turkish political culture, often used to explain domestic unrest or justify political decisions. Here, pro-government media often propagate conspiracies about the opposition, portraying them as aligned with foreign powers or terrorist organizations. A persistent theme in Turkish political discourse is the idea that foreign powers or global financial institutions are working to undermine Turkey’s economy and political stability. Moreover, the failed coup attempt in July 2016 became a fertile ground for conspiracy theories. While the Turkish government attributed the coup attempt to Fethullah Gülen, a cleric who lived in exile in the United States for decades until his death, alternative theories continue to circulate. Some claim that foreign powers, particularly the US, were involved in the coup plot, while others suggest that elements within the Turkish government may have allowed the coup to proceed as a means to justify a subsequent crackdown on opposition. In the same vein, many conspiracy theories center around the idea that Western powers, particularly the US and Europe, are conspiring against Turkey to prevent it from becoming a major regional power. These theories often cite Turkey’s geopolitical location, its military interventions in the region, or its aspirations to become an independent economic powerhouse.

A significant portion of the mainstream media in Turkey is either directly controlled by the government or aligned with it. These outlets often echo government narratives, downplaying criticism and emphasizing government achievements or conspiracy-laden stories about the opposition and foreign interference. Despite the dominance of pro-government media, social media platforms have become spaces for both opposition and pro-government voices. The government has sought to control these platforms through legal means, introducing laws to regulate social media and threatening to block access to platforms that do not comply with government requests to remove content.

Mechanisms of Diffusion

We observe that the diffusion of digital authoritarianism occurs through three main mechanisms: learning, emulation, and cooperative interdependence.

Learning

It has been widely argued that countries across the globe have learned from domestic and foreign experience in adopting various forms of digital authoritarian practice. This is most prominent in countries that have experienced public unrest, such as Turkey and Egypt, both of which drew lessons from the Gezi Park and Tahrir Square protests, respectively. Despite many indications to this effect, for a long time there was no smoking-gun evidence of this type of learning. In 2016, a series of leaked emails from the account of Berat Albayrak, Erdoğan’s son-in-law and then Energy Minister, revealed that in the aftermath of the Gezi Park protests the Erdoğan regime identified its lack of control over digital space as a problem and sought solutions in the form of “set[ting] up a team of professional graphic designers, coders, and former army officials who had received training in psychological warfare” (Akis, 2022). In later years, according to Norton Symantec, the regime built one of the world’s most extensive social media surveillance networks, particularly on X.

With regard to external learning, China (and Chinese companies) and Western private companies have been at the forefront of actors promoting internet censorship practices. China has been not only a major promoter but also a source of learning for middle powers when it comes to internet surveillance, data fusion, and AI. The Shanghai Cooperation Organization (SCO) has become a key vehicle driving these efforts. For example, during the 2021 SCO summit, Chinese officials led a panel titled the Thousand Cities Strategic Algorithms, which trained an international audience that included many developing-country representatives on building a “national data brain” that integrates various forms of financial and personal data and uses artificial intelligence to analyze them. The SCO website reported that 50 countries are engaged in discussions with the Thousand Cities Strategic Algorithms initiative (Ryan-Mosley, 2022). China has also been active in providing media and government training programs to representatives from BRI-affiliated countries. In one prominent example, the Chinese Ministry of Public Security instructed Meiya Pico, a Chinese cybersecurity company, to train government representatives from Turkey, Pakistan, Egypt, and other countries in digital forensics (see Weber, 2019: 9-11). 

Moreover, the spread of internet censorship and surveillance technologies points to a highly probable learning process facilitated by Western corporate entities. Specifically, Sandvine, NSO Group, and their parent company Francisco Partners have been at the center of the diffusion of DPI technology in most Middle Eastern countries, except Iran, where the company is not allowed to operate. Recent revelations show that the company has played a significant role in facilitating the spread of ideas between countries. Alexander Haväng, Sandvine’s former Chief Technical Officer, explained in an internal newsletter addressed to the company’s employees that its technology can appeal to governments whose surveillance capacities are hampered by encryption. Haväng wrote that Sandvine’s equipment could “show who’s talking to who, for how long, and we can try to discover online anonymous identities who’ve uploaded incriminating content online” (Gallagher, 2022). 
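To make Haväng’s point concrete, the following toy Python sketch illustrates the general idea behind flow-metadata analysis: even when every payload is encrypted and unreadable, traffic records still reveal who is talking to whom, and for how long. This is an illustrative model only; the data structures and function names are hypothetical and do not represent Sandvine’s or any vendor’s actual product.

```python
from collections import defaultdict

def summarize_flows(packets):
    """Aggregate packet records into per-endpoint-pair flow summaries.

    Each packet is (src, dst, timestamp, encrypted_payload). The payload is
    never inspected: encrypted traffic still leaks endpoints, timing, and
    volume, which is precisely what makes metadata appealing to censors.
    """
    flows = defaultdict(lambda: {"packets": 0, "first": None, "last": None})
    for src, dst, ts, _payload in packets:
        flow = flows[(src, dst)]
        flow["packets"] += 1
        flow["first"] = ts if flow["first"] is None else min(flow["first"], ts)
        flow["last"] = ts if flow["last"] is None else max(flow["last"], ts)
    # Report "who talked to whom, and for how long"
    return {
        pair: {"packets": f["packets"], "duration": f["last"] - f["first"]}
        for pair, f in flows.items()
    }

# Hypothetical capture: payloads are opaque TLS bytes, yet metadata remains visible.
packets = [
    ("10.0.0.5", "198.51.100.7", 0.0, b"\x17\x03..."),
    ("10.0.0.5", "198.51.100.7", 4.5, b"\x17\x03..."),
    ("10.0.0.9", "203.0.113.2", 1.0, b"\x17\x03..."),
]
summary = summarize_flows(packets)
```

The sketch shows why encryption alone does not defeat this class of surveillance: the analysis never needs plaintext, only the envelope of each packet.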

The spread of DPI practices in general, and of Sandvine’s technology in particular, is also evidenced by the chronology of acquisitions by developing countries. The list of countries contracted to buy Sandvine’s DPI technology includes Turkey, Algeria, Afghanistan, Azerbaijan, Egypt, Eritrea, Jordan, Kuwait, Pakistan, Qatar, Russia, Sudan, Thailand, the United Arab Emirates, and Uzbekistan (Gallagher, 2022). There is a clear trend here, both in terms of regime susceptibility and in the chronology of adoption: Turkey purchased Sandvine’s DPI technology in 2014, Egypt in 2016, and Pakistan in 2018 (Malsin, 2018; Ali & Jahangir, 2019). 

It is highly likely that later adopters of this technology reviewed its performance in early-adopter countries before deciding on their own purchases. We know from previous research that private companies can “influence the spread of state policies by encouraging the exchange of substantive and procedural information between states” (Garrett & Jansa, 2015: 391). Governments need to understand the details of a technology, and the institutional mechanisms relevant to it, in order to use it effectively; corporations facilitate communication about these details. The extensive links between Sandvine and authoritarian regimes, the similarities in how the technology has been used, and the sheer prominence of the company and its product together make a plausible case for diffusion.

Using the practice framework, we focus on the ‘configurations of actors’ involved in enabling authoritarianism (Glasius & Michaelsen, 2018). In most instances, these actors are not states but private companies (see Table 2). Moreover, contrary to the perceived active role of Chinese companies, it was Western tech companies that, with the prime exception of Iran, provided most of the high-tech surveillance and censorship capabilities to authoritarian regimes in the Muslim world, including Turkey. These included, inter alia, the US-Canadian company Sandvine, Israel’s NSO Group, Germany’s FinFisher, and Finland’s Nokia Networks. 

Emulation

There is evidence that authoritarian countries in the region, such as Turkey, have emulated major powers, as well as each other, when it comes to internet censorship practices. Among other things, homophily has played an important role, as actors prefer to emulate models from reference groups with whom they share similar cultural or social attributes (Elkins & Simmons, 2005). Political alignment and proximity among nations foster communication and the exchange of information (Rogers, 2010). We observe the influence of this dynamic between China and Russia, on the one hand, and political regimes in the Muslim world that are susceptible to authoritarian forms of governance to varying degrees, on the other.

Research has noted that states tend to harmonize their policy approaches to align with the prevailing norms of the contemporary global community, irrespective of whether these specific policies or institutional frameworks suit local conditions or provide effective solutions. Notably, since most transfers originate from the core to the periphery, policy transfers to developing regions may be ill-suited and consequently ineffective. There is evidence that the adoption of city surveillance is driven by the desire for conformity rather than the search for effective solutions. China’s CCTV-based smart city solutions are considered “bold innovations” in the region and have attracted disproportionate attention from developing countries across the world. However, there is evidence that countries adopt this technology because of its apparent promise rather than demonstrated success. For example, there has been controversy about whether Huawei’s safe city infrastructure actually helps reduce urban crime. In a dubious 2019 presentation, Huawei claimed that its safe city systems had been highly effective in reducing crime, increasing the case clearance rate, reducing emergency response times, and increasing citizen satisfaction. However, research by CSIS revealed that these numbers were grossly exaggerated, if not completely fabricated (Hillman & McCalpin, 2019).

Emulation and learning appear to be the major mechanisms through which such practices spread. First, by demonstrating the effectiveness of disinformation campaigns and propaganda – such as Russian interference in the 2016 US presidential election and China’s propaganda around the Covid-19 pandemic – these countries have shown other regimes that similar tactics can be used to control their own populations and advance their interests (Jones, 2022). Second, China and Russia have acted as important sources of learning for authoritarian regimes. China has hosted thousands of foreign officials and members of the media from BRI countries in various training programs on media and information management since 2017 (Freedom House, 2022). For example, in 2017, China’s Cyberspace Administration held cyberspace management seminars for officials from BRI countries. The Chinese data-mining company iiMedia presented its media management platform, advertised as offering comprehensive control of public opinion, including early warnings of “negative” public opinion and help in guiding the promotion of “positive energy” online (Laskai, 2019). 

Governments in the Muslim world have learned how to use social media and other digital technologies for ‘flooding,’ which helps strengthen and legitimize their political regimes. This is part of a broader objective of shaping the information environment domestically and internationally (Mir et al., 2022). At home, these governments attempt to mold their citizens’ conduct online. They have hired social media consultants and influencers to disseminate their propaganda, and they have learned to flood the information space with propaganda narratives using troll farms and bots. For example, in Turkey, the AKP government created a massive troll army in response to the 2013 Gezi protests. A 2016 study published by the cybersecurity company Norton Symantec showed that, among countries in Europe, the Middle East, and Africa, Turkey had the most bot accounts on Twitter (Akis, 2022). In 2020, Twitter announced that it was suspending 7,340 fake accounts that had shared over 37 million tweets, attributing the network to the youth wing of the ruling AKP. 
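As an illustration of how researchers distinguish coordinated bot accounts from organic ones, the following toy Python heuristic combines signals commonly cited in the bot-detection literature: posting rate, account age, and the share of near-duplicate posts. The signals, thresholds, and weights are illustrative assumptions for this sketch, not the methodology of the Norton Symantec study or of Twitter’s takedown.

```python
def bot_score(account):
    """Toy bot-likelihood score in [0, 1].

    Combines three widely cited signals. All thresholds and weights here
    are hypothetical, chosen only to illustrate the approach.
    """
    score = 0.0
    if account["tweets_per_day"] > 100:      # inhumanly high posting rate
        score += 0.4
    if account["account_age_days"] < 30:     # freshly created account
        score += 0.3
    if account["duplicate_ratio"] > 0.5:     # mostly near-identical content
        score += 0.3
    return score

# Hypothetical examples: a coordinated amplifier vs. an ordinary user.
suspect = {"tweets_per_day": 250, "account_age_days": 10, "duplicate_ratio": 0.8}
organic = {"tweets_per_day": 3, "account_age_days": 2000, "duplicate_ratio": 0.05}
```

Real detection systems (e.g., academic tools such as Botometer) use far richer features and supervised models, but the underlying logic of aggregating behavioral signals into a score is the same.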

Through the aforementioned techniques, Turkey moved beyond strategies of “negative control” of the internet, in which the government attempts to block, censor, and suppress the flow of communication, toward strategies of proactive co-optation in which social media serves regime objectives. The opposite of internet freedom, therefore, is not necessarily internet censorship but a deceptive blend of control, co-optation, and manipulation. Seeding public debate with such disinformation makes it hard for a government’s opponents to convince their supporters and to mobilize (Gunitsky, 2020).

Here, the practices appear to be a mixed bag of diffusion, convergence, and even innovation on the part of some regional countries. There is some evidence of learning by the Turkish regime: Berat Albayrak’s emails reveal the government’s learning from the Gezi protests and the intentional establishment of its own troll farms (Akis, 2022). Similarly, the Sisi regime learned from the Arab Spring protests. While it is hard to find smoking-gun evidence of these regimes copying the Russian or Chinese playbook, the extensive links among some of these countries (such as Pakistan and Turkey), and between some of them and Russia or China (Turkey and Russia; China and Pakistan/Iran), provide some evidence of diffusion.

Cooperative Interdependence

Nested dolls depicting authoritarian and populist leaders Vladimir Putin, Donald Trump, and Recep Tayyip Erdogan displayed among souvenirs in Moscow on July 7, 2018. Photo: Shutterstock.

We have observed that cooperative interdependence has been at play in the diffusion of internet censorship practices from China to developing countries. Countries like Turkey face serious economic challenges and are in dire need of foreign direct investment. When tracing China’s technology transfer to these countries, a common thread emerges that ties most Chinese engagement to various forms of aid, trade negotiations, or grants. Most prominently, China uses its Digital Silk Road (DSR) concept, under the banner of the Belt and Road Initiative (BRI), to push for the adoption of its technological infrastructure and the accompanying policies of surveillance and censorship in digital and urban environments (Hillman, 2021). For example, at the 2017 World Internet Conference in China, representatives from Turkey, Egypt, Saudi Arabia, and the UAE signed a “Proposal for International Cooperation on the ‘One Belt, One Road’ Digital Economy,” an agreement to construct the DSR to improve digital connectivity and e-commerce cooperation (Laskai, 2019). The core components of the DSR initiative are smart (or “safe”) cities, internet infrastructure, and mobile networks.

We do not argue that China is “forcing” these countries to adopt internet censorship practices. Rather, cooperative interdependence works by changing the incentive structures of BRI-connected states: Chinese financial incentives, coupled with technology transfer, also promote China’s practical approach to managing cyberspace. Indeed, BRI’s digital dimension includes many projects, such as 5G networks, smart city projects, fiber-optic cables, data centers, satellites, and the devices that connect to these systems. In addition to their commercial value in expanding China’s information technology business, these far-reaching technologies have strategic benefits, helping the country achieve geoeconomic and geopolitical objectives that involve the promotion of digital authoritarian practices and the Chinese model of internet governance (Malena, 2021; Tang, 2020). 

For example, Huawei’s growing influence in Turkey and other regional countries such as Iran, Egypt, and Pakistan, particularly in the context of building their 5G infrastructure, is tied to these countries’ involvement in DSR projects. As mentioned above, all of these countries have signed agreements to cooperate with Huawei on building their 5G infrastructure. 5G is not merely an advanced technology but also a vehicle for promoting an entire legal and institutional infrastructure for China. In 2017, the Standardization Administration of China (SAC) released the “BRI Connectivity and Standards Action Plan 2018-2020,” which aims to promote Chinese technical standards and improve related policies among BRI-recipient states across technologies including AI, 5G, and satellite navigation systems (Malena, 2021).

Cooperative interdependence, in the form of loans, commercial diplomacy, and other state initiatives, is a prominent mechanism through which China spreads its urban surveillance practices. Table 2 also demonstrates this process.

In the Muslim world, countries have converged on importing China’s smart city platforms in recent years. Close collaboration between Chinese technology companies and authoritarian governments has led to the development of smart city infrastructure in multiple urban settings. Several Chinese companies have been at the forefront of this endeavor: Huawei, Hikvision, ZTE Corporation, Alibaba, Dahua Technology, and Tiandy (Yan, 2019). Huawei is a key source of the diffusion of urban surveillance practices.

Huawei has established partnerships with major Turkish telecom companies, Turkcell and Vodafone TR, to implement smart city technologies in Samsun and Istanbul, respectively (KOTRA, 2021). Additionally, Turkey hosts one of Huawei’s 19 global research and development centers. In 2020, Turkcell became the first telecom operator outside China to adopt Huawei’s mobile app infrastructure, a system developed by Huawei in response to US sanctions that limited the use of certain Google software on Huawei devices. This infrastructure, known as Huawei Mobile Services (HMS), encompasses a suite of applications, cloud services, and an app store, which Huawei describes as “a collection of apps, services, device integrations, and cloud capabilities supporting its ecosystem” (Huawei, 2022). In 2022, Turk Telekom signed a contract with Huawei to build Turkey’s complete 5G network (Hurriyet, 2022).

Countries have also emulated China as a role model in urban surveillance practices. Indeed, China’s influence is highly discernible in this area, where it has emerged as both a role model and a key provider of high-tech tools (Germanò et al., 2023). To begin with, there are extensive political and economic linkages between the sender (mostly China) and adopter countries. These include China’s growing presence in regional economies, participation in China-dominated organizations such as the Shanghai Cooperation Organization (SCO), and cooperation with China on internet governance issues, such as joint statements by several countries at the UN. Moreover, China has long acted as a laboratory in which to observe the results of its unique blend of high-tech authoritarianism, combining extensive urban surveillance with control of the internet under the pretext of national security and sovereignty (see Mueller, 2020). The perceived success of Chinese officials in curbing crime, ensuring stability, and managing urban settings efficiently, including their draconian measures to control the spread of COVID-19, has elevated China into a role model emulated by many authoritarian countries, including those in the Muslim world (Barker, 2021).

The table below demonstrates China’s role in the diffusion of digital authoritarianism in the region including Turkey:

Conclusion

This research illustrates how Turkey’s adoption of digital authoritarian practices—encompassing restrictive legal frameworks, internet censorship, urban surveillance, and strategic digital information operations—has been propelled by a combination of learning from domestic unrest, emulating paradigms set by major authoritarian players like China and Russia, and capitalizing on cooperative interdependence forged through economic and strategic partnerships. Despite Turkey’s NATO membership and other Western affiliations, the government has selectively borrowed from authoritarian models, integrating advanced surveillance technologies and normative frameworks that restrict civic freedoms in the digital realm. In this ecosystem, private Western companies, operating with limited oversight, have facilitated the supply of censorship and surveillance tools, challenging conventional expectations that illiberal digital governance is primarily state-driven.

These findings highlight the urgent need to establish robust international cyber norms and regulations that delineate clear boundaries on digital governance, particularly in states with deep ties to the West. Multilateral fora, including the United Nations and the Council of Europe, can take the lead by defining the scope of “digital authoritarianism,” instituting transparent guidelines on surveillance exports, and ensuring that technology providers are held accountable for the potential misuse of their products. Greater emphasis on privacy protections and digital rights is equally critical, calling for comprehensive legislation within Turkey that shields citizens from unwarranted data collection. Support from the international community—through funding, awareness campaigns, and legal assistance—can empower local civil society groups to advocate for these rights, educate citizens on online privacy, and hold authorities to account.

A second imperative is responsible corporate behavior, where companies must be compelled—via legal and reputational mechanisms—to adhere to human rights standards and disclose how their technologies are deployed in countries like Turkey. Establishing an independent monitoring entity to track repressive digital practices, publicize violations, and elevate them to international organizations can reinforce such accountability. Equally important, regional and global cooperation on digital freedom can help counter Turkey’s authoritarian trajectory; governments committed to open societies should launch joint initiatives aimed at improving cybersecurity, combating disinformation, and expanding transparent governance models that respect human rights. Technical assistance and knowledge-sharing will be particularly valuable where Turkey’s domestic institutions seek alternatives to purely repressive tools.

Moreover, economic incentives can be used strategically to steer Turkey away from partnerships that reinforce authoritarian tendencies. By prioritizing trade relationships and development aid tied to ethical technology practices, major economic powers and international financial institutions can encourage Turkey to align more closely with suppliers committed to democratic values. Such an approach has the added benefit of opening the market to innovators developing privacy-enhancing products, thus providing viable alternatives to invasive surveillance systems. Finally, the use of strategic diplomatic channels remains a powerful lever. Dialogue within NATO, discussions at the European Union level, and broader diplomatic engagements allow Turkey’s partners to advocate for transparent, responsible digital practices. Joint resolutions or multilateral condemnations of authoritarian behaviors can further raise the political costs of continued repression.

Taken together, these initiatives underscore that countering digital authoritarianism in Turkey requires a proactive, holistic strategy. While local factors—such as domestic protest movements and longstanding elite interests—play a crucial role, the role of international actors and private corporations is equally significant. Each dimension, whether it be legal reform, corporate accountability, economic leverage, or diplomatic pressure, offers a piece of the puzzle. Coordinated action that weaves these elements into a cohesive approach is essential not only for Turkey but for the broader effort to preserve the open, rights-respecting nature of the global digital landscape. By challenging the unchecked diffusion of repressive technologies and policies, the international community can mitigate the risks posed by an ever-expanding authoritarian playbook and ensure that the internet remains a domain of freedom and democratic possibility.

Funding: This work was supported by Australian Research Council [Grant Number DP230100257]; Gerda Henkel Foundation [Grant Number AZ 01/TG/21]; Australian Research Council [Grant Number DP220100829].

Authors

Ihsan Yilmaz is Deputy Director (Research Development) of the Alfred Deakin Institute for Citizenship and Globalisation (ADI) at Deakin University, where he also serves as Chair in Islamic Studies and Research Professor of Political Science and International Relations. He previously held academic positions at the Universities of Oxford and London and has a strong track record of leading multi-site international research projects. His work at Deakin has been supported by major funding bodies, including the Australian Research Council (ARC), the Department of Veterans’ Affairs, the Victorian Government, and the Gerda Henkel Foundation.

(*) Ali Mamouri is a scholar and journalist specializing in political philosophy and theology. He is currently a Research Fellow at the Alfred Deakin Institute for Citizenship and Globalisation at Deakin University. With an academic background, Dr. Mamouri has held teaching positions at the University of Sydney, the University of Tehran, and Al-Mustansiriyah University, as well as other institutions in Iran and Iraq. He has also taught at the Qom and Najaf religious seminaries. From 2020 to 2022, he served as a Strategic Communications Advisor to the Iraqi Prime Minister, providing expertise on regional political dynamics. Dr. Mamouri also has an extensive career in journalism. From 2016 to 2023, he was the editor of Iraq Pulse at Al-Monitor, covering key political and religious developments in the Middle East. His work has been featured in BBC, ABC, The Conversation, Al-Monitor, and Al-Iraqia State Media, among other leading media platforms. As a respected policy analyst, his notable works include “The Dueling Ayatollahs: Khamenei, Sistani, and the Fight for the Soul of Shiite Islam” (Al-Monitor) and “Shia Leadership After Sistani” (Washington Institute). Beyond academia and journalism, Dr. Mamouri provides consultation to public and private organizations on Middle Eastern affairs. He has published several works in Arabic and Farsi, including a book on the political philosophy of Muhammad Baqir Al-Sadr and research on political Salafism. Additionally, he has contributed to The Great Islamic Encyclopedia and other major Islamic encyclopedias.

Nicholas Morieson is a Research Fellow at the Alfred Deakin Institute for Citizenship and Globalisation, Deakin University. He was previously a Lecturer at the Australian Catholic University in Melbourne. His research interests include populism, religious nationalism, civilizational politics, intergroup relations, and the intersection of religion and political identity.

(**) Muhammad Omer is a PhD student in political science at Deakin University. His PhD examines the causes, ideological foundations, and discursive construction of multiple populisms in a single polity (Pakistan). His other research interests include transnational Islam, religious extremism, and vernacular security. He previously completed his bachelor’s in politics and history at the University of East Anglia, UK, and his master’s in political science at the Vrije Universiteit Amsterdam. 

References

Access Now. (2022). “Internet Shutdowns in 2022.” Report. Access Now. https://www.accessnow.org/wp-content/uploads/2023/05/2022-KIO-Report-final.pdf

Ahmed, Zahid Shahab; Yilmaz, Ihsan; Akbarzadeh, Shahram & Bashirov, Galib. (2023). “Digital Authoritarianism and Activism for Digital Rights in Pakistan.” European Center for Populism Studies (ECPS). July 20, 2023. https://doi.org/10.55271/rp0042

Akbarzadeh, Shahram, Amin Naeni, Ihsan Yilmaz, and Galib Bashirov. (2024). “Cyber Surveillance and Digital Authoritarianism in Iran.” Global Policy, March 14, 2024. https://www.globalpolicyjournal.com/blog/14/03/2024/cyber-surveillance-and-digital-authoritarianism-iran

Akbarzadeh, S., Mamouri, A., Bashirov, G., & Yilmaz, I. (2025). “Social media, conspiracy theories, and authoritarianism: between bread and geopolitics in Egypt.” Journal of Information Technology & Politics, 1–14. https://doi.org/10.1080/19331681.2025.2474000

Akis, Fazil. (2022). “Turkey’s Troll Networks.” Heinrich Böll Stiftung. https://eu.boell.org/en/2022/03/21/turkeys-troll-networks

Ali, Umer and Ramsha Jahangir. (2019). “Pakistan Moves to Install Nationwide ‘Web Monitoring System.’” Coda Story. https://www.codastory.com/authoritarian-tech/surveillance/pakistan-nationwide-web-monitoring/

Ambrosio, Thomas, and Jakob Tolstrup. (2019). “How Do We Tell Authoritarian Diffusion from Illusion? Exploring Methodological Issues of Qualitative Research on Authoritarian Diffusion.” Quality & Quantity 53(6): 2741-2763.

Ambrosio, Thomas. (2010). “Constructing a Framework of Authoritarian Diffusion: Concepts, Dynamics, and Future Research.” International Studies Perspectives 11(4): 375-392.

Anderson, Janna & Rainie, Lee. (2020). “Many Tech Experts Say Digital Disruption Will Hurt Democracy.” Pew. https://www.pewresearch.org/internet/2020/02/21/many-tech-experts-say-digital-disruption-will-hurt-democracy/

Aziz, Sahar F. & Beydoun, Khalid A. (2020). “Fear of black and brown internet: policing online activism.” Boston University Law Review, 100(3), 1151-1192.

Bank, André, and Kurt Weyland, eds. (2020). Authoritarian Diffusion and Cooperation: Interests vs. Ideology. Routledge.

Barker, Tyson. (2021). “Withstanding the Storm: The Digital Silk Road, Covid-19, and Europe’s Options.” China after Covid-19 Economic Revival and Challenges to the World, Institute for International Political Studies and Italian Ministry of Foreign Affairs and International Cooperation.

Bashirov, G., S. Akbarzadeh, I. Yilmaz, and Z. Ahmed. (2025). “Diffusion of Digital Authoritarian Practices in China’s Neighbourhood: The Cases of Iran and Pakistan,” Democratization, DOI: 10.1080/13510347.2025.2504588

Bendrath, Ralf, and Milton Mueller. (2011). “The End of the Net as We Know It? Deep Packet Inspection and Internet Governance.” New Media & Society 13(7): 1142-1160.

Bilginsoy, Zeynep. (2021). “Facebook Bows to Turkish Demand to Name Local Representative.” AP Newshttps://apnews.com/article/turkey-media-social-media-6f2b1567e0e7f02e983a98f9dc795265

Bostrom, Nick. (2014). Superintelligence: Paths, Dangers, Strategies. New York: Oxford University Press.

Bozkurt, Abdullah. (2021). “Turkey uses facial recognition to spy on millions, secretly investigates unsuspecting citizens.” Nordic Monitor. September 20, 2021. https://nordicmonitor.com/2021/09/turkey-uses-facial-recognition-to-spy-on-millions-secretly-investigates-unsuspecting-citizens/  

Breuer, Anita. (2012). “The Role of Social Media in Mobilizing Political Protest: Evidence from the Tunisian Revolution.” German Development Institute Discussion Paper 10: 1860-0441.

Cattle, Amy E. (2015). “Digital Tahrir Square: An Analysis of Human Rights and the Internet Examined through the Lens of the Egyptian Arab Spring.” Duke J. Comp. & Int’l L. 26: 417.

Damnjanović, Ivana. (2015). “Polity without Politics? Artificial Intelligence versus Democracy: Lessons from Neal Asher’s Polity Universe.” Bulletin of Science, Technology & Society, 35(3-4), 76-83.

Danao, Monique, and Sophie Venz. (2023). “Are VPNs Legal? The Worldwide Guide.” Forbes. https://www.forbes.com/advisor/au/business/software/are-vpns-legal/

Demirhan, Kamil. (2014). “Social Media Effects on the Gezi Park Movement in Turkey: Politics Under Hashtags.” In Pătruţ B., Pătruţ M. (eds) Social Media in Politics. Public Administration and Information Technology. New York: Springer.

Diamond, Larry, and Marc F. Plattner, eds. (2012). Liberation Technology: Social Media and the Struggle for Democracy. JHU Press.

Durac, Vincent, and Francesco Cavatorta. (2022). Politics and Governance in the Middle East. Bloomsbury Publishing.

ECCHR (European Center for Constitutional and Human Rights). (2023). https://www.ecchr.eu/en/

Feldstein, Steven. (2019). The Global Expansion of AI Surveillance. Washington, DC: Carnegie Endowment for International Peace. https://carnegieendowment.org/2019/09/17/global-expansion-of-ai-surveillance-pub-79847. Accessed February 20, 2021.

Feldstein, Steven. (2021). The Rise of Digital Repression: How Technology Is Reshaping Power, Politics, and Resistance. Oxford University Press.

Freedom House. (2021). Freedom on the Net 2020 Report. https://freedomhouse.org/sites/default/files/2020-10/10122020_FOTN2020_Complete_Report_FINAL.pdf. Accessed March 1, 2021.

Freedom House. (2022). “Freedom in the World.” https://freedomhouse.org/report/freedom-world

Freedom House. (2023). “Freedom on the Net 2023.” https://freedomhouse.org/sites/default/files/2023-10/Freedom-on-the-net-2023-DigitalBooklet.pdf

Gallagher, Ryan. (2022). “Sandvine Pulls Back From Russia as US, EU Tighten Control on Technology It Sells.” Bloomberg. https://www.bloomberg.com/news/articles/2022-06-03/sandvine-pulls-back-from-russia-as-us-eu-tighten-control-on-technology-it-sells

Gardels, Nathan & Berggruen, Nicolas. (2019). Renovating Democracy: Governing in the Age of Globalization and Digital Capitalism. Berkeley: University of California Press.

Garrett, Kristin N., and Joshua M. Jansa. (2015). “Interest Group Influence in Policy Diffusion Networks.” State Politics & Policy Quarterly. 15(3): 387-417.

Geere, Duncan. (2012). “How Deep Packet Inspection Works.” Wired. https://www.wired.co.uk/article/how-deep-packet-inspection-works

Germanò, Marco André, Ava Liu, Jacob Skebba, and Bulelani Jili. (2023). “Digital Surveillance Trends and Chinese Influence in Light of the COVID-19 Pandemic.” Asian Journal of Comparative Law, 1-25.

Gheytanchi, Elham. (2016). “Iran’s Green Movement, social media, and the exposure of human rights violations.” In: M. Monshipouri (Ed.), Information Politics, Protests, and Human Rights in the Digital Age (pp. 177-195). Cambridge: Cambridge University Press. 

Gilardi, Fabrizio. (2010). “Who learns from what in policy diffusion processes?” American Journal of Political Science. 54(3): 650-666.

Gilardi, Fabrizio. (2012). “Transnational Diffusion: Norms, Ideas, and Policies.” Handbook of International Relations 2: 453-477.

Gunitsky, Seva. (2020). “The Great Online Convergence: Digital Authoritarianism Comes to Democracies.” War on the Rocks. February 18, 2020. https://warontherocks.com/2020/02/the-great-online-convergence-digital-authoritarianism-comes-to-democracies/.

Helbing, Dirk, et al. (2019). “Will Democracy Survive Big Data and Artificial Intelligence?” Scientific American. https://www.scientificamerican.com/article/will-democracy-survive-big-data-and-artificial-intelligence/

Hillman, Jonathan E. (2021). The Digital Silk Road: China’s Quest to Wire the World and Win the Future. Profile Books. 

Hillman, Jonathan E., and Maesea McCalpin. (2019). “Watching Huawei’s ‘Safe Cities’.” Center for Strategic and International Studies (CSIS).

Hong, Caylee. (2022). “‘Safe Cities’ in Pakistan: Knowledge Infrastructures, Urban Planning, and the Security State.” Antipode 54(5): 1476-1496.

Human Rights Watch. (2013). “China: Draconian Legal Interpretation Threatens Online Freedom.” https://www.hrw.org/news/2013/09/13/china-draconian-legal-interpretation-threatens-online-freedom

Human Rights Watch. (2022). “Turkey: Dangerous, Dystopian New Legal Amendments.” https://www.hrw.org/news/2022/10/14/turkey-dangerous-dystopian-new-legal-amendments

IHD. (2017). “Human Rights Violations of Turkey in 2016: De Facto Authoritarian Presidential System.” https://ihd.org.tr/en/2016-human-rights-violations-of-turkey-in-figures/

Iosifidis, Petros, and Mark Wheeler. (2015). “The Public Sphere and Network Democracy: Social Movements and Political Change?” Global Media Journal 13(25): 1-17.

Jenzen, Olu, et al. (2021). “The symbol of social media in contemporary protest: Twitter and the Gezi Park movement.” Convergence 27(2), 414-437.

Korea Trade-Investment Promotion Agency (KOTRA). (2021). “Insights into Smart City Market in Turkey.” https://www.novusens.com/s/2462/i/KOTRA_Report_V33_ToC_fixed_after_Event.pdf

Kynge, James, Valerie Hopkins, Helen Warrell, and Kathrin Hille. (2021). “Exporting Chinese Surveillance: the Security Risks of ‘Smart Cities’.” Financial Times. https://www.ft.com/content/76fdac7c-7076-47a4-bcb0-7e75af0aadab

Laskai, Lorand. (2019). How China Is Supplying Surveillance Technology and Training Around the World. Privacy International.

Lynch, Marc. (2011). “After Egypt: The Limits and Promise of Online Challenges to the Authoritarian Arab State.” Perspectives on Politics. 9(2): 301-310.

Malena, Jorge. (2021). “The extension of the digital silk road to Latin America: Advantages and potential risks.” Brazilian Center for International Relations.

Malsin, Jared. (2018). “Throughout Middle East, the Web Is Being Walled Off.” Wall Street Journal. https://www.wsj.com/articles/throughout-middle-east-the-web-is-being-walled-off-1531915200

Marczak, Bill, Jakub Dalek, Sarah McKune, Adam Senft, John Scott-Railton, and Ron Deibert. (2018). “Bad Traffic: Sandvine’s PacketLogic Devices Used to Deploy Government Spyware in Turkey and Redirect Egyptian Users to Affiliate Ads?” The Citizen Lab. https://citizenlab.ca/2018/03/bad-traffic-sandvines-packetlogic-devices-deploy-government-spyware-turkey-syria/

Margetts, H. (2013). “The Internet and Democracy.” In: The Oxford Handbook of Internet Studies. Edited by W. H. Dutton, New York: Oxford University Press.

Michaelsen, Marcus. (2018). “Transforming Threats to Power: The International Politics of Authoritarian Internet Control in Iran.” International Journal of Communication. 12: 3856-3876.

Mir, Asfandyar, Tamar Mitts and Paul Staniland. (2022). “Political Coalitions and Social Media: Evidence from Pakistan.” Perspectives on Politics, 1-20.

Mueller, Milton L. (2020). “Against Sovereignty in Cyberspace.” International Studies Review 22(4): 779-801.

Negroponte, Nicholas. (1996). Being digital. New York: Vintage Books.

Our World in Data. (2021). “Fixed telephone subscriptions, 1960 to 2017.” https://ourworldindata.org/grapher/fixed-telephone-subscriptions-per-100-people?tab=chart&country=IRN~TUR~PAK~EGY

Ozturan, Gurkan. (2023). “Freedom on the Net 2023 Turkey Country Report.” Freedom House. https://www.academia.edu/108543121/Freedom_on_the_Net_2023_Turkey_Country_Report_Freedom_House

Polyakova, Alina, and Chris Meserole. (2019). “Exporting digital authoritarianism: The Russian and Chinese models.” Policy Brief, Democracy and Disorder Series. Washington, DC: Brookings, 1-22.

Privacy International. (2019). “State of Privacy Egypt.” https://privacyinternational.org/state-privacy/1001/state-privacy-egypt.

Radavoi, Ciprian N. (2019). “The Impact of Artificial Intelligence on Freedom, Rationality, Rule of Law and Democracy: Should We Not Be Debating It?” Texas Journal on Civil Liberties & Civil Rights 25, 107.

Radu, Sintia. (2019). “How China and Russia Spread Surveillance.” U.S. News & World Report. September 20, 2019. https://www.usnews.com/news/best-countries/articles/2019-09-20/china-russia-spreading-surveillance-methods-around-the-world

Ruijgrok, Kris. (2017). “From the Web to the Streets: Internet and Protests Under Authoritarian Regimes.” Democratization. 24(3): 498-520.

Ryan-Mosley, Tate. (2022). “The world is moving closer to a new cold war fought with authoritarian tech.” MIT Technology Review. https://www.technologyreview.com/2022/09/22/1059823/cold-war-authoritarian-tech-china-iran-sco/

Sharman, Jason C. (2008). “Power and Discourse in Policy Diffusion: Anti-money Laundering in Developing States.” International Studies Quarterly. 52 (3): 635-656.

Stepan, Alfred, ed. (2018). Democratic Transition in the Muslim World: A Global Perspective (Vol. 35). Columbia University Press.

Stone, Peter, et al. (2016). One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel. Stanford, CA: Stanford University. http://ai100.stanford.edu/2016-report

Strang, David, and Sarah A. Soule. (1998). “Diffusion in Organizations and Social Movements: From Hybrid Corn to Poison Pills.” Annual Review of Sociology. 24(1): 265-290.

Sunstein, C. R. (2009). Republic.com 2.0. Princeton: Princeton University Press.

Tan, Netina. (2020). “Digital Learning and Extending Electoral Authoritarianism in Singapore.” Democratization 27(6): 1073-1091.

Tang, Min. (2020). “Huawei Versus the United States? The Geopolitics of Exterritorial Internet Infrastructure.” International Journal of Communication 14, 22.

Triolo, Paul. (2020). “The Digital Silk Road: Expanding China’s Digital Footprint.” Eurasia Group. https://www.eurasiagroup.net/files/upload/Digital-Silk-Road-Expanding-China-Digital-Footprint.pdf.

Tusa, Felix. (2013). “How Social Media Can Shape a Protest Movement: The Cases of Egypt in 2011 and Iran in 2009.” Arab Media and Society 17, 1-19.

Weber, Valentin. (2019). “The Worldwide Web of Chinese and Russian Information Controls.” Center for Technology and Global Affairs, University of Oxford.

Weber, Valentin. (2021). “The diffusion of cyber norms: technospheres, sovereignty, and power.” PhD diss., University of Oxford.

Wheeler, Deborah. (2017). Digital Resistance in the Middle East: New Media Activism in Everyday Life. Edinburgh: Edinburgh University Press. 

Yan, Yau Tsz. (2019). “Smart Cities or Surveillance? Huawei in Central Asia.” The Diplomat. https://thediplomat.com/2019/08/smart-cities-or-surveillance-huawei-in-central-asia.

Yenigun, Halil Ibrahim. (2021). “Turkey as a Model of Muslim Authoritarianism?” In: Routledge Handbook of Illiberalism (840-857). Routledge.

Yilmaz, I., Caman, M. E., & Bashirov, G. (2020). “How an Islamist party managed to legitimate its authoritarianization in the eyes of the secularist opposition: the case of Turkey.” Democratization 27(2), 265–282. https://doi.org/10.1080/13510347.2019.1679772

Yilmaz, I. (2021). Creating the Desired Citizen: Ideology, State and Islam in Turkey. Cambridge University Press.

Yilmaz, I. (2025). Intergroup emotions and competitive victimhoods: Turkey’s ethnic, religious and political emigrant groups in Australia. Palgrave Macmillan Singapore. 

Yilmaz, I. & Shipoli, E. (2022). “Use of past collective traumas, fear and conspiracy theories for securitization of the opposition and authoritarianisation: the Turkish case.” Democratization. 29(2), 320-336.

Yilmaz, I., Shipoli, E., & Demir, M. (2023). Securitization and authoritarianism: The AKP’s oppression of dissident groups in Turkey. Palgrave Macmillan Singapore. 

Yilmaz, Ihsan; Akbarzadeh, Shahram & Bashirov, Galib. (2023). “Strategic Digital Information Operations (SDIOs).” Populism & Politics (P&P). European Center for Populism Studies (ECPS). September 10, 2023. https://doi.org/10.55271/pp0024

Yilmaz, I., Akbarzadeh, S., Abbasov, N., & Bashirov, G. (2024). “The Double-Edged Sword: Political Engagement on Social Media and Its Impact on Democracy Support in Authoritarian Regimes.” Political Research Quarterly, 0(0). https://doi.org/10.1177/10659129241305035

Yilmaz, I. and K. Shakil. (2025). Reception of Soft and Sharp Powers: Turkey’s Civilisationist Populist TV Dramas in Pakistan. Singapore: Palgrave Macmillan.

Yilmaz, I., Morieson, N., & Shakil, K. (2025). “Authoritarian diffusion and sharp power through TV dramas: resonance of Turkey’s ‘Resurrection: Ertuğrul’ in Pakistan.” Contemporary Politics, 1–21. https://doi.org/10.1080/13569775.2024.2447138

Ziccardi, Giovanni. (2012). Resistance, Liberation Technology and Human Rights in the Digital Age. Vol. 7. Springer Science & Business Media.

Authoritarian Diffusion in the Cyberspace: How Egypt Learns, Emulates, and Cooperates in Digital Authoritarianism

Please cite as:
Yilmaz, Ihsan; Mamouri, Ali; Akbarzadeh, Shahram & Omer, Muhammad. (2025). “Authoritarian Diffusion in the Cyberspace: How Egypt Learns, Emulates, and Cooperates in Digital Authoritarianism.” European Center for Populism Studies (ECPS). May 9, 2025. https://doi.org/10.55271/rp0097



Egypt has emerged as a key adopter and regional diffuser of digital authoritarian practices. Once limited by weak digital infrastructure, the Sisi regime has transformed the country into a technologically repressive state through sweeping legal reforms, censorship mechanisms, and expansive surveillance networks. Drawing heavily from the models of China and Russia—particularly in urban monitoring and information control—Egypt actively emulates their approaches. Crucially, both Chinese and Western technology firms have facilitated this transformation, revealing a broader pattern of global complicity. This report demonstrates how Egypt’s trajectory illustrates the transnational diffusion of digital authoritarianism through mechanisms of learning, emulation, and interdependence—and offers a stark warning to democracies about the rising threat of state-enabled digital repression.

By Ihsan Yilmaz, Ali Mamouri*, Shahram Akbarzadeh**, Muhammad Omer***

Executive Summary

This report examines the rise and entrenchment of digital authoritarianism in Egypt, spotlighting how the regime systematically reclaims and militarizes the digital space to suppress dissent and erode democratic freedoms. Digital authoritarianism in Egypt spans four key domains: restrictive legal frameworks, internet censorship, urban surveillance, and strategic digital information operations (SDIOs).

Drawing on a wide array of sources—including academic literature, human rights reports, institutional data, and credible news coverage—the report demonstrates how the Egyptian government has aggressively expanded its control over digital life. This control includes deep surveillance tactics, the criminalization of online expression, and state-sponsored manipulation of digital discourse, all contributing to the shrinking of civic space and the violation of fundamental rights to privacy and free speech.

The regime employs advanced tools such as Deep Packet Inspection (DPI), widespread website blocking, and targeted internet shutdowns to neutralize opposition. These repressive tactics are reinforced by an expansive legal arsenal that frames digital expression as a threat to national security—penalizing dissent, limiting VPN use, and compelling tech companies to align with government mandates.

At the urban level, AI-driven CCTV networks and Smart City initiatives—often developed in partnership with Chinese and Western firms—create a pervasive surveillance infrastructure, enabling real-time monitoring of public behaviour. Meanwhile, through coordinated SDIO campaigns, the regime floods social media and state-aligned platforms with pro-government narratives, systematically silencing alternative viewpoints. These operations blend defensive strategies (legitimizing the regime and quelling criticism) with offensive disinformation that delegitimizes opposition groups.

The diffusion of these practices is not solely domestically engineered. Egypt’s digital authoritarian model is transnational in character, built through mechanisms of learning, emulation, and technological dependence. China has emerged as a central enabler, exporting both surveillance infrastructure and governance models. Yet, Western corporations—including Sandvine, NSO Group, FinFisher, and Nokia Networks—have also contributed significantly, supplying critical technologies that bolster Egypt’s repressive digital architecture, often with little regard for ethical implications.

Egypt’s model of digital control illustrates a dangerous global trend: the normalization and globalization of digital authoritarianism, where regimes exploit emerging technologies and international complicity to entrench power, silence dissent, and undermine democratic norms.

Recommendations

To effectively counter the growing threat of digital authoritarianism in Egypt and beyond, a comprehensive, multi-pronged strategy must be adopted. The following recommendations highlight key interventions to safeguard digital freedoms, enhance democratic resilience, and hold both states and corporations accountable:

1. Strengthen International Cyber Norms and Regulatory Frameworks: Establish binding international standards and protocols to govern the use of digital technologies by states. These norms must explicitly prohibit mass surveillance, politically motivated internet shutdowns, and the deployment of spyware against civilians. Multilateral organizations—such as the United Nations, the European Union, and regional bodies—must play a central role in enforcing these norms through treaties, sanctions, and export control regimes that restrict the transfer of surveillance technologies to authoritarian regimes.

2. Defend Digital Rights and Data Privacy at the National and Global Levels: Push for robust data protection legislation that empowers individuals and protects them from arbitrary state surveillance. Promote digital literacy campaigns and citizen awareness programs to strengthen public understanding of online rights and safety. Support grassroots civil society organizations, independent media, and digital rights defenders who expose abuses and advocate for open, secure, and rights-respecting digital environments.

3. Enforce Corporate Accountability and Ethical Tech Governance: Hold technology firms—both domestic and transnational—legally and morally accountable for their role in enabling repression. Establish international watchdog bodies to investigate, name-and-shame, and penalize companies complicit in human rights violations through the export or maintenance of surveillance technologies. Implement mandatory human rights impact assessments for all technology exports to high-risk regimes and enhance supply chain transparency in the tech sector.

4. Promote Strategic International Collaboration to Safeguard Digital Democracy: Strengthen multilateral coalitions of democracies to share intelligence, technological tools, and policy approaches for combating disinformation, propaganda, and transnational repression. Support cross-border investigations into Strategic Digital Information Operations (SDIOs) and develop joint early warning systems to detect digital repression tactics. Extend technical and legal support to countries resisting authoritarian encroachment into their digital spheres.

5. Leverage Economic Incentives to Deter Authoritarian Partnerships: Use trade agreements, investment flows, and development aid as tools to condition engagement with states on the basis of their digital human rights records. Encourage private and public institutions to divest from companies involved in digital repression and prioritize investment in technologies that strengthen democratic institutions, secure communications, and civil society networks.

6. Deploy Diplomatic and Legal Instruments to Challenge Repression: Utilize bilateral and multilateral diplomacy to pressure authoritarian regimes to reform their surveillance laws and practices. Sponsor UN resolutions, global forums, and high-level summits that spotlight digital repression and mobilize international consensus. Support international legal actions against regimes and actors who violate digital human rights, using forums such as the International Court of Justice (ICJ) and regional human rights courts.

7. Build Resilience Through Innovation and Empowerment: Invest in the development of privacy-preserving technologies, secure communication platforms, and censorship circumvention tools. Support the creation of local digital infrastructures that resist surveillance, especially in vulnerable democracies. Back innovation ecosystems that empower civic tech, independent media, and digital rights advocacy to thrive even under authoritarian pressure.

Addressing digital authoritarianism requires more than reactive measures—it demands proactive, coordinated, and sustained global action. The recommendations above provide a roadmap for governments, international institutions, civil society, and the private sector to reclaim the digital domain as a space of freedom, accountability, and democratic possibility.

Introduction

In recent years, scholars have increasingly focused on the diffusion of authoritarianism (Ambrosio, 2010; Bank, 2017), a process where authoritarian institutions, practices, policies, strategies, rhetorical frames, and norms spread from one regime to another (Ambrosio & Tolstrup, 2019). This phenomenon is particularly pronounced in the Middle East and Muslim World, where many countries exhibit authoritarian governance (Durac & Cavatorta, 2022; Yenigun, 2021; Stepan et al., 2018; Ahmed et al., 2023; Akbarzadeh et al., 2024; Yilmaz et al., 2024).

The advent of the internet and social media in the developing world in the late 2000s significantly empowered civil society and individual activists in these regions, creating an equalizing power between the state and society (Breuer, 2012; Ruijgrok, 2017). The extensive use of these technologies by protesters led many to consider them as “liberation technology,” facilitating anti-government movements across non-democratic countries (Diamond & Plattner, 2012; Ziccardi, 2012). 

Initially, authoritarian governments struggled to control the digital sphere due to a lack of technical expertise and digital infrastructure. They often resorted to internet shutdowns, as seen in Egypt during the 2011 Arab Spring protests (Cattle, 2015). However, as digital technologies evolved, so did the capabilities of authoritarian regimes. Thus, despite the internet’s potential as a tool for liberation, its use by authoritarian regimes to disseminate propaganda, conduct surveillance, and control information has given rise to a new form of authoritarianism (Polyakova, 2019). 

This transformation is driven by advancements in artificial intelligence (AI), big data, and the widespread use of the internet, which have enabled unprecedented levels of surveillance and control. As Wael Ghonim, an Egyptian activist, has reminded us: “The Arab Spring revealed social media’s greatest potential, but it also exposed its greatest shortcomings. The same tool that united [people] to topple dictators eventually tore [us] apart through echo-chamber polarization, misinformation, toxic hate speech” (Gardels, 2019).

Such widespread adoption of digital control measures has given rise to a literature on “digital authoritarianism” (Polyakova & Meserole, 2019; Dragu & Lupu, 2021; Khalil, 2020; Lilkov, 2020; Mare, 2020; Feldstein, 2019; Ahmed et al., 2023; Akbarzadeh et al., 2024; Yilmaz et al., 2024). This literature posits that as regimes leverage AI and other digital tools to monitor and control dissent, the need for policymakers and civil society organizations to counter these practices has become critical. Pessimism about modern technology’s potential to undermine democracy is growing, with concerns about misinformation, data collection, surveillance, the spread of conspiracy theories, and the propagation of authoritarian governance models (Radavoi, 2019; Stone et al., 2016; Bostrom, 2014; Helbing et al., 2019; Damnjanović, 2015; Yilmaz et al., 2025; Yilmaz & Shakil, 2025). In a poll conducted by Pew, almost half of participants believed that the “use of [modern] technology will mostly weaken core aspects of democracy and democratic representation in the next decade” (Anderson, 2020).

The extant literature mainly focuses on countries such as China and Russia and on the technology companies that facilitate and promote digital authoritarian practices (Khalil, 2020; Taylor, 2022; Zhang, Alon, & Lattemann, 2020). Moreover, the literature has treated policies, norms, and technological tools in a general manner, analysing authoritarian regimes’ use of tools like filtering and digital surveillance (Hellmeier, 2016; Xu, 2021) or examining policies governing the internet (Kerr, 2018). However, policies, norms, and technologies cannot be separated, as they are usually interlinked among government entities, private companies, and international organizations across global networks (Dragu & Lupu, 2021). Therefore, following Adler and Pouliot’s (2011: 5) observation that practices are “patterned actions that are embedded in particular organized contexts,” this study adopts a more holistic analysis, investigating norms, policies, and technologies employed by governments and non-state entities in an integrated manner. 

This report examines digital authoritarian practices in Egypt (see Akbarzadeh et al., 2025) and the diffusion of these practices by investigating the norms, policies, and technologies employed by the Egyptian government. By diffusion we mean the process that Gilardi (2012: 454) describes as one that “leads to the pattern of adoption, not the fact that at the end of the period, all (or many) countries have adopted the policy.” Digital authoritarianism, in turn, refers to the use of digital technologies by authoritarian regimes to surveil, repress, and manipulate populations (Feldstein, 2021). Diffusion does not necessarily require an absolute convergence of practices; rather, an increase in policy similarity across countries generally follows diffusion processes (Gilardi, 2010; 2012), as we demonstrate here. Egypt, similar to other authoritarian regimes, utilizes digital technology—often sourced from abroad, including from Western countries—such as the internet, social media, and artificial intelligence to maintain control and suppress dissent. 

We aim to understand how these practices spread and what can be done to counter them. Egypt, like other authoritarian regimes, has become adept at using sophisticated digital tools to monitor and control the internet rather than simply shutting it down. Technologies like DPI, “a type of data processing that looks in detail at the contents of the data being sent, and re-routes it accordingly” (Geere, 2012), allow for comprehensive network analysis and can be used for digital eavesdropping, internet censorship, and data theft (Bendrath & Mueller, 2011). This report explores these dynamics in detail, providing a comprehensive analysis of the diffusion of digital authoritarianism in Egypt. 
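For readers unfamiliar with how DPI differs from ordinary IP- or port-based filtering, the toy Python sketch below illustrates the core idea: the middlebox reads the packet payload itself (here, the Host header of a plaintext HTTP request) before deciding whether to forward or block it. The blocklisted domain and both functions are hypothetical illustrations, not taken from any real deployment.

```python
from typing import Optional

# Hypothetical censored domain, for illustration only.
BLOCKLIST = {"blocked-news.example"}

def extract_http_host(payload: bytes) -> Optional[str]:
    """Return the Host header of a plaintext HTTP request, if present."""
    for line in payload.split(b"\r\n"):
        if line.lower().startswith(b"host:"):
            return line.split(b":", 1)[1].strip().decode("ascii", "replace")
    return None

def dpi_verdict(payload: bytes) -> str:
    """'block' if the payload addresses a blocklisted host, else 'forward'."""
    host = extract_http_host(payload)
    return "block" if host in BLOCKLIST else "forward"

request = b"GET /news HTTP/1.1\r\nHost: blocked-news.example\r\n\r\n"
print(dpi_verdict(request))  # -> block
```

Real DPI appliances, such as the PacketLogic devices documented by Citizen Lab, apply this kind of payload inspection to live traffic at line rate, and increasingly to encrypted traffic via metadata such as the TLS SNI field.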

Data Analysis of the Digital Space in Egypt

Egypt, with a total population of 116 million as of mid-2024 and a GDP of USD 476.7 billion as of 2022 (Worldometer, 2024), is considered one of the most important countries in the Middle East and has wide influence in the Arab world. It was among the first countries to witness the Arab Spring movement and to go through dramatic changes in its political system. The internet played a significant role in this period and also in the aftermath of the military’s coup in 2013. The table below shows the rise of internet usage in Egypt. 

The brief political openings of the late 2000s and early 2010s were fuelled by the internet and social media’s empowerment of social mobilization, and by the authoritarian regimes’ inability to control the digital sphere, as they lacked the technical expertise and digital infrastructure to rein in the internet (Cattle, 2015). However, as internet use rose in Egypt, so did the government’s efforts to control the digital space and expand surveillance of the population. Freedom House has reported a significant rise in government control over the digital space in Egypt: its index shows that, on average, internet freedom in the country has declined by about 40%.

Freedom House’s Freedom in the World index shows that Egypt has experienced declines in freedom of expression and belief, associational and organizational rights, the rule of law, and personal autonomy and individual rights (Freedom House, 2022). As a result, Egypt scored 26 on a scale of 0 (least free) to 100 (most free) in 2020 (Freedom House, 2021).

Tracing the pattern of digital authoritarian practice around the world indicates that China and Russia play a leading role, setting an effective example for authoritarian regimes in the Middle East, including Egypt, to follow. The table in Figure 4 shows how Egypt followed the pathway of Chinese and Russian legislation in imposing digital authoritarianism.

The diffusion of digital authoritarian practice in Egypt is not limited to China. Many Western companies have also supplied the Egyptian government with the technologies needed to impose control over the digital space. The table in Figure 5 provides details about the sources of the technologies used in Egypt. 

Digital Authoritarian Strategies, Policies, and Practices

In this section, we explore a variety of strategies and policies the Egyptian government has adopted to impose a digital authoritarian regime in the country. The Egyptian government worked on four domains: restrictive legal frameworks, internet censorship, urban surveillance, and SDIOs. By leveraging these four domains, the Egyptian government has constructed a comprehensive system of digital authoritarianism. This system not only fortifies its grip on power but also serves as a blueprint for other authoritarian regimes seeking to exploit digital technologies to suppress dissent and maintain control.

Restrictive Legal Frameworks

Digital authoritarian regimes implement four main types of legal restrictions, and examples of all of them can be found in Egypt. First, laws that mandate internet service providers to establish systems for real-time monitoring and recording of traffic on their networks, enabling continuous surveillance of online activities. Second, legal frameworks that penalize online speech under the guise of protecting national identity and culture and preventing defamation, which often results in the suppression of dissenting opinions and freedom of expression. Third, VPN restrictions, following the lead of countries like China and Russia in banning or restricting the use of Virtual Private Networks (VPNs); while VPNs are technically legal in Egypt, many VPN servers and websites are blocked, hindering their practical use. Fourth, control over social media companies through various methods: although Western social media sites remain accessible in Egypt, the government has introduced laws that increase its control over the content shared on these platforms, threatening social media companies with bandwidth restrictions and outright bans if they fail to comply with government requests. Moreover, Egypt’s 2018 Cybercrime Law requires foreign companies handling personal data within the country to designate a representative located in Egypt (Fatafta, 2020).

Although the Egyptian Constitution guarantees internet freedom to some extent (for example, Articles 57, 68, 71, and 72), prohibiting the blocking of websites and the surveillance of the digital space, the authorities have continued to develop and implement legislation in the opposite direction on a large scale, blocking websites, surveilling the digital space, and harassing and prosecuting journalists and activists. Multiple laws have been passed and applied to achieve these goals. 

The “cybercrime law” in Egypt, signed by President Sisi in 2018, legalizes and reinforces the existing censorship and blocking of websites (Freedom House, 2021). The law treats all social media accounts with more than 5,000 followers as “media outlets,” making them eligible for censorship (RSF, 2018). It also mandates internet service providers to establish a system allowing real-time monitoring and recording of traffic on their networks (Privacy International, 2019). The cybercrime law criminalizes any form of speech deemed against “national security,” which is defined so broadly that it covers “all that is related to the independence, stability, and security of the homeland and its unity and territorial integrity,” along with anything to do with the president’s office and all defence and security departments. The law permits searches of citizens’ personal devices, and social media accounts can be blocked without judicial authorization, ostensibly for disseminating “false” information or inciting unlawful activities (Manshurat, 2018). Article 2 mandates that service providers retain and store records of their information systems, including all user-related data, for a period of 180 days; this information must be made available to any government agency upon request. Article 7 outlines the procedure for blocking websites that publish content deemed threatening to national security or detrimental to the country’s security or economy. Article 9 grants the Public Prosecutor the authority to issue travel bans and bring individuals accused of violating Article 7 before the Criminal Court. 

The cybercrime law has led to increased penalties and harassment of journalists and activists on social media platforms (Freedom House, 2022). Consequently, there is minimal political opposition in Egypt, as expressing dissenting views on social media can lead to criminal prosecution and harsh punishments. Furthermore, there are significant restrictions and harassment of civil liberties, including freedom of expression, assembly, and the press. Security forces also engage in widespread violations against marginalized groups, including homosexuals and minorities, under the guise of national security concerns.

The Anti-Terrorism Law, passed in 2015, encompasses broad forms of criminalization and grants extensive powers to address electronic activities, including the arrest of journalists and activists, digital surveillance, and the closure and blocking of websites (Manshurat, 2020). Article 49 of this law empowers the Public Prosecution or the relevant investigative authority to halt or block websites specified in Article 29, or any other aspect of online usage outlined in the legislation, as well as to confiscate devices and equipment used in the commission of such offenses. For instance, the Cairo Court of Urgent Matters issued an order to seize and freeze the assets, accounts, and properties of Mustafa Mukhtar Mohamed Saqr, the president of Business News, the company that owns the two Daily News Egypt websites.

Moreover, at the end of 2022, the Telecom Law was amended to expand restrictions on telecommunications equipment (Rezk & Hashish, 2023). Not only are the importation, manufacturing, and assembly of such equipment prohibited without a permit, but so are its possession, use, operation, installation, and marketing without permission from the relevant authorities, such as the NTRA (the National Telecommunications Regulatory Authority) and national security agencies. The penalty for violating these requirements has been increased to a fine ranging from 2 million to 5 million Egyptian pounds.

Internet Censorship


According to Access Now, a leading digital rights organization, at least 182 internet shutdowns occurred in 34 countries in 2021 (Access Now, 2022). The Mubarak regime famously switched off the country’s internet during the mass protests in Cairo in January 2011. In recent years, however, internet shutdowns have been rare in Egypt. In 2018, the Egyptian Armed Forces ordered a region-wide shutdown of internet and telecommunication services in the Sinai Peninsula and adjacent areas during the army’s military campaign against ISIS-affiliated insurgents in the region (SMEX, 2018). One reason for the decline of internet shutdowns is that they are costly: they disrupt the delivery of essential public and private services, a problem dubbed the Dictator’s Digital Dilemma (Hussain, Howard & Agarwal, 2011). Therefore, even when a shutdown is practised, it is limited to a certain location and typically lasts only a few days. According to Access Now (Hernández et al., 2023), no internet shutdown occurred in Egypt in 2021.

Common methods of censorship, which Deibert et al. (2010) describe as “first-generation” controls, are filtering and site blocking, which became widespread in the late 2000s. IP blocking/filtering and DNS tampering are the most common filtering methods: IP filtering blocks or filters objectionable content by restricting access to specific IP addresses, while DNS tampering returns false or empty answers for blocked domain names. Freedom House’s 2022 report classified Egypt as “not free” with regard to the use of digital technologies, scoring it 27 out of 100 and identifying three major problem areas: obstacles to access, limits on content, and violations of user rights (Freedom House, 2022).
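The difference between a tampered local answer and a trusted external answer is, in essence, how censorship-measurement projects detect first-generation DNS blocking. The following minimal sketch illustrates the idea; the blockpage address, hostnames, and IPs are hypothetical examples, not measurements from Egypt.

```python
# Illustrative sketch: detecting likely DNS tampering by comparing the
# answers of a local (potentially censored) resolver with those of a
# trusted external resolver. All IP addresses here are hypothetical.

BLOCKPAGE_IPS = {"10.0.0.1"}  # hypothetical sinkhole/blockpage address

def likely_dns_tampering(local_answers, trusted_answers):
    """Flag tampering if the local resolver returns a known blockpage IP,
    or shares no addresses at all with the trusted resolver's answer."""
    local, trusted = set(local_answers), set(trusted_answers)
    if local & BLOCKPAGE_IPS:
        return True
    return bool(trusted) and not (local & trusted)

# Example: the local resolver redirects a news site to the blockpage IP.
print(likely_dns_tampering(["10.0.0.1"], ["93.184.216.34"]))       # True
print(likely_dns_tampering(["93.184.216.34"], ["93.184.216.34"]))  # False
```

In practice, CDNs legitimately return different addresses from different vantage points, so real measurement tools combine this signal with others (TLS certificate checks, known blockpage fingerprints) before concluding that blocking has occurred.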

Since the imposition of a “state of emergency” in Egypt in 2017 (Atlantic Council, 2019), which directly granted the authorities the power to impose censorship and monitor all forms of online communication, Egypt has blocked over 500 websites (AFTE Egypt, 2020). These include independent news websites that publish articles criticising the Egyptian government, such as Mada Masr, Al-Manassa, and Daily News Egypt, in addition to international news websites such as Al-Jazeera, Al-Arabiya, and Huffington Post Arabic. The blocking also covered well-known Egyptian blogs that had warned, since Sisi took power, that he was rebuilding an authoritarian regime, including Fahmi Huwaidi’s blog (and his column in Shorouk News), Jawdell’s blog, Manal’s blog, Alaa’s blog, Bahia’s blog, and Ahmed Gamal Ziada’s personal blog. Manal and Alaa had previously won awards from Reporters Without Borders (Welle, 2005). The blocking extends to websites that provide content related to human rights and civil society, such as the websites of Reporters Without Borders, the Arabic Network for Human Rights Information (ANHRI), the Egyptian Commission for Rights and Freedoms, and the Journalists Against Torture Observatory, as well as the website of Human Rights Watch, blocked one day after the organisation released a report documenting the systematic use of torture in prisons in September 2017. Nor was the blocking limited to news sites: it extended to 261 VPN and proxy sites, including “Tunnelbear,” “CyberGhost,” and “HotspotShield,” and to the messaging application Signal.

Censorship sometimes occurs via prosecution, punishing authors or contributors directly. The Egyptian authorities have severely undermined media freedom and the right to access information and have punished the publication of opinions on news sites and in social media posts. For example, in February 2023, the Public Prosecution referred three journalists from Mada Masr to trial (Welle, 2023) in a case related to publishing a report alleging corruption in the pro-Sisi “Nation’s Future Party,” and in June, the authorities blocked two independent news websites, “Egypt 360” and “The Fourth Estate” (Access Now, 2023). In September 2023, security forces arrested two individuals from their homes in the Menoufia and Mansoura governorates after they published tweets on the “X” platform supporting Eltantawy and democratic change. In October 2023, the Supreme Council for Media Regulation referred workers at the independent media website Mada Masr to the prosecution (“x.com,” n.d.), on charges of “practising media activities without a license” and “spreading false news without verifying its sources.”

Authoritarian regimes have tended to complement outright blocking with more subtle and insidious forms of censorship that rely on surveillance techniques and quasi-democratic legal mechanisms (Deibert & Rohozinski, 2010). This has included deep packet inspection (DPI) surveillance technology acquired from Western and Chinese companies, which have become essential conduits for the diffusion of authoritarian practices. Companies such as Sandvine Corporation, a US-Canadian firm, have provided such technology to over a dozen countries, including Egypt. DPI is “a type of data processing that looks in detail at the contents of the data being sent and re-routes it accordingly” (Geere, 2012). DPI inspects the data being sent over a network and may take various actions, such as logging the content and alerting, or blocking or rerouting the traffic, and thus allows comprehensive network analysis. While it can be used for innocuous purposes, such as scanning content for viruses and ensuring the correct delivery of content, it can also be used for digital eavesdropping, internet censorship, and even stealing sensitive information (Bendrath & Mueller, 2011).
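The defining feature of DPI, as the definition above indicates, is that decisions are based on packet contents rather than addressing headers alone. The toy inspector below illustrates the principle on an unencrypted HTTP request; it is not Sandvine’s actual product logic, and the blocklisted hostname is invented.

```python
# Toy DPI illustration: inspect the *payload* of captured traffic and
# decide an action based on its contents. The hostname is hypothetical.

BLOCKED_HOSTS = {"blocked-news.example"}

def inspect_packet(payload: bytes) -> str:
    """Return 'block', 'log', or 'pass' for a captured TCP payload."""
    text = payload.decode("ascii", errors="ignore")
    for line in text.split("\r\n"):
        if line.lower().startswith("host:"):
            host = line.split(":", 1)[1].strip().lower()
            # Blocked site -> drop/reset; any other site -> record the visit.
            return "block" if host in BLOCKED_HOSTS else "log"
    return "pass"  # no readable Host header, e.g. encrypted TLS traffic

request = b"GET /news HTTP/1.1\r\nHost: blocked-news.example\r\n\r\n"
print(inspect_packet(request))  # block
```

The last return path hints at why the spread of encryption matters: once the payload is TLS-encrypted, a middlebox can no longer read a Host header and must fall back on coarser signals such as the unencrypted SNI field or IP addresses.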

Urban Surveillance

In addition to digital monitoring, the government has significantly expanded its surveillance capabilities within urban areas. Advanced surveillance systems, including extensive CCTV networks equipped with facial recognition technology, have been deployed. These systems are integrated with AI-powered analytics capable of tracking and identifying individuals, monitoring public gatherings, and analysing behavioural patterns. This pervasive surveillance infrastructure not only deters public dissent but also enables the rapid identification and apprehension of activists and protesters.

Egypt has employed extensive surveillance technologies such as Smart City/Safe City platforms, facial recognition systems, and smart policing, as highlighted in the AI Global Surveillance (AIGS) Index. These technologies have been instrumental in suppressing democratic movements (Wheeler, 2017). During the 2010s, Egypt witnessed increased adoption of internet technology alongside a concurrent decline in democratic practices. Data from the International Telecommunication Union (ITU) indicate a dramatic rise in internet usage in Egypt since 2019, which has led the Egyptian government to invest more in urban surveillance.

The aforementioned DPI technology acquired from the American company Sandvine/Procera Networks enabled the Egyptian government to monitor citizens’ internet activities, hack accounts, and reroute internet traffic. This technology allows Telecom Egypt to spy on users and block human rights and political content (Marczak et al., 2018). Additionally, Egypt’s General Intelligence Service has conducted sophisticated cyber-spying operations on opposition and civil society activists by installing software on their phones, granting access to files, emails, GPS coordinates, and contact lists (Bergman, 2019).

Safe or smart cities are another policy through which Egypt is expanding its urban surveillance capabilities. Becoming “smart” typically involves harnessing troves of data to optimize city functions—from more efficient use of utilities and other services to reducing traffic congestion and pollution—all with a view to empowering public authorities and residents (Muggah, 2021). Such cities deploy high-speed communication networks, sensors, and mobile apps to enhance mobility, connectivity, energy efficiency, service delivery, and overall resident welfare (Hong, 2022). According to a Huawei report, “Safe cities are an essential pillar supporting the future development of smart cities” (Hillman & McCalpin, 2019). With advances in CCTV and AI technology, urban surveillance capabilities have grown exponentially over the past ten years. Dubbed “safe” or “smart” cities, these urban surveillance projects are “mainly concerned with automating the policing of society using video cameras and other digital technologies to monitor and diagnose suspicious behaviour” (Kynge et al., 2021).

Egypt’s most significant smart city project under the Sisi government is the New Administrative Capital (NAC) east of Cairo (Al-Hathloul, 2022). The NAC is designed with a full suite of smart/safe city solutions, including 6,000 CCTV cameras and a surveillance system by American company Honeywell, which monitors crowds, traffic congestion, theft, and suspicious activities and triggers automated alarms during emergencies (Mourad & Lewis, 2021). Honeywell also has contracts for Saudi Arabia’s NEOM megaproject. Huawei’s presence in Egypt has also been growing. In 2018, Huawei signed a memorandum with Telecom Egypt to establish a $5 million data centre for a cloud computing network, aiming to develop one of the five largest cloud networks globally and the first in MENA. Egypt and Huawei are also negotiating to bring Huawei’s 5G infrastructure to the country (Blaubach, 2021). The surveillance infrastructure includes Schneider Electric’s EcoStruxure platform, which connects various systems for optimization and sustainability (Egypt Today, 2022). 

The development of smart city infrastructures has sparked controversy, with critics arguing that these technologies enable the pervasive collection, retention, and misuse of personal data by law enforcement and private companies. The NAC, which is being built by the China State Construction Engineering Corporation (CSCEC) (Al-Hathloul, 2022), has been driven by the authoritarian Sisi government’s attempt to insulate itself from a revolutionary scenario like the one that befell the Mubarak regime in 2011. By moving government offices 50 km away from central Cairo and Tahrir Square, the regime aims to ensure its structures are safeguarded even during unrest, and the NAC’s surveillance capabilities will further help protect the regime (see Middle East Monitor, 2021; Bergman & Walsh, 2021; Menshawy, 2021).

Strategic Digital Information Operations (SDIOs)

Banners supporting Egyptian President Abdel-Fattah El-Sisi’s bid for a second term during the presidential elections, displayed along the crowded Al Moez Street in the Gamalia district of Cairo, Egypt, on March 25, 2018. Photo: Halit Sadik.

The Egyptian government employs a sophisticated network of SDIOs, which refer to “efforts by state and non-state actors to manipulate public opinion as well as individual and collective emotions by using digital technologies to change how people relate and respond to events in the world” (Yilmaz et al., 2023). Thus, the Egyptian government does not rely only on sporadic internet shutdowns but carefully manipulates and alters the information environment to serve its motives.

Egypt has begun to move beyond strategies of ‘negative control’ of the internet, in which regimes attempt to block, censor, and suppress the flow of communication, toward strategies of proactive co-optation, in which social media serves regime objectives. The opposite of internet freedom, therefore, is not necessarily internet censorship but a deceptive blend of control, co-option, and manipulation. Scholars call this phenomenon ‘flooding,’ as governments try to ‘flood’ the informational space with false, distracting, or otherwise worthless pieces of information (Roberts, 2018; Mir et al., 2022). Seeding public debate with such disinformation makes it hard for the government’s opponents to persuade potential supporters and mobilize.

The Egyptian government employs a robust propaganda machine to shape public perception and maintain control over the narrative. This involves the strategic use of state-controlled media, social media platforms, and online influencers to disseminate pro-regime content and discredit the opposition. The regime propagates conspiracy theories that portray political dissenters as foreign agents or terrorists, thereby justifying its repressive measures. As Akbarzadeh et al. (2025) demonstrate, “President Abdul Fattah al-Sisi frequently talks about conspiracies against the Arab World and Egypt in particular, thanking Egyptians who stood against these conspiracies and prevented the country from falling in the direction of Iraq, Syria, and Libya, all that were intervened by the US and other Western allies.” In the same way, “Sisi used the consequences of the Western role in Iraq, Syria, and Libya as a method to promote his rule in Egypt and scare Egyptians from seeking change in their country, which would lead them to get trapped in conspiracies undertaken in other Middle Eastern countries” (Akbarzadeh et al., 2025).

Egyptian officials commonly instil fear among citizens to ensure their loyalty to the current government, often by amplifying concerns about potential conspiracies against the nation; this rhetoric tends to escalate as elections approach (Akbarzadeh et al., 2025). State-run TV channels, newspapers, and online portals play a crucial role in this information warfare, ensuring that the regime’s message reaches a broad audience. The Sisi regime, for example, deploys troll armies in political astroturfing operations: in 2020, Twitter banned over 9,000 accounts that were spreading misleading information, and another report found that the Sisi government used automated bot accounts to promote its favoured hashtags on Twitter (DFRLab, 2023).

The regime typically pursues these operations through complementary defensive and offensive approaches, blending them into a single narrative that reinforces the regime’s image and marginalizes any alternatives.

Defensively, the propaganda seeks to portray the regime as the legitimate national authority, emphasising its devotion to the nation’s interests and well-being in such a way that no legitimate alternative is imaginable. In these narratives, government leaders are portrayed as heroic figures with exceptional qualities, and the system is presented as flawless and well-suited to the country’s needs. As in many of the examples Igor Golomstock documented in his book Totalitarian Art (1990), Egyptian propaganda presents the head of state as the father of the nation, and any attempt to criticise him or his authority is cast as a betrayal of Egypt. Egyptian TV channels frequently host Arab leaders praising Sisi and portraying him as the saviour of Egypt and the Arab nation.

On the offensive front, the propaganda machine works to discredit any alternative to the current regime. Opposition figures and movements are subjected to character assassination and labelled as traitors, criminals, or foreign agents. Conspiracy theories are propagated linking opposition figures to nefarious plots or foreign interference, thereby undermining the credibility of opposing narratives. Additionally, the propaganda machine manipulates sentiments of national unity to marginalise dissent, presenting the regime as a unifying force and framing the opposition as a divisive threat to the country’s unity. In conjunction with the magnification and glorification of the president’s image, extensive work has been done to demonise the opposition as a whole, lumping it under unsightly labels such as “traitors” cooperating with foreign enemies, “terrorism,” “riot,” and “suspicious calls,” and condemning all attempts at demonstration or criticism of the government.

One significant rationale for these operations lies in the regime’s inherent lack of genuine legitimacy, coupled with a substantial disconnect between state and society. Consequently, the fabrication of imaginary adversaries becomes a tool for fostering national unity and identity under the regime’s rule. A parallel goal of this strategy is the cultivation of a cult of leadership: totalitarian regimes craft an image of leaders as defenders against external enemies, fostering a cult of personality that solidifies their control over the narrative and the populace. This narrative, in turn, rallies support for the militarization of both state and society. Moreover, the identification of enemies becomes a rationale for increased militarization and defence spending, as such regimes leverage perceived external threats to justify allocating resources to the military, enhancing capabilities, and maintaining control over the security apparatus. Ultimately, the perpetual portrayal of external threats and internal enemies sustains a climate of fear among citizens, discouraging challenges to the regime.

In authoritarian regimes, conspiracy theories play a crucial role in consolidating power by channelling public discontent toward perceived external or internal threats. These narratives function as propaganda tools, allowing governments to justify repression, delegitimize critics, and deflect attention from governance failures. Unlike in democratic contexts, where conspiracy theories are often propagated by fringe actors, authoritarian regimes institutionalize them, presenting them as official truths that shape political realities. A key tactic involves accusing dissidents of affiliations with groups like the Muslim Brotherhood to suppress freedom of speech, protest, and independent media. By framing opposition figures as existential threats to national unity, regimes cultivate public trust and reinforce their own legitimacy while silencing alternative voices (Akbarzadeh et al., 2025).

Collectively, the sophisticated implementation of SDIOs strengthens popular support for the current regime while systematically diminishing the credibility of opposition voices, fostering an environment of public trust and unity under the existing leadership.

Diffusion of Authoritarian Practices


Diffusion mechanisms are systematic sets of statements that provide a plausible explanation of how policy decisions in one country are influenced by prior policy choices made in other countries (Braun & Gilardi, 2006: 299). The literature on this topic often highlights areas of convergence and contact points between early and later adopters (see Kerr, 2018). Diffusion is any process whereby earlier adoption of a practice within a population increases the likelihood of adoption among non-adopters (Strang, 1991: 325); it occurs when policy decisions in one country are systematically influenced by previous policy choices in other countries (Dobbin et al., 2007: 787; Gilardi, 2012). Traditionally, research on diffusion focused on the spread of popular uprisings against autocratic leaders (Koesel & Bunce, 2013; Beissinger, 2007). More recently, however, scholars have shifted their focus to the diffusion of authoritarian practices (Ambrosio, 2010; Bank, 2017). The diffusion process occurs through three main mechanisms: learning, emulation, and cooperative interdependence (Bashirov et al., 2025).

Learning

The process of learning can be driven internally, where actors learn from their own experiences, evaluating and adopting innovations based on the success of prior applications. It can also be externally driven, with an external actor facilitating the learning process. The role of the external actor can range from small, such as selling or installing technological tools, to extensive, involving large-scale activities like seminars and training programs to promote a policy or practice. Using a practice framework, we focus on ‘configurations of actors’ involved in enabling authoritarianism (Michaelsen, 2018). Often, these actors are private companies rather than states. 

Contrary to the perceived active role of Chinese companies, it was Western tech companies that provided most of the high-tech surveillance and censorship capabilities to authoritarian regimes in the Muslim world. Notable examples include the US-Canadian company Sandvine, the Israeli NSO Group, Germany’s FinFisher, and Finland’s Nokia Networks. Internet surveillance has been facilitated through cooperation between adopter countries willing to purchase the technology and companies like Sandvine willing to sell it. Sandvine’s willingness is evidenced by its chief technology officer, who stated: “We don’t want to play world police. We believe that each sovereign country should be allowed to set their own policy on what is allowed and what is not allowed in that country” (Gallagher, 2022).

Regarding external learning, China, along with Chinese and Western private companies, has been leading the promotion of internet censorship practices. China has become a major advocate and a learning source for middle powers in internet surveillance, data fusion, and AI. The Shanghai Cooperation Organization (SCO) has become a crucial platform for these efforts. For instance, at the 2021 SCO summit, Chinese officials led a panel called the Thousand Cities Strategic Algorithms, training an international audience, including many representatives from developing countries, on creating a “national data brain” that integrates various forms of financial and personal data and employs artificial intelligence for analysis. According to the SCO website, 50 countries are involved in discussions with the Thousand Cities Strategic Algorithms initiative (Ryan-Mosley, 2022). China has also been proactive in offering media and government training programs to representatives from countries affiliated with the Belt and Road Initiative (BRI). A notable example includes the Chinese Ministry of Public Security directing Meiya Pico, a Chinese cybersecurity company, to train government representatives from Turkey, Pakistan, Egypt, and other nations on digital forensics (see Weber, 2019: 9-11).

Russia is another leading source of diffusion of digital authoritarianism in the Middle East. Russia’s brazen attempts at disinformation and propaganda lend support to the emergence of digital manipulation as an acceptable practice across authoritarian countries. By demonstrating the effectiveness of disinformation campaigns and propaganda – such as Russian interference in US presidential elections in 2016 – the country has shown other regimes that similar tactics can be used to control their own populations and advance their interests (Day, 2022). 

China and Russia are not the only sources of the diffusion of digital authoritarian practices in the Middle East; Western countries, in fact, have played significant roles as well. Despite Huawei’s involvement in projects like the $5 million data centre with Telecom Egypt and discussions about 5G infrastructure, Egypt has shown a preference for Western technology in its major smart city projects, such as the New Administrative Capital (NAC). The adoption of urban surveillance capabilities in Egypt is thus a result of both internal and external learning mechanisms. The Sisi regime’s strategies, especially in the NAC, reflect an attempt to insulate the government from potential unrest.

The US-Canadian company Sandvine/Procera has provided DPI surveillance equipment (hardware and software) to national networks operating in Egypt (Telecom Egypt). The system sits on the path between an internet site and the target user and allows the government to tamper with data sent over unencrypted connections (HTTP rather than HTTPS). Moreover, recent revelations show that the company has played a significant role in facilitating the spread of ideas between countries. In an internal newsletter sent to employees, Sandvine Chief Technical Officer Alexander Haväng wrote that Sandvine’s equipment could “show who’s talking to who, for how long, and we can try to discover online anonymous identities who’ve uploaded incriminating content online.” Through such information campaigns, Sandvine contributed to learning by governments. In Egypt, the government has been using Sandvine’s devices “to block dozens of human rights, political, and news websites, including Human Rights Watch, Reporters Without Borders, Al Jazeera, Mada Masr, and HuffPost Arabic” (Marczak et al., 2018: 8).
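The HTTP/HTTPS distinction in this passage can be made concrete with a hypothetical sketch: an on-path middlebox can silently replace an unencrypted HTTP response in transit, but can only pass TLS-encrypted bytes through, since any modification would break the connection. The injected redirect target below is an invented example, not a real domain observed in Egypt.

```python
# Hypothetical on-path middlebox: plaintext HTTP responses can be rewritten
# in transit; TLS-protected responses cannot be tampered with undetected.

INJECTED = (b"HTTP/1.1 307 Temporary Redirect\r\n"
            b"Location: http://attacker.example\r\n\r\n")

def on_path_middlebox(response: bytes, encrypted: bool) -> bytes:
    """Return the bytes that actually reach the user."""
    if encrypted:
        return response  # TLS integrity checks make silent tampering infeasible
    return INJECTED      # plaintext HTTP can be replaced without detection

page = b"HTTP/1.1 200 OK\r\n\r\n<html>news</html>"
print(on_path_middlebox(page, encrypted=False) == INJECTED)  # True
```

This is the mechanism behind the network-injection attacks discussed later in this report: a user browsing over plain HTTP can be redirected to a spyware-delivery server without any action on their part.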

Emulation

Emulation can be defined as “the process whereby policies diffuse because of their normative and socially constructed properties instead of their objective characteristics” (Gilardi 2012: 467). Research has shown that in complex and uncertain environments, policymakers respond by emulating the structural models of recognized leaders in the domain (Barnett & Finnemore, 2005). This behaviour is primarily driven by the pursuit of legitimacy and harmonization. International organizations, both governmental and non-governmental, play a crucial role in spreading commonly accepted standards of behaviour and organizational structures among countries. 

Emulation has been significant in the diffusion of legal norms regarding internet restrictions and, to a lesser extent, in adopting Chinese urban surveillance infrastructures. Chinese corporations have established training hubs and research initiatives to disseminate expertise in artificial intelligence, internet surveillance, and digital space management (Kurlantzick, 2022). For instance, Huawei set up an OpenLab in Egypt in 2017, focusing on smart city, public safety, and smart government solutions. China has been a major promoter of the ‘safe city’ concept, which focuses on surveillance-driven policing of urban environments. This approach has been refined in many Chinese cities (Triolo, 2020). Companies such as Huawei, ZTE Corporation, Hangzhou Hikvision Digital Technology, Zhejiang Dahua Technology, Alibaba, and Tiandy are leading the export of this model (Yan, 2019).

Moreover, homophily, in the form of cultural and political alignment, as well as China’s emergence as an authoritarian role model, contributed to the emulation process. Homophily among actors played an important role, as actors prefer to emulate models from reference groups with whom they share similar cultural or social attributes (Elkins & Simmons, 2005). Political alignment and proximity among nations foster communication and the exchange of information (Rogers, 2010). This dynamic is observed between China and Russia and political regimes in the Muslim world, including Egypt, which exhibit varying degrees of authoritarian governance. Loan conditionalities and trade negotiations within the context of China’s Belt and Road Initiative (BRI) have also played a role in enabling the spread of censorship and surveillance technologies from China to the Muslim world.

The Egyptian government has acquired extensive spying and phishing capabilities, sourced mostly from Western companies. An obscure wing of the General Intelligence Directorate called the Technical Research Department (TRD) purchased equipment from Finland-based Nokia Siemens Networks (now Nokia Networks) that permits dial-up internet connection, enabling users to access the internet even if the primary national infrastructure is offline. Furthermore, Nokia Siemens Networks provided the Egyptian government with an interception management system and a surveillance hub for fixed and mobile networks, granting the government mass surveillance capabilities to intercept phone communications (Privacy International, 2019). Another company involved in Egypt was the Italian surveillance technology company Hacking Team. In 2015, it was contracted by both the TRD and the Mansour Group (a conglomerate belonging to the second-richest family in Egypt) to provide malware granting the attacker complete control of a target computer (Privacy International, 2019).

In a brazen example of emulation of other authoritarian states’ practices, the Egyptian government launched a widespread phishing campaign, dubbed Nile Phish, in 2016 against the country’s civil society organizations implicated in the Case 173 crackdown (Scott-Railton et al., 2017). The campaign involved sending predatory emails and text messages to members of civil society in order to hack into their devices and accounts. An Amnesty International report (2020) revealed that the Egyptian government used spying technology called FinSpy, supplied by the German company FinFisher GmbH. FinSpy is a computer spyware suite sold exclusively to governments to monitor and intercept internet traffic, as well as to initiate phishing attacks against targeted users. The FinSpy Trojan has been used in Egypt to spy on opposition movements and enable the surveillance of political activists and journalists (ECCHR, 2023). In addition, denial-of-service (DoS) and packet injection practices are common in Egypt. For example, between May and September 2023, former Egyptian MP Ahmed Eltantawy was targeted with Cytrox’s Predator spyware via links sent over SMS and WhatsApp, after he had announced he would run in the 2024 presidential elections. Citizen Lab attributed the network injection attack to the Egyptian government and Sandvine’s PacketLogic product (Marczak et al., 2018).

Cooperative Interdependence

Cooperative interdependence in the context of digital technologies refers to how internet censorship and surveillance are enabled through collaboration among adopting states and private companies like Sandvine and NSO Group. Both Sandvine and NSO Group have faced significant controversy in their home countries, the US and Israel respectively, over selling surveillance products to authoritarian regimes in the Middle East and beyond, Egypt in particular, as explained in this report. NSO Group has been banned by the Israeli government from selling its products to major clients in the Middle East, including Saudi Arabia and the UAE (Staff, 2021). Similarly, Sandvine ceased operations in Russia following US sanctions after Russia’s invasion of Ukraine in 2022, and was forced to stop selling equipment to Belarus after reports revealed its technology was used by the Lukashenko regime to suppress protests (Gallagher, 2022).

The broad process of digital authoritarian diffusion has created cooperative interdependence between the involved parties. Through cooperation with global actors, both corporate and state-level, Egyptian governments have imported sophisticated technologies enabling comprehensive internet and urban surveillance. Cooperative interdependence occurs when the policy choices of some governments create externalities that others must consider, leading to mutual benefits from adopting compatible policies (Braun & Gilardi, 2006). This dynamic incentivizes decision-makers to adopt policies chosen by others, enhancing efficiency and yielding mutual benefits. Here, China leverages its Digital Silk Road (DSR) under the BRI to promote the adoption of its technological infrastructure and accompanying surveillance and censorship policies (Hillman, 2021). 

For instance, at the 2017 World Internet Conference in China, representatives from Egypt, Turkey, Saudi Arabia, and the UAE signed a “Proposal for International Cooperation on the ‘One Belt, One Road’ Digital Economy” to construct the DSR, enhancing digital connectivity and e-commerce cooperation (Laskai, 2019). Core components of the DSR include smart cities, internet infrastructure, and mobile networks. Rather than forcing these countries to adopt internet censorship practices, China alters the incentive structures of BRI-connected states. Financial incentives, coupled with technology transfer, promote China’s practical approach to managing cyberspace. The DSR’s digital projects—such as 5G networks, smart cities, fibre optic cables, data centres, satellites, and connecting devices—have commercial value and strategic benefits, helping China achieve its geoeconomic and geopolitical objectives by promoting digital authoritarian practices and its internet governance model (Malena, 2021; Tang, 2020). 

Conclusion

Photo: Hannu Viitanen.

This research has demonstrated the mechanisms through which digital authoritarian practices diffuse in Egypt. We found that Egypt has enacted multiple policies, including restrictive legal frameworks, internet censorship, urban surveillance, and strategic digital information operations (SDIOs), to reclaim the digital space from opposition and civil society, thereby entrenching digital authoritarianism in the country. The models adopted by the Egyptian regime closely emulate China and Russia’s paradigms of internet sovereignty and information control. China’s extensive political and economic linkages with Egypt, its strategic role in regional economies, and its leadership in forums like the Shanghai Cooperation Organization (SCO) have facilitated this trend. Through initiatives such as the Belt and Road Initiative (BRI), China has exported its digital governance model while positioning itself as a global leader in information technology (Ryan-Mosley, 2022; Weber, 2019).

The diffusion of surveillance and censorship technologies also reflects a complex learning process involving both state and corporate actors. While China has played a critical role in promoting internet censorship practices, private Western companies have equally enabled Egypt’s digital authoritarian turn. Companies such as Sandvine, NSO Group, FinFisher, and Nokia Networks have supplied surveillance infrastructure independently of state policy, a departure from conventional diffusion literature that associates such practices with national strategic interests (Gallagher, 2022; Marczak et al., 2018; Privacy International, 2019). For instance, Sandvine’s DPI technology has been used in Egypt to block dozens of news and human rights websites, while its executives openly dismiss responsibility by deferring to national sovereignty (Gallagher, 2022). This corporate-led diffusion challenges the notion that digital authoritarianism is solely state-driven and reveals an under-regulated global market in repressive technologies.

Our findings have three broader implications. First, while Chinese influence is significant, the role of Western technology firms in enabling authoritarian diffusion should not be underestimated. Their operations in Egypt have not been directly aligned with their home states’ policies, contradicting earlier findings that firms facilitating authoritarian practices often act under state guidance (Arslan, 2022). Second, these private firms are not only exporters of tools but are actively involved in implementing government-sanctioned strategies, including malware distribution and interception systems (Appuhami et al., 2011; Teets & Hurst, 2014). Third, the study identifies the mechanisms of diffusion—learning, emulation, and cooperative interdependence—as key to understanding how regimes adapt digital authoritarian tactics to shifting political and technological contexts (Braun & Gilardi, 2006; Dobbin et al., 2007; Gilardi, 2012; Strang, 1991; Kerr, 2018).

Developing states may increasingly adopt practices such as national firewalls, smart city surveillance, and social credit systems modelled on early adopters like China and Russia. As they become embedded in transnational authoritarian networks—whether through SCO summits or Digital Silk Road initiatives—these regimes are incentivized to replicate practices that strengthen regime durability and evade democratic scrutiny (Hillman, 2021; Malena, 2021; Tang, 2020; Laskai, 2019).

Given these trends, addressing the entrenchment and diffusion of digital authoritarianism requires a coordinated, multi-level response. There is an urgent need to institutionalize international cyber norms and regulations that clearly define and prohibit practices such as mass surveillance, politically motivated internet shutdowns, and spyware exports. Multilateral institutions, including the United Nations and the European Union, must lead the effort to develop enforceable standards, promote transparency, and strengthen export control regimes. This would include holding corporations accountable through mandatory human rights due diligence, transparency disclosures, and legal sanctions when they contribute to repression.

Defending digital rights also requires robust national privacy protections and support for civil society organizations operating under authoritarian conditions. These groups need financial resources, digital tools, and international solidarity to resist surveillance, educate the public, and pursue legal redress where possible. Supporting democratic actors in repressive environments is essential for countering the normalization of authoritarian digital governance.

Private companies must no longer operate in a legal and ethical vacuum. Regulatory mechanisms should ensure that firms exporting surveillance technologies are held accountable for complicity in human rights violations. Public pressure campaigns and state-level policy interventions—such as targeted sanctions or procurement restrictions—can help enforce these norms. At the same time, incentives should be offered for ethical innovation and secure technology development that supports open societies.

International cooperation among democracies must deepen through the sharing of intelligence, technologies, and best practices in countering cyber repression and disinformation. Cross-national partnerships can create rapid response frameworks to detect and disrupt strategic digital information operations. Capacity-building programs should support governments seeking to manage their digital ecosystems in ways that uphold civil liberties and protect against authoritarian creep.

Economic leverage should be strategically employed. Trade policies, investment frameworks, and development aid must be conditioned on adherence to digital rights standards. This includes shifting financial relationships away from authoritarian technology providers and toward partners committed to democratic norms. Financial institutions and donor agencies must integrate digital governance benchmarks into their programming.

Diplomacy should play a more assertive role in exposing and isolating regimes that abuse digital technologies. Bilateral engagements, international resolutions, and public diplomacy should be used to condemn repressive practices, promote digital transparency, and advocate for global standards of accountability. Countries like Egypt must be pressured to reform not only through external criticism but through coordinated global action that combines legal, economic, and diplomatic tools.

In conclusion, the diffusion of digital authoritarianism is a multi-dimensional and complex phenomenon driven by both state and corporate actors, operating through networks of learning, emulation, and cooperative interdependence. The Egyptian case exemplifies how these processes work in practice and the urgent need for a sustained, global response. Confronting this challenge will require a blend of regulation and resistance, innovation and accountability, diplomacy and solidarity. Only through such an approach can the digital realm be reclaimed as a space of freedom, rights, and democratic resilience.


 

Funding: This work was supported by the Gerda Henkel Foundation, AZ 01/TG/21, Emerging Digital Technologies and the Future of Democracy in the Muslim World.


Authors

Ihsan Yilmaz is Deputy Director (Research Development) of the Alfred Deakin Institute for Citizenship and Globalisation (ADI) at Deakin University, where he also serves as Chair in Islamic Studies and Research Professor of Political Science and International Relations. He previously held academic positions at the Universities of Oxford and London and has a strong track record of leading multi-site international research projects. His work at Deakin has been supported by major funding bodies, including the Australian Research Council (ARC), the Department of Veterans’ Affairs, the Victorian Government, and the Gerda Henkel Foundation.

(*) Ali Mamouri is a scholar and journalist specializing in political philosophy and theology. He is currently a Research Fellow at the Alfred Deakin Institute for Citizenship and Globalisation at Deakin University. Dr. Mamouri has held teaching positions at the University of Sydney, the University of Tehran, and Al-Mustansiriyah University, as well as other institutions in Iran and Iraq. He has also taught at the Qom and Najaf religious seminaries. From 2020 to 2022, he served as a Strategic Communications Advisor to the Iraqi Prime Minister, providing expertise on regional political dynamics. Dr. Mamouri also has an extensive career in journalism. From 2016 to 2023, he was the editor of Iraq Pulse at Al-Monitor, covering key political and religious developments in the Middle East. His work has been featured by the BBC, ABC, The Conversation, Al-Monitor, and Al-Iraqia State Media, among other leading media platforms. As a respected policy analyst, his notable works include “The Dueling Ayatollahs: Khamenei, Sistani, and the Fight for the Soul of Shiite Islam” (Al-Monitor) and “Shia Leadership After Sistani” (Washington Institute). Beyond academia and journalism, Dr. Mamouri provides consultation to public and private organizations on Middle Eastern affairs. He has published several works in Arabic and Farsi, including a book on the political philosophy of Muhammad Baqir Al-Sadr and research on political Salafism. Additionally, he has contributed to The Great Islamic Encyclopedia and other major Islamic encyclopedias.

(**) Shahram Akbarzadeh is Convenor of the Middle East Studies Forum (MESF) and Professor of International Politics at Deakin University (Australia). He held a prestigious ARC Future Fellowship (2013-2016) on the role of Islam in Iran’s foreign policy-making and recently completed a Qatar Foundation project on sectarianism in the Middle East. Professor Akbarzadeh has an extensive publication record and has contributed to the public debate on political processes in the Middle East, regional rivalry, and Islamic militancy. In 2022, he joined the Middle East Council on Global Affairs as a Non-resident Senior Fellow.

(***) Muhammad Omer is a PhD student in political science at Deakin University. His doctoral research examines the causes, ideological foundations, and discursive construction of multiple populisms in a single polity (Pakistan). His other research interests include transnational Islam, religious extremism, and vernacular security. He previously completed a bachelor’s degree in politics and history at the University of East Anglia, UK, and a master’s degree in political science at Vrije Universiteit Amsterdam.


 

References

Access Now. (2023). “Egypt: Rights Groups Condemn the Recent Blocking of Two News Websites” (in Arabic). https://www.accessnow.org/press-release/مصر-جماعات-حقوقية-تدين-الحجب-الأخير-لم/

Adler, Emanuel, and Vincent Pouliot. (2011). “International Practices.” International Theory 3(1): 1-36.

AFTE Egypt. (2020). “List of Blocked Websites in Egypt” (in Arabic). https://afteegypt.org/blocked-websites-list-ar

Ahmed, Zahid Shahab; Yilmaz, Ihsan; Akbarzadeh, Shahram & Bashirov, Galib. (2023). “Digital Authoritarianism and Activism for Digital Rights in Pakistan.” European Center for Populism Studies (ECPS). July 20, 2023. https://doi.org/10.55271/rp0042

Akbarzadeh, S.; Mamouri, A.; Bashirov, G., & Yilmaz, I. (2025). Social media, conspiracy theories, and authoritarianism: between bread and geopolitics in Egypt. Journal of Information Technology & Politics, 1–14. https://doi.org/10.1080/19331681.2025.2474000

Akbarzadeh, Shahram, Amin Naeni, Ihsan Yilmaz, and Galib Bashirov. (2024). “Cyber Surveillance and Digital Authoritarianism in Iran.” Global Policy, March 14, 2024.
https://www.globalpolicyjournal.com/blog/14/03/2024/cyber-surveillance-and-digital-authoritarianism-iran.

Al-Hathloul, Lina. (2022). “Dictators in Egypt and Saudi Arabia Love Smart Cities Projects — Here’s Why.” Access Now. https://www.accessnow.org/smart-cities-projects/

Ambrosio, Thomas, and Jakob Tolstrup. (2019). “How Do We Tell Authoritarian Diffusion from Illusion? Exploring Methodological Issues of Qualitative Research on Authoritarian Diffusion.” Quality & Quantity 53(6): 2741-2763.

Ambrosio, Thomas. (2010). “Constructing a Framework of Authoritarian Diffusion: Concepts, Dynamics, and Future Research.” International Studies Perspectives 11(4): 375-392.

Amnesty International. (2020). “German-made FinSpy spyware found in Egypt, and Mac and Linux versions revealed.” https://www.amnesty.org/en/latest/research/2020/09/german-made-finspy-spyware-found-in-egypt-and-mac-and-linux-versions-revealed/

Anderson, Janna, and Lee Rainie. (2020). “Many Tech Experts Say Digital Disruption Will Hurt Democracy.” Pew Research Center.   https://www.pewresearch.org/internet/2020/02/21/many-tech-experts-say-digital-disruption-will-hurt-democracy/

Appuhami, Ranjith; Perera, Sujatha and Perera, Hector. (2011). “Coercive policy diffusion in a developing country: The case of public-private partnerships in Sri Lanka.” Journal of Contemporary Asia 41(3): 431-451.

Arslan, Melike. (2022). “Legal Diffusion as Protectionism: The Case of the US Promotion of Antitrust laws.” Review of International Political Economy 1-24. https://doi.org/10.1080/09692290.2022.2158118

Atlantic Council. (2019). “The State of Emergency in Egypt: An Exception or Rule?” https://www.atlanticcouncil.org/blogs/menasource/the-state-of-emergency-in-egypt-an-exception-or-rule/

Bank, André. (2017). “The Study of Authoritarian Diffusion and Cooperation: Comparative Lessons on Interests Versus Ideology, Nowadays and in History.” Democratization 24(7): 1345-1357.

Barnett, Michael, and Finnemore, Martha. (2005). “The Power of Liberal International Organizations.” Power in Global Governance 161: 163-171.

Bashirov, G.; Akbarzadeh, S.; Yilmaz, I. and Ahmed, Z. (2025). “Diffusion of Digital Authoritarian Practices in China’s Neighbourhood: The Cases of Iran and Pakistan.” Democratization, DOI: 10.1080/13510347.2025.2504588

Beissinger, Mark. (2007). “Structure and Example in Modular Political Phenomena: The Diffusion of Bulldozer, Rose, Orange and Tulip Revolutions.” Perspectives on Politics 5(2): 259–76.

Bendrath, Ralf, and Mueller, Milton. (2011). “The End of the Net as We Know It? Deep Packet Inspection and Internet Governance.” New Media & Society 13(7): 1142-1160.

Bergman, Ronen, and Walsh, Declan. (2019). “Egypt Is Using Apps to Track and Target Its Citizens, Report Says.” The New York Times. https://www.nytimes.com/2019/10/03/world/middleeast/egypt-cyber-attack-phones.html

Blaubach, Thomas. (2021). “Chinese Technology in the Middle East: A Threat to Sovereignty or an Economic Opportunity?” MEI Policy Center.

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. New York: Oxford University Press.

Braun, Dietmar, and Gilardi, Fabrizio. (2006). “Taking ‘Galton’s Problem’ Seriously: Towards a Theory of Policy Diffusion.” Journal of theoretical politics 18(3): 298-322. 

Breuer, Anita. (2012). “The Role of Social Media in Mobilizing Political Protest: Evidence from the Tunisian Revolution.” German Development Institute Discussion Paper 10: 1860-0441.

Cattle, Amy E. (2015). “Digital Tahrir Square: An Analysis of Human Rights and the Internet Examined through the Lens of the Egyptian Arab Spring.” Duke J. Comp. & Int’l L. 26: 417.

Damnjanović, I. (2015). “Polity without Politics? Artificial Intelligence versus Democracy: Lessons from Neal Asher’s Polity Universe.” Bulletin of Science, Technology & Society 35(3-4): 76-83.

Day, Jones. (2022). “China Amends Anti-Monopoly Law: What You Need to Know.” Jones Day. https://www.jonesday.com/en/insights/2022/07/china-amends-antimonopoly-law

Deibert, Ronald, John Palfrey, Rafal Rohozinski, and Jonathan L. Zittrain. (2010). “Access Controlled: The Shaping of Power, Rights, and Rule in Cyberspace.” The MIT Press.

DFRLab. (2023). “Egyptian Twitter Network Amplifies Pro-Government Hashtags, Attacks Fact-checkers.” DFRLab. https://dfrlab.org/2023/03/23/egyptian-twitter-network-amplifies-pro-government-hashtags-attacks-fact-checkers/

Diamond, Larry, and Marc F. Plattner, eds. (2012). Liberation Technology: Social Media and the Struggle for Democracy. JHU Press.

Dobbin, Frank, Beth Simmons, and Geoffrey Garrett. (2007). “The Global Diffusion of Public Policies: Social Construction, Coercion, Competition, or Learning?” Annual Review of Sociology. 33: 449-472.

Dragu, Tiberiu, and Yonatan Lupu. (2021). “Digital Authoritarianism and the Future of Human Rights.” International Organization 75(4): 991-1017.

Durac, Vincent, and Francesco Cavatorta. (2022). Politics and Governance in the Middle East. Bloomsbury Publishing.

Egypt Today. (2022). “COP27: TMG, Schneider Partner to Provide Latest Smart Solutions, Sustainability Standards in Noor City.” Egypt Today. https://www.egypttoday.com/Article/6/120619/COP27-TMG-Schneider-partner-to-provide-latest-smart-solutions-sustainability

Elkins, Zachary, and Beth Simmons. (2005). “On waves, clusters, and diffusion: A conceptual framework.” The Annals of the American Academy of Political and Social Science 598(1): 33-51.

ECCHR (European Center for Constitutional and Human Rights). (2023). https://www.ecchr.eu/en/

Fatafta, Marwa. (2020). “Egypt’s New Data Protection Law: Data Protection or Data Control?” Access Now. https://www.accessnow.org/egypts-new-data-protection-law-data-protection-or-data-control/

Feldstein, Steven. (2019). The Global Expansion of AI Surveillance. Washington, DC: Carnegie Endowment for International Peace. https://carnegieendowment.org/2019/09/17/global-expansion-of-ai-surveillance-pub-79847

Feldstein, Steven. (2021). The Rise of Digital Repression: How Technology Is Reshaping Power, Politics, and Resistance. Oxford University Press.

Freedom House. (2021). “Freedom on the Net 2020 Report.” https://freedomhouse.org/sites/default/files/2020-10/10122020_FOTN2020_Complete_Report_FINAL.pdf accessed: 1/3/2021.

Freedom House (2022). “Egypt.” https://freedomhouse.org/country/egypt/freedom-net/2022.

Freedom House. (2022). “Freedom in the World.” https://freedomhouse.org/report/freedom-world

Freedom House. (2022). “Freedom on the Net.” https://freedomhouse.org/report/freedom-net

Gallagher, Ryan. (2022). “Sandvine Pulls Back from Russia as US, EU Tighten Control on Technology It Sells.” Bloomberg. https://www.bloomberg.com/news/articles/2022-06-03/sandvine-pulls-back-from-russia-as-us-eu-tighten-control-on-technology-it-sells

Gardels, Nathan, & Berggruen, Nicolas. (2019). Renovating Democracy: Governing in the Age of Globalization and Digital Capitalism. Berkeley: University of California Press.

Geere, Duncan. 2012. “How Deep Packet Inspection Works.” Wired. https://www.wired.co.uk/article/how-deep-packet-inspection-works

Gilardi, Fabrizio. (2010). “Who Learns from What in Policy Diffusion Processes?” American Journal of Political Science 54(3): 650-666.

Gilardi, Fabrizio. (2012). “Transnational Diffusion: Norms, Ideas, and Policies.” Handbook of International Relations. 2: 453-477.

Golomshtok, Igor. (1990). Totalitarian Art in the Soviet Union, the Third Reich, Fascist Italy, and the People’s Republic of China. https://ci.nii.ac.jp/ncid/BA21226005

Helbing, Dirk, et al. (2019). “Will Democracy Survive Big Data and Artificial Intelligence?” Scientific American. https://www.scientificamerican.com/article/will-democracy-survive-big-data-and-artificial-intelligence/

Hellmeier, Sebastian. (2016). “The dictator’s digital toolkit: Explaining Variation in Internet Filtering in Authoritarian Regimes.” Politics & Policy. 44(6): 1158-1191.

Hernández, Marianne Díaz, Felicia Anthonio, Sage Cheng, and Alexia Skok. (2023). “Internet Shutdowns in 2021: The Return of Digital Authoritarianism.” Access Now. https://www.accessnow.org/internet-shutdowns-2021/

Hillman, Jonathan E. (2021). The Digital Silk Road: China’s Quest to Wire the World and Win the Future. Profile Books. 

Hillman, Jonathan E, and Maesea McCalpin. (2019). “Watching Huawei’s safe cities”. Center for Strategic and International Studies (CSIS).

Hong, Caylee. (2022). “Safe Cities in Pakistan: Knowledge Infrastructures, Urban Planning, and the Security State.” Antipode 54(5): 1476-1496.

Human Rights Watch. (2017). “‘We Do Unreasonable Things Here’: Torture and National Security in al-Sisi’s Egypt.” https://www.hrw.org/sites/default/files/report_pdf/egypt0917_web.pdf

Giry, Julien, and Doğan Gürpınar. (2020). “Functions and Uses of Conspiracy Theories in Authoritarian Regimes.” In: Routledge Handbook of Conspiracy Theories.

Kerr, Jaclyn A. (2018). “Authoritarian Practices in the Digital Age| Information, Security, and Authoritarian Stability: Internet Policy Diffusion and Coordination in the Former Soviet Region.” International Journal of Communication 12: 21.

Khalil, Lydia. (2020). “Digital authoritarianism, China and COVID.” Lowy Institute. 

Koesel, Karrie J., and Valerie J. Bunce. (2013). “Diffusion-proofing: Russian and Chinese Responses to Waves of Popular Mobilizations Against Authoritarian Rulers.” Perspectives on Politics. 11(3): 753-768.

Kurlantzick, Joshua. (2020). “China’s Digital Silk Road Initiative: A Boon for Developing Countries or a Danger to Freedom?” The Diplomat. https://thediplomat.com/2020/12/chinas-digital-silk-road-initiative-a-boon-for-developing-countries-or-a-danger-to-freedom

Kynge, James, Valerie Hopkins, Helen Warrell, and Kathrin Hille. (2021). “Exporting Chinese Surveillance: The Security Risks of ‘Smart Cities’.” Financial Times. https://www.ft.com/content/76fdac7c-7076-47a4-bcb0-7e75af0aadab

Laskai, Lorand. (2019). “How China Is Supplying Surveillance Technology and Training Around the World.” Privacy International.

Lilkov, Dimitar. (2020). “Made in China: Tackling Digital Authoritarianism.” European View 19(1): 110-110.

Malena, Jorge. (2021). “The extension of the digital silk road to Latin America: Advantages and potential risks.” Brazilian Center for International Relations.

Marczak, Bill, Jakub Dalek, Sarah McKune, Adam Senft, John Scott-Railton, and Ron Deibert. (2018). “Bad Traffic: Sandvine’s PacketLogic Devices Used to Deploy Government Spyware in Turkey and Redirect Egyptian Users to Affiliate Ads?” The Citizen Lab. https://citizenlab.ca/2018/03/bad-traffic-sandvines-packetlogic-devices-deploy-government-spyware-turkey-syria/

Mare, Admire. (2020). “Internet Shutdowns in Africa| State-Ordered Internet Shutdowns and Digital Authoritarianism in Zimbabwe.” International Journal of Communication 14: 20.

Manshurat. (2018). “Anti-Cyber and Information Technology Crimes Law” (in Arabic). Manshurat. https://manshurat.org/node/31487

Menshawy, Mustafa. (2021). “Why Is Egypt Building a New Capital?” Al-Jazeera. https://www.aljazeera.com/opinions/2021/7/5/why-is-egypt-building-a-new-capital

Michaelsen, Marcus. (2018). “Transforming Threats to Power: The International Politics of Authoritarian Internet Control in Iran.” International Journal of Communication. 12: 3856-3876.

Mir, Asfandyar, Tamar Mitts and Paul Staniland. (2022). “Political Coalitions and Social Media: Evidence from Pakistan.” Perspectives on Politics, 1-20.

Mourad, Mahmoud and Aidan Lewis. (2021). “From creaking Cairo, Egypt Plans High-tech Leap with New Capital.” Reuters. https://www.reuters.com/world/middle-east/creaking-cairo-egypt-plans-high-tech-leap-with-new-capital-2021-09-02/.

Muggah, Robert. (2021). “Digital Privacy Comes at a Price.” Agenda. https://www.weforum.org/agenda/2021/09/how-to-protect-digital-privacy

Polyakova, A., & Meserole, C. (2019). “Exporting digital authoritarianism: The Russian and Chinese models.” Policy Brief, Democracy and Disorder Series, Brookings, 1-22.

Privacy International. (2019). “State of Privacy Egypt.” https://privacyinternational.org/state-privacy/1001/state-privacy-egypt.

Radavoi, C. N. (2019). “The Impact of Artificial Intelligence on Freedom, Rationality, Rule of Law and Democracy: Should We Not Be Debating It?” Texas Journal on Civil Liberties & Civil Rights 25: 107.

Rezk, Farida, and Mohamed Hashish. (2023). “In Brief: Telecoms Regulation in Egypt.” Lexology. https://www.lexology.com/library/detail.aspx?g=85c424f1-84bb-4288-8d48-df69c913cbc9

Roberts, Margaret. (2018). Censored: Distraction and Diversion Inside China’s Great Firewall. Princeton: Princeton University Press.

Routledge Handbook of Conspiracy Theories. (2020). Routledge eBooks. https://doi.org/10.4324/9780429452734

Rogers, Everett M. (2010). Diffusion of innovations. Simon and Schuster.

RSF. (2018). “Egypt’s New Cybercrime Law Legalizes Internet Censorship.” https://rsf.org/en/egypt-s-new-cybercrime-law-legalizes-internet-censorship.

Ruijgrok, Kris. (2017). “From the Web to the Streets: Internet and Protests Under Authoritarian Regimes.” Democratization. 24(3): 498-520.

Ryan-Mosley, Tate. (2022). “The World Is Moving Closer to a New Cold War Fought with Authoritarian Tech.” MIT Technology Review. https://www.technologyreview.com/2022/09/22/1059823/cold-war-authoritarian-tech-china-iran-sco/

Scott-Railton, John, Bill Marczak, Ramy Raoof, and Etienne Maynier. (2017). “Nile Phish: Large-Scale Phishing Campaign Targeting Egyptian Civil Society.” The Citizen Lab. https://citizenlab.ca/2017/02/nilephish-report

Social Media Exchange (SMEX). (2018). “In Egypt’s Sinai Peninsula, Network Shutdowns Leave Civilians Unreachable — and Unable to Call for Help.” Global Voices. February 14, 2018. https://globalvoices.org/2018/02/14/in-egypts-sinai-peninsula-network-shutdowns-leave-civilians-unreachable-and-unable-to-call-for-help/.

Statista. (2024). “Egypt: Number of Internet Users.” https://www.statista.com/statistics/462957/internet-users-egypt/

Stepan, Alfred, eds. (2018). Democratic transition in the Muslim world: a global perspective (Vol. 35). Columbia University Press.

Stone, P., et al. (2016). One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel. Stanford: Stanford University Press. http://ai100.stanford.edu/2016-report

Strang, David. (1991). “Adding Social Structure to Diffusion Models: An Event History Framework.” Sociological Methods & Research 19(3): 324-353.

Tang, Min. (2020). “Huawei Versus the United States? The Geopolitics of Exterritorial Internet Infrastructure.” International Journal of Communication14, 22.

Taylor, Monique. (2022). “China’s Digital Authoritarianism Goes Global.” In: China’s Digital Authoritarianism: A Governance Perspective, pp. 111-130. Cham: Springer International Publishing.

Teets, Jessica C, and William Hurst. (2014). “Introduction: The Politics and Patterns of Policy Diffusion in China.” In: Local Governance Innovation in China, pp. 1-24. Routledge.

Triolo, Paul. (2020). “The Digital Silk Road: Expanding China’s Digital Footprint.” Eurasia Grouphttps://www.eurasiagroup.net/files/upload/Digital-Silk-Road-Expanding-China-Digital-Footprint.pdf

Weber, Valentin. (2019). “The Worldwide Web of Chinese and Russian Information Controls.” Center for Technology and Global Affairs, University of Oxford.

Welle, Deutsche. (2005). “The Blog ‘Manal and Alaa’s Bit Bucket’ Wins the Reporters Without Borders Award” (in Arabic). dw.com. https://www.dw.com/ar/%D9%85%D8%AF%D9%88%D9%86%D8%A9-%D8%AF%D9%84%D9%88-%D9%85%D8%B9%D9%84%D9%88%D9%85%D8%A7%D8%AA-%D9%85%D9%86%D8%A7%D9%84-%D9%88%D8%B9%D9%84%D8%A7%D8%A1-%D8%AA%D9%81%D9%88%D8%B2-%D8%A8%D8%AC%D8%A7%D8%A6%D8%B2%D8%A9-%D9%85%D9%86%D8%B8%D9%85%D8%A9-%D9%85%D8%B1%D8%A7%D8%B3%D9%84%D9%88%D9%86-%D8%A8%D9%84%D8%A7-%D8%AD%D8%AF%D9%88%D8%AF/a-1774501

Welle, Deutsche. (2023). “Three ‘Mada Masr’ Journalists Referred to Trial” (in Arabic). dw.com. https://www.dw.com/ar/%D8%A5%D8%AD%D8%A7%D9%84%D8%A9-3-%D8%B5%D8%AD%D9%81%D9%8A%D8%A7%D8%AA-%D9%81%D9%8A-%D9%85%D8%AF%D9%89-%D9%85%D8%B5%D8%B1-%D9%84%D9%84%D9%85%D8%AD%D8%A7%D9%83%D9%85%D8%A9-%D8%A8%D8%B9%D8%AF-%D8%AA%D8%AD%D9%82%D9%8A%D9%82-%D8%B9%D9%86-%D9%85%D8%AE%D8%A7%D9%84%D9%81%D8%A7%D8%AA-%D9%85%D8%A7%D9%84%D9%8A%D8%A9-%D9%84%D8%A8%D8%B1%D9%84%D9%85%D8%A7%D9%86%D9%8A%D9%8A%D9%86/a-64851329

Wheeler, Deborah. (2017). Digital Resistance in the Middle East: New Media Activism in Everyday Life. Edinburgh: Edinburgh University Press. 

Worldometer. (2024). https://www.worldometers.info/world-population/egypt-population/

Human Rights Watch [@hrw]. (n.d.). Post. X (formerly Twitter). https://x.com/hrw/status/1719008193366700294

Xu, Xu. (2021). “To Repress or to Co‐opt? Authoritarian Control in the Age of Digital Surveillance.” American Journal of Political Science 65(2): 309-325.

Yan, Yau Tsz. (2019). “Smart Cities or Surveillance? Huawei in Central Asia.” The Diplomat. https://thediplomat.com/2019/08/smart-cities-or-surveillance-huawei-in-central-asia

Yenigun, Halil Ibrahim. (2021). “Turkey as a Model of Muslim Authoritarianism?” In: Routledge Handbook of Illiberalism, pp. 840-857. Routledge.

Yilmaz, Ihsan; Shahram Akbarzadeh, and Galib Bashirov. (2023). “Strategic Digital Information Operations (SDIOs).” Populism & Politics (P&P). European Center for Populism Studies (ECPS).

Yilmaz, I.; Akbarzadeh, S.; Abbasov, N. & Bashirov, G. (2024). “The Double-Edged Sword: Political Engagement on Social Media and Its Impact on Democracy Support in Authoritarian Regimes.” Political Research Quarterly, 0(0). https://doi.org/10.1177/10659129241305035

Yilmaz, I. and K. Shakil. (2025). Reception of Soft and Sharp Powers: Turkey’s Civilisationist Populist TV Dramas in Pakistan. Singapore: Palgrave Macmillan.

Yilmaz, I.; Morieson, N., & Shakil, K. (2025). “Authoritarian diffusion and sharp power through TV dramas: resonance of Turkey’s ‘Resurrection: Ertuğrul’ in Pakistan.” Contemporary Politics, 1–21. https://doi.org/10.1080/13569775.2024.2447138

Zhang, Wenxian; Ilan Alon, and Christoph Lattemann, eds. (2020). Huawei Goes Global: Volume I: Made in China for the World. Springer International Publishing.

Ziccardi, Giovanni. (2012). Resistance, Liberation Technology and Human Rights in the Digital Age, vol. 7. Springer Science & Business Media.

Dr. Ibrahim Al-Marashi—Associate Professor in the Department of History, California State University, San Marcos.

Dr. Ibrahim Al-Marashi: Authoritarianism Is the New Normal and the Prevailing Norm

In this timely and thought-provoking interview, Dr. Ibrahim Al-Marashi explores how authoritarianism has become “the new normal” in the Middle East amid a global retreat from democratic norms. Speaking to the ECPS, Dr. Al-Marashi analyzes the region’s complex landscape shaped by imperial legacies, resource politics, and shifting global alliances. He highlights how populist rhetoric, digital platforms, and transactional diplomacy—especially under Trump-era politics—are empowering authoritarian leaders and weakening democratic institutions. While civil society faces mounting repression, Dr. Al-Marashi suggests that digital activism and “artivism” may offer spaces of survival and resistance. This interview provides essential insight into how populism and authoritarianism intersect in the Middle East—and what that means for the future of governance in the region.

Interview by Selcuk Gultasli

In an era marked by the erosion of liberal democratic norms and the global resurgence of authoritarian tendencies, Dr. Ibrahim Al-Marashi—Associate Professor in the Department of History, California State University, San Marcos—offers a timely and incisive analysis of the Middle East’s evolving political landscape. In an in-depth interview with the European Center for Populism Studies (ECPS), Dr. Al-Marashi argues that “authoritarianism has become normalized—it’s now the prevailing norm,” particularly in a world increasingly shaped by populist and transactional leadership.

Drawing from historical legacies and contemporary global shifts, Dr. Al-Marashi underscores how imperial interference and resource wealth have long laid the groundwork for authoritarian populism in the region. “Hydrocarbons enable political elites to generate revenue without relying on taxation,” he explains, allowing regimes to distribute wealth in ways that bypass democratic accountability and reinforce autocratic control. He connects this dynamic to broader regional patterns, noting that even militant groups such as ISIS have employed populist strategies by attempting to dismantle colonial-era borders and mobilize transnational support.

Dr. Al-Marashi highlights the impact of shifting global power dynamics, particularly the rise of multipolarity and the influence of Trumpism, in undermining democratic aspirations. With the US retreating from its rhetorical commitment to democracy, populist-authoritarian leaders find renewed legitimacy. “If the US is adopting these behaviors,” he argues, “this is the new norm—this is the future.” This sets a precedent for regimes that increasingly embrace personalistic and sultanistic rule, with little concern for liberal democratic values.

Transactional diplomacy, particularly under Trump, has also reshaped regional alliances. Dr. Al-Marashi notes that such diplomacy empowers authoritarian actors like Netanyahu, while simultaneously emboldening sectarian militias and weakening traditional state structures. “It’s a double-edged sword—quite literally,” he remarks, especially when it comes to balancing regional power plays and proxy conflicts in places like Syria, Iraq, and Yemen.

While the picture appears bleak, Dr. Al-Marashi also points to the resilience of digital resistance. He suggests that civil society and democratizing efforts may survive—if not flourish—through digital activism and what he terms “artivism.” In a region where the state has often failed to provide basic services, digital spaces may serve as the last frontier for democratic imagination and mobilization.

This interview captures the complexity of a region grappling with entrenched authoritarianism amid a globally permissive environment—and offers critical insights into how populist movements and power politics intersect in the 21st century Middle East.

Here is the lightly edited transcript of the interview with Dr. Ibrahim Al-Marashi.

Imperial Legacies and Oil Wealth Laid the Foundation for Authoritarian Populism in the Middle East

Oil pump jack in the desert of Bahrain. Photo: Dreamstime.

Professor Ibrahim Al-Marashi, thank you very much for joining our interview series. Let me start right away with the first question: What historical and socio-political conditions in Iraq and the broader Middle East have laid the groundwork for the rise of populist authoritarianism in the region?

Dr. Ibrahim Al-Marashi: The obvious factors are imperial interference and hydrocarbons—oil and gas. The involvement of foreign powers, whether Britain or the US, consistently provides a convenient enemy to rally against. Meanwhile, hydrocarbons enable political elites to generate revenue without relying on taxation. This, in turn, enhances populism, as the revenues can be distributed directly through large-scale projects that bolster support for figures like Saddam Hussein—or any other authoritarian leader—not only in Iraq but across the region.

How have legacies of colonialism, militarization, and post-conflict governance contributed to the entrenchment of populist and authoritarian leadership styles in the Middle East?

Dr. Ibrahim Al-Marashi: In the case of Iraq, the British were always a convenient target to rally against. In other states ruled by France, for example, populations could similarly rally against the legacy of the colonial power. Even if you look at ISIS as a kind of populist and terrorist group, its goal of dismantling borders was an attempt to mobilize the masses—not just in the Middle East, but across the entire Muslim world. Saddam Hussein framed the invasion of Kuwait as an effort to erase borders established by British colonialism, making it a similarly convenient rallying point. And then, let’s not forget the United States. In the case of the Houthis, for instance, their appeal extends not only beyond Yemen but throughout the region, as they are perceived as one of the last groups seeking agency in a region largely shaped by US control. This is the legacy: there are concrete historical borders that have divided communities, but there is also, in the collective imagination, a persistent target around which to rally. 

Authoritarianism Has Become the New Norm—This Is the Future

From a populism perspective, how are shifting global power dynamics — especially the rise of multipolarity and the return of a nationalist, transactional Trump administration — shaping authoritarian resilience and weakening democratic aspirations in the Middle East? In what ways might these trends bolster authoritarian populist movements across the region?

Dr. Ibrahim Al-Marashi: Authoritarianism has become normalized—it’s now the prevailing norm. Even though the US has often behaved in authoritarian ways, it at least used to pay lip service to the promotion of democratic governance around the world. I think that facade has now been abandoned. As a result, populist leaders can more or less say, “Look, if the US is adopting these behaviors, this is the new norm—this is the future.”

Even in the case of Russia, there appears to be, at the very least, a personalistic rapprochement—a relationship based more on the closeness of individual leaders than shared values. The emerging regime type in this multipolar world is personalistic—what you might call sultanistic—drawing on the term “Sultan,” as used by the academic Houchang Chehabi.

If that’s the case, then there is no longer a democratic model to aspire to. This increasingly looks like the wave of the future—the future of governance.

How has the populist rhetoric of the Trump administrations—particularly their framing of “radical Islam” and their regional double standards—impacted the legitimacy of state institutions and non-state actors in Iraq and the broader Middle East?

Dr. Ibrahim Al-Marashi: In this case, there are two dynamics at play. This ties back to your earlier question about transactional foreign policy. If Trump makes a deal with Iran over its nuclear program—well, the Iraqi Shia militias are essentially mass mobilization forces for the Shia population, and much of that mobilization is supported by Iran. If Iran enters negotiations with the US, it would have less incentive to continue backing those militias. That’s one example involving non-state actors.

Then there’s the other paradox: an escalation of the war against the Houthis in Yemen. Iran might choose to rein them in, but if not, the Houthis may continue attacking Red Sea shipping as a consequence of these ongoing tensions. This illustrates how transactionalism, populism, and non-state actors intersect in the region.

Transactional Diplomacy Fuels Sectarian Populism

Shiite fighters take position in the Shia village of Al-Zahra, Syria, amid intense clashes involving Hezbollah.
Photo: Ibrahim Khader / Pacific Press.

Could the Trump administration’s emphasis on transactional diplomacy further embolden sectarian and ethnic populism in conflict zones like Iraq, Syria, Lebanon and how?

Dr. Ibrahim Al-Marashi: It could—on two levels. Transactional diplomacy with Netanyahu might embolden him to act unilaterally in Syria and Lebanon, and perhaps even as far as Yemen. Israel’s actions in these areas could fuel sectarianism in several ways: it could lead to a resurgence of Hezbollah in Lebanon, embolden the Houthis, and prompt Israel to use the Syrian Druze minority as a proxy. That’s one pathway through which sectarianism might be intensified.

Israel might also be emboldened to target Iraq’s Shia militias, which are part of the so-called “axis of resistance.”

On the other hand, if this transactional diplomacy were to result in a grand bargain with Iran, those same actors might be reined in.

So it’s a double-edged sword—quite literally—in terms of how this foreign policy could shape the region.

In your view, how does the militarization of politics via militias such as the Popular Mobilization Units (PMU) reflect populist strategies of political mobilization in the Middle East, especially in terms of bypassing traditional democratic institutions and appealing to ‘the people’?

Dr. Ibrahim Al-Marashi: The PMU is really a broad body of militias. A good number of them first emerged to resist the US occupation of Iraq. Initially, I don’t think it was about bypassing democratic institutions. Many were mobilized because Ayatollah Sistani was able to rally the masses in response to the ISIS threat.

The way they later contributed to undermining institutions in Iraq was by becoming a parallel force to the Iraqi military, and eventually by playing a role against the protests that called for better governance and technocratic rule.

So it’s complicated. The Popular Mobilization Units emerged in response to the occupation, later served as Iranian proxies, then fought against ISIS, and eventually remained as a force that prevented the Iraqi military from maintaining a monopoly on violence—borrowing from Max Weber’s concept.

Again, we’re at an inflection point. I think it all hinges on a potential deal with Iran: whether these militias will be reined in and subsumed into the army or the security sector, or whether they will continue to act as spoilers to Iraq’s post-conflict governance structure.

When the State Fails, Militias Become the Security Provider

Mahdi Scouts boys during a funeral ceremony in Jannata, southern Lebanon, on February 9, 2017, for a Hezbollah military commander killed in the Syrian war.
Photo: Nabil Kassir.

Considering your work on COVID-19 and militia reinvention, how do crises (like pandemics or conflicts) serve as opportunities for populist-authoritarian actors in the Middle East to entrench power under the guise of serving ‘the people’—and how might this intensify under a second Trump presidency?

Dr. Ibrahim Al-Marashi: Take the case of the pandemic—and I’ll give you an example closer to home, where I am in San Diego. When COVID hit Mexico, you had drug cartels, like those formerly under Guzmán (El Chapo), distributing medical kits—such as masks and water—to people affected by COVID-19. In other words, when the Mexican security sector failed and the health sector also failed, these non-state actors filled the void. They became both the security and health sectors.

That’s exactly what happened with COVID-19 in the Middle East—in places like Lebanon, Yemen, and Iraq. It was the Houthis, Hezbollah, and the Shia militias that were disinfecting public spaces, distributing masks, and so on. What these places have in common is the collapse of the security sector. As Max Weber said, when the state no longer holds a legitimate monopoly on violence, non-state actors step in and become both the security and health providers.

This is ultimately an indictment of the weak health sectors in those societies. But the weak health sector is a reflection of a weak security sector—you don’t have an army capable of enforcing the state’s monopoly on violence. When the state is unable to provide basic services—what we call biopower, the ability to keep the population alive—you get necropolitics instead. That’s when the state is too weak to deliver health services, and violent non-state actors—cartels or militias—step in to fill the void.

How do Middle Eastern regimes employ especially Islamist populist rhetoric domestically to justify authoritarian practices, especially in an international environment increasingly tolerant of illiberal governance?

Dr. Ibrahim Al-Marashi: Islamist rhetoric is by definition an attempt to mobilize the masses through faith. When governance fails, you turn to divine governance to justify authority and appeal to the imagination. That’s how I would see it. It’s similar to using anti-colonial rhetoric—it’s more or less an appeal to the masses. When the public has very little faith in the structures that govern them, this is where Islamist rhetoric steps in to fill the gap.

Every Power Is Backing Proxies—Democracy Is No Longer the Goal

What role do you foresee for regional actors (such as Saudi Arabia, Iran, and Turkey) playing in either reinforcing authoritarianism or providing openings for democratic movements under these new global conditions?

Dr. Ibrahim Al-Marashi: You know, in that regional mix, I would also add Israel—and this is a post-October 7th development. I’ll tell you why. In 2011, during the Arab Spring, for the first time in the region’s history, the US more or less refrained from intervening in the fate of regimes. It allowed the regime of Hosni Mubarak, a longtime ally, to fall. In that vacuum, Saudi Arabia and Iran engaged in a regional cold war, with Turkey also entering the mix. It became, in effect, a three-way conflict.

What followed was a regional cold war accompanied by counter-revolutionary dynamics. One of those counter-revolutionary tendencies eventually prevailed. Turkey, Iran, and Saudi Arabia each chose sides—supporting different counter-revolutionary or revisionist forces. These rivalries played out through proxy conflicts.

Now, after October 7th, Israel has entered the fray.

So the region today looks very different from the era of Arab Spring optimism. Every major power is backing proxies to serve its own interests. And this is especially evident in Syria.

Consider how four actors are shaping Syria’s future: Turkey is deeply invested in the current Syrian government; Israel is working to expand its presence; Saudi Arabia is wiping away Syria’s debts; and Iran is trying to preserve the influence it has lost. None of these four powers is interested in a transition to democratic governance in Syria. All are focused on maintaining their respective spheres of influence. In that sense, each is likely to reinforce autocratic tendencies. They are more inclined to back warlords as proxies than to support any meaningful democratic transition.

Given the historical reliance of Middle Eastern authoritarian regimes on external patrons, how might a US foreign policy under Trump 2.0 reshape alliances, especially with regimes facing internal legitimacy crises?

Dr. Ibrahim Al-Marashi: I think the best case in point is how close the Trump administration was to Saudi Arabia. So it might try, in a second term, a strategy of offshore balancing—essentially carving out spheres of influence in a multipolar system and telling Saudi Arabia: “We’re not really concerned about your human rights issues, but you maintain order in the Gulf.”

The US would provide as many weapons as needed, and of course, Trump would say, “You have to pay for them,” to boost his standing domestically. But the message would be: it’s your job to be the policeman in the Gulf. That’s what I mean by offshore balancing.

The same approach would likely apply to Israel. That doesn’t bode well for the future of Palestinian governance, and Saudi Arabia would have little incentive to address human rights issues—as long as it continues to receive a blank check from Washington.

Authoritarianism Is Not Just Tolerated—The Masses Are Seen to Want It

Protest march in Beirut against Lebanon bombing by Israel. Photo: Sadık Gulec.

Could the erosion of liberal democratic norms in the West, accelerated by populist leaders like Trump, provide ideological “cover” for Middle Eastern populist-authoritarian leaders? How?

Dr. Ibrahim Al-Marashi: Absolutely—especially now that the US doesn’t even go through the motions of paying lip service to human rights.

From the perspective of international relations theory—specifically constructivism—a new norm has been constructed: not only can authoritarian governance be tolerated, but the masses actually want it. It’s no accident that the masses elected someone like Trump. Or, to go further, take the case of El Salvador—you have another kind of authoritarian-populist leader who is more or less aligned with the Trump administration’s approach.

And I think that’s become a model for the rest of the world. Regimes can now say: not only does the US want strongman leadership, but you—the people—want it too. Because a strong hand gets things done.

In what ways might regional populist movements exploit global discourses of “national sovereignty” and “anti-globalism,” championed by Trumpism, to consolidate authoritarian rule?

Dr. Ibrahim Al-Marashi: When you talk about discourses and global dynamics, there’s an important element here called digital populism. All of this is enabled because politics now also occurs on a digital plane. More or less, digital platforms have become a way for authoritarian regimes to bypass traditional media structures and appeal directly to the masses—especially in cases where traditional media has not yet been fully co-opted by authoritarian leaders. So, to answer your question, digital populism is the key. It’s the mechanism through which these discourses become normalized and reach mass audiences.

Exclusion, Not Sectarianism, Is the Real Threat

Given the weakening of traditional international pressure for democratization, do you foresee populist movements in the Middle East mutating toward more overt forms of sectarianism, ethno-nationalism, or exclusionary politics?

Dr. Ibrahim Al-Marashi: I would say more toward exclusionary politics. And here’s why: if we look at this in terms of ethno-sectarianism, I wonder if the region has been exhausted by those challenges. Let me explain what I mean. At one point, the so-called “axis of resistance” included Persian Twelver Shia Iran; Arab Twelver Shia militias in Iraq; an Arab Alawite regime in Syria; Arab Twelver Shia Hezbollah; Zaydi Shia Houthis in Yemen; and Arab Sunni Islamist groups like Islamic Jihad and Hamas. Of course, that axis of resistance has been dealt a very heavy blow in recent years. But the fact that such an ideologically diverse coalition could form makes me question whether the ethno-sectarian frame has been over-fetishized. There are other, more complex realities on the ground.

I think ethnic and sectarian identities are securitized—that is, they are instrumentalized when convenient for those in power, and then abandoned when such divisions no longer serve political interests. So, if that’s the case, I see the trajectory more in terms of exclusionary politics. Populism becomes a mask to mobilize the masses—but always at the expense of issue-based politics and inclusive governance. Those who are excluded often include civil society actors, journalists, and ethnic minorities, for example.

In a context where Western powers show declining interest in promoting democracy abroad, is there still space for bottom-up democratization efforts in the Middle East, or are we entering a phase of entrenched authoritarianism?

Dr. Ibrahim Al-Marashi: I don’t think they’re mutually exclusive. We are indeed facing entrenched authoritarianism, but I would also say—thinking back to digital populism—that if authoritarianism is being entrenched through digital means, then perhaps bottom-up approaches can also survive through digital spaces.

I’m thinking, for example, of the digital hacktivist collective Anonymous. During the Arab Spring, when various regimes tried to crush protests, Anonymous hacked into state systems in support of the protesters. That’s just one example.

Because, of course, ideas can’t be killed, right? And the one sphere that hasn’t been fully subsumed by the state is still the digital realm. I think that’s where these democratic ideas and efforts can continue to exist.

Does that necessarily translate into on-the-ground resistance? That has yet to be seen. But at this particular inflection point, I believe that’s where the ideas will, at the very least, find refuge.

If Silence Is Spreading in the US, Imagine How Much Worse It Is in the Middle East

And finally, Professor Al-Marashi, considering the weakening of global democratic norms, how can civil society actors in the Middle East adapt their strategies for resistance and survival amid a more authoritarian-friendly international environment?

Dr. Ibrahim Al-Marashi: Again, I refer to my previous answer. I think, at the end of the day, these groups might survive digitally. They’ll be able to organize online. But to be honest, if you look at how the region has been transformed since 2011, it does not look good. So many of these actors are barely surviving.

At the end of the day, the ideas might persist—through digital activism, through art, through artivism.

But I’m speaking from the US, where even here, the ability to speak openly—on campuses, for example—is being threatened. If I can sense a wave of silence coming here, I can only imagine how much worse it must be in the Middle East.

Social Media

How Identity Shapes Perception in a Polarized World: Insights from an Online Survey Experiment with AI-Enhanced Media

Who do people trust in politics, and why? Our online survey experiment reveals that trust and credibility are driven less by emotional victimization narratives and more by partisanship. Political messages resonate most when they align with the audience’s ideological beliefs, overshadowing the impact of emotional appeals. These findings highlight the power of identity in shaping perceptions and the challenges of bridging partisan divides in today’s polarized landscape. Tailored messaging that speaks to shared values remains key to building trust and engagement.

By Ihsan Yilmaz, Ana-Maria Bliuc & Daniel S. Courtney*

Introduction: A New Battleground for Ideas

In today’s hyperconnected world, the arena of political debate has shifted from parliaments and rallies to the digital stage of social media. Here, every post has the potential to build trust or spark outrage, amplifying voices and emotions in ways that redefine public discourse (Huszár et al., 2021; Yarchi et al., 2020). Our study examines this dynamic space, exploring how political affiliation and narratives of victimhood shape perceptions of credibility and emotional engagement.

At the heart of this research lies a fundamental question: “How do people decide whom to trust?” Furthermore, we ask, “How do identity and emotion shape these judgments?” To explore these questions, we conducted an experiment that reflects the digital realities of political communication. Using AI-generated posts, participants were exposed to messages from representatives of two ideologically distinct UK political parties: the right-wing populist Reform UK and the progressive Green Party. Some posts portrayed the communicators as victims of political persecution, while others focused solely on party platforms. By combining new technology with real-world political dynamics, the study examines the intersection of emotion, trust, and identity. As social media increasingly becomes the dominant arena for political persuasion (De Zúñiga et al., 2022a, 2022b), understanding how these factors influence public opinion is essential for addressing the challenges of modern democracy.

The Online Survey Experiment

Imagine scrolling through your social media feed, where political messages compete for attention amid a sea of hashtags and soundbites. This study aimed to replicate that environment by exposing participants to custom-designed posts that mirrored the type of content people encounter daily on platforms like X. After engaging with these posts, participants provided their reactions through a survey, enabling us to measure two critical factors: the extent to which they trusted the messages and their emotional responses.

The experiment was designed to explore the relationship between political affiliation and emotional appeals. To achieve this, we introduced a fictional candidate representing either the Green Party or Reform UK. The fictional candidate either portrayed themselves as a victim of political persecution or focused solely on communicating their party’s agenda, avoiding any mention of personal hardship. This deliberate design allowed us to investigate how the intersection of identity politics, emotional narratives, and party alignment shapes perceptions of trust and credibility in the digital age.

By simulating the dynamics of online political discourse, the study offers insights into how emotional and ideological cues influence the way people perceive and engage with political messaging. In a world where social media serves as the primary battleground for political persuasion, understanding these mechanisms is more critical than ever.
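The design described above is a standard 2x2 between-subjects experiment: each participant sees a single AI-generated post crossing the candidate's party (Green Party vs. Reform UK) with the message frame (victimization vs. policy-only), and then rates the message. As a purely illustrative sketch, the fragment below shows what random assignment to the four cells and a per-cell mean of trust ratings might look like; the condition labels, the trust scale, and all data are hypothetical assumptions, not the authors' actual materials or analysis pipeline.

```python
import random
from statistics import mean

# Illustrative sketch of a 2x2 between-subjects design like the one described
# in the text. Condition names and the trust scale are assumptions for
# demonstration only, not the study's actual materials.
PARTIES = ["green", "reform_uk"]
FRAMES = ["victimization", "policy_only"]

def assign_conditions(n_participants, seed=0):
    """Randomly assign each participant to one party x frame cell."""
    rng = random.Random(seed)
    return [(rng.choice(PARTIES), rng.choice(FRAMES)) for _ in range(n_participants)]

def cell_means(responses):
    """Average trust rating (e.g., on a 1-7 scale) per party x frame cell.

    `responses` is a list of ((party, frame), rating) pairs; empty cells
    are omitted from the result.
    """
    cells = {(p, f): [] for p in PARTIES for f in FRAMES}
    for (party, frame), rating in responses:
        cells[(party, frame)].append(rating)
    return {cell: mean(ratings) for cell, ratings in cells.items() if ratings}

# Toy usage with fabricated ratings, purely to show the data shape.
conditions = assign_conditions(8, seed=42)
responses = [(cond, 4) for cond in conditions]
print(cell_means(responses))
```

In a real study, the per-cell comparison would of course be replaced by an appropriate inferential test (for instance, a two-way ANOVA over the party and frame factors), but the assignment logic is the same.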

Trust and Emotions in a Polarized World

In the polarized world of politics, trust and credibility are often elusive goals. Our preliminary findings offer intriguing insights into what makes political messages resonate—or fail—depending on the audience. Messages framed as coming from a Green Party candidate consistently inspired higher trust and credibility compared to those attributed to a Reform UK candidate, particularly among our predominantly left-leaning sample. This trend held true regardless of whether victimization narratives were employed. In essence, partisanship outweighs narrative.

Interestingly, our findings suggest that victimization narratives—often a powerful emotional tool in political rhetoric—did not significantly influence trust or credibility within the same political frame. Instead, political ideology appears to act as a barrier to rhetorical strategy, lending credence to the idea that partisans are stuck in echo chambers that prevent them from accepting opposing messages.

But there’s another layer to this story: political alignment. For left-leaning individuals, the Green Party’s message was consistently rated as more trustworthy and credible than Reform UK’s, across all scenarios. On the other hand, right-leaning individuals were less swayed by the policy frame, though they showed a slight preference for Reform when victimization narratives were included. This dynamic highlights how deeply our ideological beliefs shape the way we perceive political communication, with certain rhetorical strategies being effective for some but not others.

These findings carry a potentially important lesson for political strategists and communicators. Tailoring messages to match the values and priorities of a target audience is not just effective—it’s essential. While the Green Party’s approach seemed to appeal broadly, Reform struggled to build credibility, especially among left-leaning individuals. Furthermore, it seems that relying on victimization narratives alone may not be enough to shift perception among those not already susceptible to such rhetoric. Ultimately, it’s more likely that the strength of the policy message and its alignment with the audience’s worldview could make the real difference.

These preliminary findings are interesting because they remind us that politics isn’t just about policies; it’s about people. To win trust and build credibility, politicians need to understand the hearts and minds of those they seek to persuade. And that means crafting messages that resonate not only with their base but with the broader public.

Implications for Politics and Polarization

In an era of deepening political divides, understanding the dynamics of trust and credibility in communication is essential. This study highlights a key insight: the identity of the messenger often outweighs the content of the message. For political actors, this reality carries significant implications, particularly in creating narratives that resonate across ideological lines.

Reassessing Victimhood Narratives

While victimhood is often portrayed as a powerful rhetorical device (Armaly & Enders, 2021; Hronešová & Kreiss, 2024), this research suggests that its impact on trust and credibility may be overstated, particularly when the core message lacks alignment with the audience’s values. Although victimization can evoke empathy, it is not a panacea for overcoming ideological divides. Rather than relying on emotional appeals, politicians must recognize that the framing of their policies—and their alignment with the audience’s worldview and expectations—plays a far more critical role in shaping (positive) perceptions.

The Polarizing Force of Echo Chambers

The findings also bring to attention the risks of echo chambers. People are far more likely to trust narratives that align with their political affiliations, reinforcing a selective feedback loop that limits exposure to diverse perspectives. This dynamic can deepen polarization, narrowing opportunities for meaningful dialogue (Bliuc et al., 2024). As trust becomes a partisan commodity, the gap between ideological groups grows, making it increasingly difficult to foster a shared sense of reality.

The Emotional Costs of Division

The emotional divide between ideological camps is another sobering takeaway. Messages from one’s political in-group are often met with admiration and pride, while those from the out-group trigger anger and distrust. This emotional schism exacerbates societal divisions, perpetuating cycles of antagonism (Harteveld, 2021; Whitt et al., 2020). Over time, these entrenched emotional responses weaken the potential for compromise, dialogue, and understanding.

A Broader Context of Political Messaging

Beyond immediate political debates, the study hints at broader social trends, including the role of nostalgia and perceived cultural threats. Those who feel their heritage or values are under siege may be particularly vulnerable to messaging framed around urgency or loss. In this context, the study shows a troubling tendency in modern politics: the weaponization of emotion (see also Hidalgo-Tenorio & Benítez-Castro, 2021). When political narratives prioritize emotional impact over substantive discussion, the space for genuine policy debate diminishes.

Concluding Remarks: Toward a Healthier Democracy

If we are to support a more informed and less polarized democracy, we must move beyond strategies that merely stoke division. This begins with critical media literacy: citizens must learn to question not only the content of political messages but also the emotional appeals embedded within them. For political leaders, the challenge lies in promoting narratives that resonate without exploiting emotions or deepening divides. Authenticity, truthfulness, and a commitment to civil disagreement should guide political communication.

Overall, political narratives are likely to achieve their greatest impact when they resonate strongly with a partisan audience by aligning with its values, beliefs, and identity. Such tailored appeals foster emotional engagement and reinforce shared purpose among supporters. Without this initial alignment, narratives will likely fail to mobilize the base or sustain its commitment.

At its heart, this research reflects the increasing complexity of the relationship between emotion, trust, and political identity. It reminds us that our perceptions of credibility are shaped as much by how we feel as by what we hear. To navigate the volatile waters of modern politics, we must prioritize intellectual humility and create spaces for open dialogue. Only by bridging ideological divides can we build a society where trust transcends partisan loyalties, paving the way for a more inclusive and informed democracy.


 

Funding: This work was supported by the Australian Research Council (ARC) under Discovery Grant DP220100829, “Religious Populism, Emotions and Political Mobilisation,” and Discovery Grant DP230100257, “Civilisationist Mobilisation, Digital Technologies and Social Cohesion,” as well as by the Gerda Henkel Foundation (AZ 01/TG/21), “Emerging Digital Technologies and the Future of Democracy in the Muslim World.”


 

(*) Daniel Sebastian Courtney is a PhD candidate in Psychology at the University of Dundee, focusing on the impact of publicly sharing opinions on overconfidence. His research interests include social media behavior, conspiracy ideation, misinformation, polarization, and nationalism. He holds an MSc in Developmental Psychology from the University of Dundee, with a dissertation on nonlinguistic context effects on reading times in social media posts, and an MA in Applied Linguistics from the University of Birmingham, where he explored metaphor and metonymy in English and Japanese. Courtney has published on vaccine hesitancy and collective action, and has extensive teaching experience in psychology and English, including positions at Meiji Gakuin, Sophia, Obirin, and Josai International universities in Japan.


References

Armaly, M., & Enders, A. (2021). ‘Why Me?’ The Role of Perceived Victimhood in American Politics. Political Behavior, 44, 1583–1609. https://doi.org/10.1007/s11109-020-09662-x

Bliuc, A. M., Betts, J. M., Vergani, M., Bouguettaya, A., & Cristea, M. (2024). A theoretical framework for polarization as the gradual fragmentation of a divided society. Communications Psychology, 2(1), 75. https://doi.org/10.1038/s44271-024-00125-1

De Zúñiga, H., González-González, P., & Goyanes, M. (2022a). Pathways to Political Persuasion: Linking Online, Social Media, and Fake News With Political Attitude Change Through Political Discussion. American Behavioral Scientist. https://doi.org/10.1177/00027642221118272

De Zúñiga, H., Marné, H., & Carty, E. (2022b). Abating Dissonant Public Spheres: Exploring the Effects of Affective, Ideological and Perceived Societal Political Polarization on Social Media Political Persuasion. Political Communication, 40, 327–345. https://doi.org/10.1080/10584609.2022.2139310

Harteveld, E. (2021). Fragmented foes: Affective polarization in the multiparty context of the Netherlands. Electoral Studies. https://doi.org/10.1016/J.ELECTSTUD.2021.102332.

Hidalgo-Tenorio, E., & Benítez-Castro, M. (2021). Trump’s populist discourse and affective politics, or on how to move ‘the People’ through emotion. Globalisation, Societies and Education, 20, 86–109. https://doi.org/10.1080/14767724.2020.1861540

Hronešová, J., & Kreiss, D. (2024). Strategically Hijacking Victimhood: A Political Communication Strategy in the Discourse of Viktor Orbán and Donald Trump. Perspectives on Politics. https://doi.org/10.1017/s1537592724000239.

Huszár, F., Ktena, S., O’Brien, C., Belli, L., Schlaikjer, A., & Hardt, M. (2021). Algorithmic amplification of politics on Twitter. Proceedings of the National Academy of Sciences of the United States of America, 119. https://doi.org/10.1073/pnas.2025334119

Whitt, S., Yanus, A., McDonald, B., Graeber, J., Setzler, M., Ballingrud, G., & Kifer, M. (2020). Tribalism in America: Behavioral Experiments on Affective Polarization in the Trump Era. Journal of Experimental Political Science, 8, 247–259. https://doi.org/10.1017/XPS.2020.29

Yarchi, M., Baden, C., & Kligler-Vilenchik, N. (2020). Political Polarization on the Digital Sphere: A Cross-platform, Over-time Analysis of Interactional, Positional, and Affective Polarization on Social Media. Political Communication, 38, 98–139. https://doi.org/10.1080/10584609.2020.1785067

A scene from the International Conference on Populisms, Digital Technologies, and the 2024 Elections in Indonesia, hosted at Deakin University.

International Conference on Populisms, Digital Technologies, and the 2024 Elections in Indonesia

 

DOWNLOAD CONFERENCE BOOKLET

 

We, as ECPS, are excited to share the videos of thought-provoking sessions from the International Conference on Populisms, Digital Technologies, and the 2024 Elections in Indonesia, hosted by Deakin University in collaboration with the European Center for Populism Studies (ECPS), Universitas Indonesia, and Universitas Gadjah Mada. This event, made possible by the generous support of the Australian Research Council (ARC), the Alfred Deakin Institute for Citizenship and Globalization (ADI), and ECPS, offers invaluable insights into Indonesia’s dynamic political landscape following its multi-level elections.

Held at the Alfred Deakin Institute, the conference provided a platform to discuss the evolving roles of populism, digital technology, AI, disinformation, religion, and socio-political forces shaping Indonesia’s democratic discourse. Over two engaging days, 31 papers were presented across eight panels, each diving into specific aspects of populism—from Gender and Youth to the impacts of Sharp Power, Disinformation, and Cancel Culture, and from Populist Strategy and Communication to Authoritarianism and Islamist Populism.

The conference also featured keynote addresses from two esteemed scholars: Professor Simon Tormey, an expert in populism theory, and Professor Vedi Hadiz, known for his work on Islamic populism in Indonesia, each bringing depth and perspective to this vital topic.

Don’t miss the opportunity to engage with these groundbreaking discussions—watch the full conference videos, thanks to the support of ARC, ADI, and ECPS.

 

Welcoming Speech

By Dr. Fethi Mansouri (Deakin Distinguished Professor and Founding Director of the Alfred Deakin Institute for Citizenship and Globalisation at Deakin University, Australia).

 

Keynote Speech

“The Populism Puzzle – Sociological Approach,” by Dr. Simon Tormey (a political theorist and Executive Dean of Arts and Education at Deakin University, Australia).

 

Panel 1: Populism and Gender

 

Panel 2: Populist Strategy and Communication

 

Panel 3: Youth

 

Panel 4: Sharp Power, Disinformation, Cancel Culture

 

Panel 5: Authoritarianism

 

Panel 6: Populist Rhetoric

 

Donald Trump and Elon Musk on the X social media platform. Photo: Rokas Tenys.

Professor Nownes: Ceding Too Much Power to Tech Giants Poses a Threat to Democracy

Highlighting Elon Musk’s dual role as a private tech mogul and a potential quasi-governmental leader under US President-elect Donald Trump, Professor Anthony J. Nownes underscored the dangers of unregulated private power intersecting with public institutions. He emphasized that ceding excessive power to any private interest—whether in the tech industry or another sector—poses a significant threat to democracy. Illustrating this concern, Professor Nownes pointed to the proposed “Doge Department,” noting, “Unlike actual government departments with conflict-of-interest rules, such private entities lack safeguards, making them a potential avenue for unchecked influence over public resources.”

Interview by Selcuk Gultasli

In an illuminating discussion with the European Center for Populism Studies (ECPS), Professor Anthony J. Nownes, a political science expert from the University of Tennessee and co-author of the book The New Entrepreneurial Advocacy: Silicon Valley Elites in American Politics, offered his insights on the growing influence of tech elites and its implications for democracy. Centering on the theme of the delicate balance between private power and public accountability, Professor Nownes emphasized a pressing concern: “Ceding too much power to any private interest—whether the tech industry or any other sector—poses a threat to democracy.”

Highlighting Elon Musk’s dual role as a private tech mogul and a potential quasi-governmental leader under US President-elect Donald Trump, Professor Nownes pointed out the dangers of unregulated private power intersecting with public institutions. He explained, for instance, the risks of the proposed “Doge Department” (or Department of Government Efficiency), stating that “unlike actual government departments with conflict-of-interest rules, such private entities lack safeguards, making them a potential avenue for unchecked influence over public resources.”

Turning to the broader historical context, Professor Nownes compared today’s tech moguls to past industrial giants. While corporate influence is not a new phenomenon, he argued that the tech industry’s vast resources and rapid innovation—outpacing government regulation—make its impact unique. Using examples like Microsoft protecting Ukraine from cyberattacks and SpaceX ensuring Ukrainian connectivity, Professor Nownes highlighted how tech companies wield unprecedented power over geopolitical and societal outcomes.

On the issue of lobbying and political advocacy, Professor Nownes delved into the disproportionate focus of Silicon Valley philanthropy on post-material causes, such as environmental conservation and DEI (Diversity, Equity, and Inclusion), rather than structural inequalities. He warned that this prioritization risks sidelining critical issues like income inequality and homelessness, leaving a vacuum often filled by populists like Donald Trump, who, while lacking substantive solutions, at least address these concerns rhetorically.

Professor Nownes also discussed the erosion of public trust in tech companies, exacerbated by scandals such as Cambridge Analytica. Referencing a Pew study that found 78% of Americans believe social media companies wield too much political power, he noted that despite this skepticism, tech giants have not yet faced significant political or economic repercussions. However, he foresees this changing, particularly as ethical considerations—such as the negative effects of social media on children—gain political traction.

Professor Nownes also addressed the future of American democracy under a second Trump administration. While cautiously optimistic about its survival, he acknowledged the erosion of democratic norms and the slow response of legal institutions to recent challenges. His reflections offer a sobering reminder of the delicate equilibrium between private power and public accountability, as well as the need for vigilance in preserving democratic principles in the face of rapid technological and political change.

Professor Anthony J. Nownes is a political science expert from the University of Tennessee and co-author of the book The New Entrepreneurial Advocacy: Silicon Valley Elites in American Politics.

The following is a lightly edited transcript of the interview with Professor Anthony J. Nownes.

Tech Titans Shape Public Discourse by Spotlighting Key Issues

Professor Nownes, thank you very much for joining our interview series. Let me start right away with the first question. How do you view the growing influence of tech elites in shaping political agendas? Are they effectively becoming a new form of political aristocracy? How has the concentration of economic power among tech giants influenced the balance of political power in the United States? Could you discuss whether their dominance undermines or enhances democratic institutions?

Professor Anthony J. Nownes: First of all, thank you for having me here today. I appreciate the opportunity. I’m glad you phrased the first part of your question the way you did, focusing on agendas rather than policy outcomes. This distinction is important. There’s no question that tech elites shape the political agenda. Let’s start with Elon Musk. He’s the wealthiest man on earth and commands significant media attention for almost anything he does. Beyond that, his direct involvement in media platforms like Twitter—now X—and others like Instagram amplifies his influence. His posts, or whatever they’re called now, and his public statements certainly affect which issues people think about.

This doesn’t necessarily mean people agree with him, but it does mean they see what he says and often recognize the issues he highlights as important. Elon Musk is not alone in this regard. Other tech elites—Mark Zuckerberg, Reid Hoffman, Tim Cook, and many others—also have massive social media followings. While they may not always achieve their desired policy outcomes, there’s no doubt that the issues they publicly engage with are those that garner significant public attention. In this way, they have considerable success in shaping the political agenda.

Now, regarding the second part of your question about the concentration of economic power, these are, of course, challenging questions to answer definitively. Speaking both as a scholar and a citizen, I would argue that whenever the government cedes too much power to private actors, it risks undermining democracy. The government should and must work with private actors—after all, in a capitalist system, the economy’s health depends largely on the private sector’s vitality. But the government has its own role here, and at least theoretically, that role is to look out for the rest of us. I believe that ceding too much power to any private interest—whether the tech industry or any other sector—poses a threat to democracy. To demonstrate this, let me highlight some of the perils of granting excessive power to private actors.

Take, for example, the so-called “Doge Department.” You may not be familiar with this, but it’s the quasi-governmental body Donald Trump has claimed he has already begun forming. Officially called the Department of Government Efficiency (I use air quotes because it’s not actually a government department), it’s essentially a quasi-governmental—or really, a non-governmental—organization. Trump has reportedly chosen Elon Musk and Vivek Ramaswamy to head it, with the stated goal of making the government more efficient.

Real government institutions, agencies, and bureaucratic departments operate under strict rules and regulations. These rules dictate who they can hire, what sorts of behavior are and are not allowed in the workplace, the qualifications required for employment, and, crucially, who the department is accountable to.

Now, imagine this organization gets up and running. Suppose, within six months, Trump grants it actual power. There would be little to stop someone like Elon Musk from making decisions that, for example, ensure his companies receive lucrative government contracts while his competitors do not. Unlike actual government departments, which have conflict-of-interest rules and similar safeguards, a private, non-governmental organization like this lacks such mechanisms.

This is one of the clearest examples of what could go wrong when excessive power is given to private actors within a democratic system. It underscores the importance of maintaining strict oversight and clear boundaries between public institutions and private entities to preserve the integrity of democratic governance.

Power of Tech Giants Today Is Unprecedented Compared to Past Corporate Interests

Facebook CEO Mark Zuckerberg at a press conference at VIVA Technology (Vivatech), the world’s rendezvous for startups and leaders, in Paris, France on May 24, 2018. Photo: Frederic Legrand.

Looking at the historical relationship between corporate power and politics, how does the role of hi-tech oligarchs compare to past industrial moguls in shaping American political landscapes? Is this a continuation of corporate influence, or does the unique nature of digital platforms present new challenges?

Professor Anthony J. Nownes: For the first part of your question, I’d like to preface my response by acknowledging that I’m not a historian, so I hesitate to draw extensive comparisons between current tech oligarchs and past industries in American politics. That said, it’s certainly not unprecedented for a powerful industry to wield significant influence over political outcomes in this country. 

For instance, every school kid in the US learns about the robber barons of the Gilded Age. Additionally, the tobacco industry wielded extraordinary political power for decades, successfully staving off serious regulation of tobacco products. Throughout US history, doctors, the insurance industry, and other healthcare providers have collectively spent immense amounts of money lobbying against socialized medicine, with considerable success. So, corporate influence in politics is nothing new—it has been a feature of the American Republic from its very beginning.

However, I think it’s worth noting that the tech industry is different in several respects from previous industries that wielded political power. One key difference is the almost unfathomable resources these companies possess. As an industry and even at the individual company level, tech entities have significantly more wealth and resources than many nation-states. This is unprecedented.

Another difference is the rapid pace of innovation within the tech industry, which often outpaces the ability of governments and regulatory agencies to keep up. For example, SpaceX is currently more capable than the US government when it comes to space exploration. Similarly, Alphabet (Google) is far ahead of the US government—and likely any other government—in developing and deploying artificial intelligence. This gives tech companies tremendous influence over our lives, even if that influence is not overtly political.

I believe the rise of the tech industry introduces challenges that are different from those posed by previous corporate powers—some of which we may not even fully understand yet. For instance, consider the war in Ukraine. Tech companies are not directly involved in the conflict, yet they are significantly affecting events on the ground. Microsoft, for example, protects Ukraine from cyberattacks. SpaceX ensures that Ukrainians remain connected to the Internet. I recently read that Google has removed images of Ukraine from its open-source maps. These actions, while not traditionally political, have a profound impact on real-world political and international events. In this sense, the power of the tech industry over people’s lives is unprecedented compared to the influence wielded by previous corporate interests. 

‘Leave Us Alone’ Ethos Shapes Platforms and Policies

How do hi-tech firms’ lobbying activities compare to other industries in terms of expenditure and focus? Specifically, what does the dominance of issues like taxes, intellectual property, and technology indicate about their priorities in shaping US policy? How does this concentrated influence by a few tech giants affect policymaking transparency and public interest considerations?

Professor Anthony J. Nownes: I certainly understand why both the media and ordinary people are focusing on tech lobbying and the political influence of big tech. However, I think it’s important to recognize that the tech industry is just one of many industries in this country that spend hundreds of millions of dollars every year attempting to influence policy and elections.

There are other perennial heavyweight industries that spend on a similar scale to the tech industry. For example, the pharmaceutical industry, the health insurance industry, securities and investment companies, and the oil and gas industry are all highly politically active and spend significant sums. In that sense, the tech industry is not fundamentally different from other high-profile, politically active industries.

As for your final question, I found it interesting the way you phrased it. I study what we call public interest groups or non-governmental organizations in this country, which are comprised of individual members. At this point, there simply aren’t many public interest groups—or what we might also call citizen groups—working on the opposite side of the issues that big tech is pushing.

In many other industries, there are countervailing groups. For instance, in the oil and gas industry, there are hundreds of environmental groups in the United States. While they don’t have the same resources as oil and gas companies, they’ve managed to achieve a number of political victories over the past several decades. Similarly, healthcare and pharmaceutical companies often contend with public interest groups—especially senior citizen organizations—that lobby against them on issues like the cost of prescription drugs and government programs. Currently, I don’t see many public interest groups or citizen groups actively working to counterbalance the power of big tech. This, I believe, is another way in which this corporate sector is somewhat unusual.

What role do tech oligarchs play in shaping public discourse, and how do their personal ideologies influence the policies and practices of their platforms? Are we witnessing a new form of political lobbying through algorithmic curation and platform management?

Professor Anthony J. Nownes: This question seems almost perfectly shaped to refer to Elon Musk. Certainly, his personal ideology seems to affect every aspect of his newest company, X. I think there’s an element of this influence among other tech moguls as well.

It’s a cliché, but I believe it’s accurate to say that many of these individuals, even those who have traditionally supported center-left or left causes and the Democratic Party, are at their core economic libertarians. They are libertarians on social issues as well, but their general ethos of “leave us alone and let us do what we want” seems to permeate how they run their companies. I’m particularly thinking here of platforms like Facebook, Instagram, YouTube, and X. It doesn’t take much time spent on these platforms to realize that a fairly libertarian ethos influences what happens on them.

As for your second question, I’m not entirely sure I know enough about algorithms and platform management to say much definitively. However, I can say this: it seems to me that the conservative criticism these companies faced during the first Trump administration did affect some of their practices. For example, this criticism likely influenced content moderation policies, decisions to label certain material as misinformation or disinformation, and determinations about who to platform and who to de-platform. So, I do think there is some evidence that algorithmic curation and platform management are having political effects.

Social Media Companies Contribute Significantly to Misinformation Epidemic

Given the rise of misinformation and polarization on social media platforms, do you believe tech companies bear responsibility for mitigating these issues, or should this be addressed through government regulation? How do we balance such regulation with the principles of free speech? 

Professor Anthony J. Nownes: I’m not sure this is exactly how the question was intended—but I’ll answer it this way regardless, more as a citizen than as a scholar. Social media companies absolutely bear some responsibility for the explosion of misinformation and disinformation in this country. Of course, they don’t see it that way, but I think the evidence is overwhelming that they have contributed significantly to the epidemic of misinformation and disinformation in the US and elsewhere.

No matter how one feels about government regulation, it seems to me that there’s really only one entity in this country large enough, powerful enough, and well-resourced enough to rein in these companies: the federal government. The EU, of course, also has the capacity to impose regulations. However, these companies have shown very little commitment to addressing misinformation and disinformation on their own, so I see the idea of self-regulation as a bit of a nonstarter.

As for the free speech aspect of the issue, I don’t think balancing regulation with free speech is particularly difficult. We already do it all the time in other domains—for example, with tobacco advertising. I think the free speech defense offered by social media companies to justify their conduct is, frankly, somewhat nonsensical. We regulate many things in society without infringing on people’s rights to express themselves or act within legal boundaries.

Do you think the political donations and lobbying efforts by Silicon Valley’s tech executives disproportionately sway policy outcomes? Are there examples where their influence has significantly impacted legislation or political campaigns? With federal campaign finance laws being described as “byzantine and ever-changing,” what challenges do these laws pose in regulating the contributions of tech leaders, and how can these challenges be addressed without infringing on free speech rights?

Professor Anthony J. Nownes: Before addressing this question directly, I want to point out something that may already be familiar to many of you: for those of us who study interest groups, corporate influence, or lobbying, determining influence is incredibly difficult. The primary reason is the old adage: correlation does not equal causation.

For example, in the US, we see the gun lobby making significant contributions to right-leaning politicians, who then work diligently to maintain access to firearms. However, this doesn’t necessarily prove influence because these politicians likely would have acted the same way without the gun lobby’s financial support. Indeed, that alignment is often why the gun lobby supports them in the first place. As a result, proving policy influence is challenging, and at best, we can make educated guesses based on available evidence.

That said, it’s clear to me that the tech industry, like many others, has been highly influential politically. One prominent example is the tech sector’s campaign to preserve Section 230 of the Communications Decency Act. This legislation protects online platforms from being treated as publishers, granting them virtual legal immunity for content posted on their sites—a unique advantage in US law. Social media companies have invested significant time and resources at both the state and federal levels to ensure Section 230 remains intact.

Another notable example is Proposition 22 in California. Uber and Lyft spent substantial sums to secure their exemption from labor laws in one of the country’s most liberal states. Similarly, big tech firms, including Amazon, have successfully resisted legislation aimed at increasing transparency about how user data is utilized. On the individual level, tech leaders like Peter Thiel have played pivotal roles in the political ascendance of figures such as J.D. Vance.

As for the second part of your question about campaign finance laws, I think it’s essential for people to realize that campaign finance laws in this country, as they are currently configured, really can’t stop an individual or organization from pouring as much money as they want into our campaign finance system. Yes, there are regulations, and yes, these regulations can and do prevent the ultra-rich and well-resourced organizations from donating money directly to candidates for office. However, the way the laws are currently structured—and I don’t see this changing anytime soon—there is nothing the government or anyone else can do to stop a person or organization from spending unlimited sums of money to support candidates or parties they favor. 

For example, Elon Musk reportedly contributed something between $200 and $300 million to help Trump get elected. He’s not allowed to give that money directly to Trump, as the amount he can donate directly to a candidate is severely limited. But he is allowed to give that money to a Super PAC. In this case, he contributed to his own Super PAC, “America PAC.” All he had to do was hire one or two competent lawyers to ensure they followed the letter of the law, and there was nothing to stop him from funneling unlimited sums of money into the election. 

I see no evidence at all that either major political party has any appetite to change anything about the current system. As such, I view the question of regulating this kind of spending as rather moot. I do not see any significant reform in this area on the horizon.

Most People Are Getting All Their News from Podcasters

You discuss “super citizens” leveraging their wealth and public profiles to influence policy through media and social platforms. How do you see this form of direct advocacy evolving, especially with the growing influence of social media as an unmediated channel?

Professor Anthony J. Nownes: I’m not particularly skilled at predicting the future of politics, and I don’t feel I know enough about technology to provide a deeply insightful answer to this question. However, after reflecting on it, I can offer the following observation: One trend my co-author, Darren Halpin, and I have noted regarding the concept of “super citizens” is that an increasing number of people—particularly younger individuals, and especially younger men—are receiving all of their news, not just part of it, from individuals who are not traditionally part of the news industry. Figures like Joe Rogan and Theo Von, for instance, are immensely popular podcasters who exemplify what we term prototypical super citizens. These individuals initially gained fame through non-political activities but now wield considerable political influence through their podcasts.

I think the extent to which people rely on these sources for news and information is somewhat underappreciated. As traditional or legacy media continues to decline in importance, and in some cases disappears altogether, I believe we’re going to see much more of this phenomenon. Unfortunately, as a result, disinformation and misinformation are likely to become even bigger problems moving forward.

Your book suggests that Silicon Valley philanthropy tends to favor postmaterial causes, such as environmental conservation and arts, over redistributive efforts that address economic inequality. What implications does this trend have for addressing structural inequalities in American society?

Professor Anthony J. Nownes: The first part of the question addresses disproportionate funding for certain issues. A good example here is education. Our research showed that Silicon Valley figures and their foundations have spent considerable amounts of money over the past couple of decades on what they call education reforms—initiatives such as charter schools, privatization, and voucher schemes. There’s substantial evidence that this advocacy, and in some cases direct funding, has influenced state policies and school districts across the United States. This demonstrates how Silicon Valley’s prioritization of certain issues over others can have significant impact, though it remains challenging to definitively prove causation.

Regarding the disproportionate focus on post-material issues, the implications are far-reaching. This emphasis on post-material causes means that critical problems in the US, such as income inequality, homelessness, underemployment, poverty, and inadequate access to healthcare, are not prioritized in political discourse. To be a liberal in the 1930s meant focusing on the day-to-day economic interests of ordinary people. Today, however, left-leaning Silicon Valley elites often concentrate on issues like abortion, DEI (Diversity, Equity, and Inclusion) programs, LGBTQ rights, and global warming. While these issues are undeniably important, this shift has left economic concerns largely to right-wing populists like Donald Trump.

Although Trump does not approach these issues with serious policy solutions, he, at least, acknowledges them, which resonates with voters. The center-left’s overwhelming focus on post-material issues has been disastrous for the working class and has, in part, enabled the rise of Trump and other MAGA Republicans.

Regarding current political tendencies, there’s no question that some high-profile tech figures—Elon Musk being a prime example—have aligned themselves with Trump and the right. Others, such as Mark Zuckerberg and Jeff Bezos, appear to have softened their rhetoric toward Trump, even if they may not have supported him outright. However, voting and campaign finance records suggest that Silicon Valley employees, including both rank-and-file workers and many executives, remain largely Democratic and liberal-leaning.

I think that some high-profile names have definitely turned toward Trump. However, I don’t believe they have changed that much. It’s politics that has changed significantly. For example, even though many Silicon Valley employees—particularly the rank-and-file employees—haven’t changed much in their political tendencies, they are certainly more silent than they were 8 or 10 years ago.

I think some of the rhetoric coming from the tech titans—the entrepreneurs, owners, and founders—stems from sheer pragmatism. They understand Trump as a political reality, and this time, they want to position themselves favorably. As for employees, they see how the world has changed and likely feel there’s little reason to engage in protests, as it probably wouldn’t make a significant difference. Additionally, such actions could potentially get them into trouble at work. So, that was a bit of a rambling answer, but that’s my perspective.

People Believe Social Media Companies Wield Too Much Political Power

Elon Musk, founder, CEO, and chief engineer of SpaceX; CEO of Tesla; CTO and chairman of X (formerly Twitter); and co-founder of Neuralink and OpenAI, at VIVA Technology (Vivatech) in Paris, France, on June 16, 2023. Photo: Frederic Legrand.

Given the discussion on the erosion of public trust in tech firms due to scandals like Cambridge Analytica, what role do you think transparency and ethical considerations should play in maintaining the political capital of these companies?

Professor Anthony J. Nownes: I think it’s quite interesting. Given how much influence tech companies wield and how closely Donald Trump has aligned himself with Elon Musk, public opinion polls in this country clearly show that the vast majority of Americans are skeptical or even negative about tech companies. For example, in my lobbying class, I reference a Pew study from earlier this year that revealed 78% of Americans believe social media companies wield too much political power. To me, this is an astonishing figure.

Despite this widespread skepticism, these companies haven’t yet paid a significant political or economic price. However, I believe this may be starting to change, potentially influenced by recent political shifts among some tech leaders. What do I mean by this? Over the past couple of years, somewhat quietly, multiple states in the US have passed age verification laws for pornographic websites. While this development hasn’t garnered much media attention, I suspect social media companies are paying close attention. They may be wondering if similar regulations could soon target them, particularly given the growing discourse about the harmful effects of social media on children.

For instance, Jonathan Haidt’s highly successful book The Anxious Generation discusses these negative effects, particularly on children, and I think this conversation is beginning to permeate our political discourse. As a result, tech companies will likely need to start addressing the ethical considerations you mentioned. This growing dialogue and the precedent set by regulations on other industries might push tech companies to pay more attention to these issues in the near future.

And lastly, Professor Nownes, there are those pundits arguing that American democracy may not survive another Trump administration. How do you think American institutions will react to a second Trump administration?

Professor Anthony J. Nownes: Well, this is a tough question. For both professional and personal reasons, I’ll say this: Do I think American democracy will survive? Yes, I do. But what will it look like a few years from now? I honestly don’t know. I see some disturbing signs, particularly regarding democratic norms. Many of these norms have taken quite a hit over the last few years. The legal system, for example, has been quite slow in addressing certain actions, especially attempts by the president-elect to change the outcome of the last election. I think this remains an open question. I wish I had a more definitive answer, but at this point, I just don’t know.

Dr. Paul Levinson, Professor of Communication & Media Studies at Fordham University.

Professor Levinson: Elon Musk Must Choose Between Government Role and Control of X

Highlighting the dangers of overlapping corporate and governmental powers, Professor Paul Levinson cautioned, “I am deeply opposed to having the person who owns X also hold a high-ranking government position. That kind of overlap means the government could end up controlling communication platforms.” He elaborated on Musk’s ethical responsibility, stating that if Musk were a “true believer in free speech,” he would either divest from X or refuse a government post. However, Levinson expressed skepticism: “I think we both know he’s likely to do neither.” Levinson also voiced his deep concern for American democracy under a potential second Trump administration, describing it as “the worst threat to our democracy since the Civil War.”

Interview by Selcuk Gultasli

In a riveting interview with the European Center for Populism Studies (ECPS), Dr. Paul Levinson, Professor of Communication & Media Studies at Fordham University, discussed pressing concerns about the intersection of technology, politics, and democracy. Professor Levinson’s insights are especially timely, given Elon Musk’s rising influence as the owner of X (formerly Twitter) and his potential role in a second Trump administration. Highlighting the dangers of overlapping corporate and governmental powers, Professor Levinson cautioned, “I am deeply opposed to having the person who owns X also hold a high-ranking government position. That kind of overlap means the government could end up controlling communication platforms.”

Professor Levinson elaborated on Musk’s ethical responsibility, stating that if Musk were a “true believer in free speech,” he would either divest from X or refuse a government post. However, Professor Levinson expressed skepticism: “I think we both know he’s likely to do neither.”

Throughout the interview, Professor Levinson addressed the broader implications of concentrated power in technology. Despite concerns about billionaires like Musk or the owners of Facebook, Levinson pointed out that their influence has not yet stifled democratic impulses. “Social media provides a unique platform for individuals to disseminate the truth widely, even as it enables lies and fascism,” he noted, striking a balance in his evaluation.

On the issue of disinformation and algorithms, Professor Levinson argued that the negative impact of these technologies is often overstated. He acknowledged their role in targeted advertising, referencing Facebook’s data-sharing with Cambridge Analytica during the 2016 US election. However, he emphasized, “The blame lies not with the algorithms themselves but with the disinformation they are used to spread.”

Professor Levinson’s critique of governmental overreach was particularly sharp. Drawing historical parallels, he warned, “When governments gain such control, they can jeopardize democratic systems, even those that have existed for hundreds of years.” He cited the Thatcher administration’s suppression of unfavorable news during the Falklands War as a case study in the dangers of government-controlled communication.

Reflecting on Trump’s weaponization of “fake news,” Professor Levinson described it as a hallmark of fascism, akin to tactics used by Stalin and Hitler. He lamented, “It amazes me how many people have fallen for this tactic, despite the lessons we should have learned from history.”

Professor Levinson shared his deep concern for American democracy under a potential second Trump administration, describing it as “the worst threat to our democracy since the Civil War.” From absurd appointments to calculated assaults on institutions, Professor Levinson’s insights underline the precarious state of democratic governance in the digital age.

Here is the transcription of the interview with Professor Paul Levinson with some edits.

Democratic Impulses Persist Despite Billionaires’ Control Over Social Media

Illustration by Ulker Design.

Professor Levinson, thank you so very much for joining our interview series. Let me start right away with the first question. How do you perceive the influence of hi-tech oligarchs, such as Elon Musk, on the digital public sphere? Does the concentration of digital platforms in the hands of a few individuals pose a unique threat to democratic discourse? 

Professor Paul Levinson: Let me answer the second part of your question first. Everything new in communications can potentially threaten a democratic society. However, so far in our history—both the history of the United States and the history of democracies in general—new forms of communication have largely benefited democracy. In fact, they have often undermined dictatorships, autocracies, and oligarchies.

A notable example I often cite is the White Rose group in Germany during World War II. This courageous group of college students used a primitive Xerox machine to disseminate the truth about Nazi atrocities to the German public. Their efforts have always left a profound impression on me. Another example is from the final decade of the Soviet Union in the 1980s. There was something called Samizdat Video, a primitive video technology by today’s standards, but it was instrumental in undermining the autocracy of the Soviet regime, even under Gorbachev, who was probably the most enlightened Soviet leader.

With this historical perspective in mind, while I am always concerned about new technologies, I don’t believe social media presents an insurmountable threat to democracy. In fact, it cuts both ways. Social media enables lies, fascism, and the suppression of truth, which are central to fascistic systems. At the same time, social media provides a unique platform for individuals to disseminate the truth widely.

Now, regarding Elon Musk and other billionaires like those controlling Facebook, despite their unprecedented control over social media platforms, this has not yet prevented democratic impulses from finding expression through these platforms. 

The Negative Impact of Algorithms and AI Is Often Overrated

How do you address concerns about the unchecked power of tech companies to shape public discourse, especially when their decisions significantly influence political narratives? In what ways do algorithms on social media platforms amplify populist narratives, and how much responsibility should platform owners like Musk take for the political polarization these technologies can create?

Professor Paul Levinson: First, we’ve heard a lot about algorithms, and more recently, about AI. I think the negative impact of these technologies is often overrated. One area where algorithms have proven particularly effective is targeted advertising. This was evident during the 2016 election in the United States when Facebook provided Cambridge Analytica with detailed data about users—what they were sharing, liking, and discussing on the platform. This data allowed the Trump campaign—who, in this regard, were ahead of the Democrats in recognizing its potential—to tailor their ads to specific audiences. For instance, the ads weren’t wasted on someone like me, who wouldn’t have voted for Trump under any circumstances because I already understood him for what he was.

This approach overcame one of the limitations of traditional advertising, where ads are broadcast to a wide audience via television, newspapers, or billboards, with no way to ensure they reach the right people. A significant portion of the ad spend is wasted because many viewers or readers are not the intended target audience. Algorithms, on the other hand, allowed for precision targeting, which made advertising far more efficient in this context.

The use of such algorithms in 2016, which allowed Facebook to share user data, is something that should be and has been controlled to some extent in the United States by agencies like the Federal Trade Commission. Preventing social media platforms from selling user data is an important step, and it does not interfere with free speech or the First Amendment.

As for algorithms spreading disinformation, the blame lies not with the algorithms themselves but with the disinformation they are used to disseminate. This raises the question of what can and should be done about disinformation on platforms like Twitter—now known as X—and other social media outlets.

Let me introduce an important concept here. In the United States, the First Amendment has never been intended, nor can it be used, to protect criminal communication. For example, if a group uses social media to plan a bank robbery, kidnapping, or murder, that communication is not protected. The government has a vested interest in preventing crimes before they occur.

So, the question is, what are the algorithms spreading? If they are spreading deliberate lies—such as disinformation about COVID-19—that result in harm or death, I believe that constitutes a crime and must be stopped. However, if they are spreading statements like, “Oh, we love Donald Trump! He was such a great President,” even though I strongly disagree with that sentiment, it is still acceptable. That is simply a part of the democratic system.

Do you believe governments or international bodies should regulate hi-tech oligarchs to prevent potential misuse of their platforms for political manipulation? If so, what should such regulations prioritize?

Professor Paul Levinson: This is another central topic. The real question here is: which is worse—the enormous power held by corporations and oligarchs, or governments regulating them?

The reason I frame it this way is that Trump has repeatedly made it clear that, if he returns to office, he plans to target cable media, broadcast media, and social media platforms that, in his distorted view, are spreading lies about him. For Trump, anyone who criticizes him is accused of delivering fake news and lying. He’s essentially attempting to flip the narrative.

The critical difference between the power held by the government and that wielded by massive corporations or billionaires like Elon Musk is that the government controls the military. In my view, this is the most significant threat to democratic systems. Trump has also spoken about using the National Guard to break up protests and take other actions that represent substantial steps toward establishing a fascist state in the United States.

While I don’t like billionaires having so much power, what concerns me even more is the government having the ability to stop communication and prevent people from sharing their ideas—whether or not I agree with those ideas—in the public sphere for others to read and comment on.

Once the government starts regulating communication, it’s a very short step to punishing dissent, arresting people, and throwing them in jail—exactly what the Nazis did in the 1930s. That’s a road I’m deeply concerned about.

Counter Lies with Truth, Not Suppression

Illustration: Shutterstock.

Digital technologies have been tools for both democratic and populist movements. In your opinion, how can society harness these technologies to strengthen democratic values while mitigating their misuse by authoritarian populist leaders?

Professor Paul Levinson: This is a very long-standing issue. John Milton addressed it 400 years ago in his Areopagitica tract, where he argued for keeping the marketplace of ideas open. Milton believed that allowing both truth and falsity to exist in the same marketplace enables people to identify the truth and distinguish it from lies.

When you start regulating what can enter that marketplace, the government—or anyone trying to regulate it—could easily make a mistake or even deliberately suppress the truth while presenting it as false. This prevents people from making rational decisions. That, again, is what fascists do—they attempt to control the public sphere. By keeping the truth out of the public sphere, they can masquerade as truth-tellers while propagating lies.

Much more recently, here in the United States, one of the greatest Supreme Court justices in history, Louis Brandeis—so influential that a university in Massachusetts was named after him, Brandeis University—expressed a similar idea. Brandeis famously said that the best way to combat a lie is not to suppress it but to counter it with the truth. That’s how you destroy lies—by presenting the truth clearly and rationally.

Of course, some people are hopeless; no matter what you say, they won’t change their minds. But I’m an optimist and believe that most human beings are rational. Like John Milton and Louis Brandeis, I think the best way forward is to keep the marketplace of ideas as open as possible. This openness allows the truth to emerge and shine a light on the lies.

A Clear Line Must Be Drawn When Speech Leads to Criminal Activity or Endangers Lives

With Elon Musk’s vision of Twitter as a “public square” open to all opinions, how should social media platforms navigate the tension between upholding free speech and preventing the spread of harmful disinformation? How should actors like Musk balance their personal ideologies with their ethical responsibilities toward maintaining a fair and inclusive digital space?

Professor Paul Levinson: Well, again, the first question has to be addressed by considering whether the communication in question constitutes criminal activity. Are lives put in jeopardy because of such communication? If the answer is yes, then that communication should not be allowed on any platform.

The challenge, of course, lies in defining what constitutes criminal communication. Consider the example of Trump and the attack on the U.S. Capitol on January 6, 2021, which he incited after losing the 2020 election. Trump has since been indicted in multiple cases for criminal activity related to that attack. However, he maintains his innocence, and tragically, if he were to regain the presidency, he could potentially ensure that these cases are dismissed, a deeply unfortunate prospect.

That said, the Capitol attack was, in my view, unequivocally a criminal activity. The individuals involved were not patriots; they were part of a group that believed they could overturn the results of a democratically conducted election through violence, including threats to hang the Vice President for allowing the certification of electoral votes.

First, we must establish a consensus on what constitutes a crime. For example, during a pandemic that has already claimed millions of lives, deliberately spreading lies and deceiving the public about false cures is a clear case of criminal activity. In such instances, figures like Elon Musk have an ethical obligation to prevent this content from being shared on their platforms. If they fail to act, I believe the government has a duty to intervene to stop such harmful communication.

This brings us to the debate on the limits of free speech. Elon Musk presents himself as an absolutist regarding free speech, and we can certainly debate how far I or anyone else leans toward free speech absolutism. Personally, I draw a clear line when speech leads to criminal activity or endangers human lives. It is not difficult to identify such communications online, and when Musk fails to remove this kind of content, I believe he is culpable.

In such cases, the government—though certainly not under Trump, as he and Musk appear to be allies—has a responsibility to engage with Musk and press him to adopt more responsible policies.

Government Intervention in Communication Is Far More Dangerous

U.S. President-elect Donald Trump at a rally for then-VP nominee J.D. Vance in Atlanta, GA, on August 3, 2024. Photo: Phil Mistry.

You argue that it’s concerning that tech executives can exercise so much power over who can use their platforms. But the alternative – government intervention – could be much worse. You argued this before Elon Musk was appointed to a significant post in the second Trump administration. Do you still think the same?

Professor Paul Levinson: Yes, because, as I mentioned, the government wields military power. While corporations can be problematic, and it is undeniably concerning for the richest person in the world to hold so much power that they can essentially do whatever they want—even if they lose millions of dollars and still remain the wealthiest—it is far more dangerous for the government to be involved in communication.

Let me give you another example of this—a relatively minor one, but still important. Some people may remember the Falklands War in the 1980s. Argentina wanted the United Kingdom to relinquish control of the Falkland Islands, which are located off Argentina’s coast. Understandably, Argentina questioned why the UK was still holding on to these islands, which they had seized during the colonial era.

At that time, Margaret Thatcher was the Prime Minister of the UK. She wanted to project toughness and refused to give up the islands, leading to war. The BBC, the British Broadcasting Corporation, unlike media systems in the United States, is not independent of the government. It is part of the British government, and naturally, it reported on the war.

One day, the Argentine forces inflicted significant damage on the British Expeditionary Force in the Falklands. The British government, under Thatcher, didn’t want the British public to know about this, fearing it would provoke public outrage. So, they instructed the BBC not to broadcast or report the news.

This demonstrates the immense power of governments, even in democracies like the United Kingdom. The government effectively told the nation’s primary broadcasting organization, “Don’t report that.” This is precisely the kind of government overreach that concerns me here in America and across Western democracies, where fascist tendencies have been gaining ground.

When governments gain such control, they can jeopardize democratic systems, even those that have existed for hundreds of years. This is why I continue to believe that government intervention in communication is far more dangerous than the unchecked power of tech executives.

Violating the Spirit of the First Amendment Is Not as Severe as Violating the First Amendment Itself

You declare yourself a First Amendment radical, i.e., a staunch supporter of the First Amendment, which says Congress shall make no law abridging free speech. Yet, you have supported Twitter’s ban on Donald Trump. Don’t you think there is a contradiction between these two positions? Where should the ethical line be drawn for social media platforms when balancing freedom of expression with the risk of harm caused by certain types of speech?

Professor Paul Levinson: First of all, I’d like to draw a distinction between the First Amendment itself and what I call the spirit of the First Amendment.

The First Amendment says, “Congress shall make no law abridging freedom of speech or the press.” Through the 14th Amendment, which was enacted after the Civil War in the 1800s, this prohibition on federal government interference with communication was extended to state governments and, in general, to municipalities, including cities. Over the years, the Supreme Court has correctly ruled that no government can interfere with communications—again, unless it involves some kind of criminal activity. That’s the First Amendment.

Now, let’s take an example like the Grammy Awards. These awards, given for the best music in a given year, are broadcast on American television stations like CBS. During a rap artist’s performance, where cursing and vulgarity are often part of the genre, viewers might hear bleeps censoring certain words. What’s happening there? CBS is bleeping those words because they fear their sponsors might object, or that the Federal Communications Commission (FCC) might penalize them by refusing to renew their license.

For the record, I believe the FCC is unconstitutional because it violates the First Amendment—it’s a government agency that interferes with communication. Nevertheless, CBS’s actions, while cowardly in my opinion, do not violate the First Amendment. Instead, they violate the spirit of the First Amendment because CBS is not the government.

Similarly, when Elon Musk or, before him, the previous owners of Twitter banned Donald Trump from the platform, they were not acting as representatives of the government. In Trump’s case, his tweets were rightly perceived as contributing to the instigation of the attack on the Capitol in January 2021—a criminal activity. For this reason, I believe banning him from the platform was the correct decision. However, this action was taken by a private social media company, not the government. As such, while it may have violated the spirit of the First Amendment, it did not violate the First Amendment itself.

In general, my position is that the spirit of the First Amendment should be respected, as censorship is rarely beneficial. However, violating the spirit of the First Amendment is not as severe as violating the First Amendment itself.

To illustrate a clear violation of the First Amendment, consider when President Richard Nixon attempted to prevent The New York Times and The Washington Post from publishing the Pentagon Papers. Nixon argued that publishing the papers would undermine his war effort in Vietnam. Fortunately, the Supreme Court correctly ruled that such an action would violate the First Amendment and voted against Nixon, affirming that a US president cannot impose restrictions on what newspapers can publish. This case represents a classic and correct application of the First Amendment.

The Danger of Elon Musk Holding Power in Both Government and Social Media

Elon Musk, founder, CEO, and chief engineer of SpaceX; CEO of Tesla; CTO and chairman of X (formerly Twitter); and co-founder of Neuralink and OpenAI, at VIVA Technology (Vivatech) in Paris, France, on June 16, 2023. Photo: Frederic Legrand.

You suggest that market forces can effectively counterbalance the dominance of tech giants, as seen with Microsoft’s decline in influence. Do you believe similar market corrections are plausible for current tech behemoths like Twitter or Amazon, given their role as gatekeepers of global communication?

Professor Paul Levinson: Yes, I do. Let’s go back to what I was saying about Microsoft. This happened in the 1990s when Microsoft was at its peak, and Bill Gates was probably the richest man in the world. There was a lot of talk about breaking up Microsoft—claims that it had a monopoly, too large a market share, and that this dominance was unhealthy for the intellectual and economic well-being of the country.

Even back then, I said, “Take it easy.” The market will regulate itself; there’s no need to rush into breaking up the Microsoft corporate system. People were reacting to something that had only happened in the last year or two. I suggested we wait and see what would happen. Sure enough, by the late 1990s and into the 21st century, Microsoft’s influence had already started to decline, and new giants like Amazon were beginning to grow.

Once again, I am more concerned about the government regulating any communication system than I am about the damage caused by such systems. Consider Donald Trump returning to the White House—he’s already naming some of the bizarre people (and that’s putting it kindly) he plans to appoint to important positions in his cabinet and administration.

The last thing I want to see is a scenario where the government goes after MSNBC, an important progressive voice in cable television, or NBC as a whole, claiming they have too much power and must be broken up. That kind of government intervention poses a greater threat to democracy than allowing corporate systems to continue operating.

Now, I’m not saying I’m thrilled about the power Elon Musk holds. In fact, I need to emphasize this point: Trump has stated he wants to put Musk in charge of a new government agency tasked with making the government more efficient. While I’m all for making the government more efficient, I am deeply opposed to having the person who owns X (formerly Twitter) also hold a high-ranking government position. That kind of overlap means the government could end up controlling communication platforms.

As for Musk, I’m not overly concerned about most of the things he’s done so far. What does concern me is the idea of him simultaneously being a member of the new administration and maintaining his powerful position at X. If Musk were a true believer in free speech, he would either divest himself of X or refuse the government post. But I think we both know he’s likely to do neither.

Projection Is a Hallmark of Fascism

You argue that Donald Trump turned the concept of “fake news” into a tool to undermine legitimate media. What long-term impact do you think this has on public trust in journalism and the democratic process? 

Professor Paul Levinson: It’s already had a very negative effect, and it’s one of the worst things Donald Trump has done. I remember watching television back in January 2017, shortly after Trump had been elected president in the 2016 election. As president-elect, he was holding a news conference here in New York City. At the end of the conference, reporters raised their hands to ask questions.

A prominent CNN reporter, Jim Acosta, raised his hand, and Trump looked at him and said, “I’m not going to call on you. You’re with CNN, right? You’re fake news.” I remember thinking, “Wow, that’s a pretty clever thing Trump is trying to do.”

CNN was not spreading fake news in any way. It was truthfully reporting on things that made Trump look bad. For Trump, however, anything that embarrasses or criticizes him is automatically labeled as “fake news.” Whether the idea originated with Trump or one of his advisers, it’s a brilliant but dangerous way of undermining criticism.

This tactic reflects what Sigmund Freud called projection. When we look at the world and disagree with someone, we project our own intentions onto them, accusing them of doing what we plan to do. This, in turn, justifies actions against them. Projection is a hallmark of fascism. It’s something Hitler did. It’s something Stalin did. Stalin referred to the press as the “enemy of the people,” which is another favorite term of Trump. In Nazi Germany, during the 1930s, Joseph Goebbels popularized the term Lügenpresse, meaning “lying press”—essentially, fake news.

What amazes me is how many people have fallen for this tactic in 2024, and indeed, over the past decade, despite the lessons we should have learned from the 1930s. Unfortunately, it highlights just how ignorant many people are of history.

The Greatest Threat to American Democracy Since the Civil War

How do you think the American people and American institutions will react to a second Trump administration?

Professor Paul Levinson: I don’t know, and I have to tell you, I am deeply concerned. I think the United States of America is facing the worst threat to our democracy since the Civil War.

The election results obviously surprised and stunned a lot of people. I’ll just note, parenthetically, that once again, the polls were off. They predicted a razor-close race. While Trump didn’t win by a landslide, he did secure an impressive victory. Even here in New York State, where the Democrats won, they did so by a smaller margin than Joe Biden or even Hillary Clinton had achieved.

This election revealed a significant aspect of American life that many, including myself, didn’t fully recognize before the election. It’s a deeply troubling realization. As historians know, it’s not as though Germany had an autocratic system in place before Hitler’s rise to power. The Weimar Republic was actually a strong democracy with a robust constitution.

Fascism often doesn’t seize power through a coup d’état—though that can ultimately happen—but rather by undermining democratic systems and turning them against themselves. That’s what makes this such a deeply concerning time.

I’m an optimist, so I hope the worst won’t happen. But at this point, it remains to be seen.

Trump’s Appointments Are Not Just Concerning, They Border on Absurdity

Independent presidential candidate Robert F. Kennedy introduced his running mate, Nicole Shanahan, during a campaign event in Oakland, California, on Tuesday, March 26, 2024. Photo: Maxim Elramsisy.

And lastly, Professor Levinson, there are those who are deeply concerned about the future of American democracy under a second Trump administration. Some argue that American democratic institutions may not survive. Where do you stand in this debate?

Professor Paul Levinson: Well, as I just said, I’m very worried. During Trump’s first administration, many of the people he appointed seemed to operate under the mistaken belief that, while Trump might be a little unhinged, they could keep him in check. They thought they knew what was right and would steer him accordingly. Trump’s response to that? He fired anyone who disagreed with him.

He famously dismissed James Comey, the FBI director, and Rex Tillerson, his Secretary of State. Trump became infamous for firing people, both in his presidency and on The Apprentice. This time around, however, he’s being much more calculated in his appointments.

The only person he has appointed so far who, in my view, is not completely unfit for the role is Marco Rubio, a senator from Florida who is now Secretary of State. While I don’t agree with Rubio’s policies, at least he’s not irrational. Unfortunately, the same cannot be said for many of Trump’s other appointees.

For example, Matt Gaetz, recently nominated as Attorney General, was until recently a member of the House of Representatives. He resigned to take this post despite being the subject of an investigation involving allegations of sex trafficking, including minors. The idea of someone with such a history holding the top legal position in the country is deeply troubling.

Then there’s Dr. Mehmet Oz. Yes, he’s an MD, but he hasn’t practiced medicine in years and is better known as a television personality. He’s been appointed to lead the CDC or a similar health organization—it’s hard to keep track.

Or take Robert F. Kennedy Jr., who has been appointed Secretary of Health. While he’s Robert F. Kennedy’s son, his anti-vaccine stance goes against the very measures that saved millions of lives during the COVID pandemic. These appointments are not just concerning; they border on absurdity.

At this point, I’m holding out hope that the Senate, which is currently split 50-50 between Democrats and Republicans, might reject some of these nominees. However, it’s unclear whether that will happen. I don’t have a crystal ball, but if I did, I’d see nothing but clouds and stormy weather ahead. Unfortunately, I can’t see through the storm.