Dr. Paul Levinson, Professor of Communication & Media Studies at Fordham University.

Professor Levinson: Elon Musk Must Choose Between Government Role and Control of X

Highlighting the dangers of overlapping corporate and governmental powers, Professor Paul Levinson cautioned, “I am deeply opposed to having the person who owns X also hold a high-ranking government position. That kind of overlap means the government could end up controlling communication platforms.” He elaborated on Musk’s ethical responsibility, stating that if Musk were a “true believer in free speech,” he would either divest from X or refuse a government post. However, Levinson expressed skepticism: “I think we both know he’s likely to do neither.” Levinson also voiced his deep concern for American democracy under a potential second Trump administration, describing it as “the worst threat to our democracy since the Civil War.”

Interview by Selcuk Gultasli

In a riveting interview with the European Center for Populism Studies (ECPS), Dr. Paul Levinson, Professor of Communication & Media Studies at Fordham University, discussed pressing concerns about the intersection of technology, politics, and democracy. Professor Levinson’s insights are especially timely, given Elon Musk’s rising influence as the owner of X (formerly Twitter) and his potential role in a second Trump administration. Highlighting the dangers of overlapping corporate and governmental powers, Professor Levinson cautioned, “I am deeply opposed to having the person who owns X also hold a high-ranking government position. That kind of overlap means the government could end up controlling communication platforms.”

Professor Levinson elaborated on Musk’s ethical responsibility, stating that if Musk were a “true believer in free speech,” he would either divest from X or refuse a government post. However, Professor Levinson expressed skepticism: “I think we both know he’s likely to do neither.”

Throughout the interview, Professor Levinson addressed the broader implications of concentrated power in technology. Despite concerns about billionaires like Musk or the owners of Facebook, Levinson pointed out that their influence has not yet stifled democratic impulses. “Social media provides a unique platform for individuals to disseminate the truth widely, even as it enables lies and fascism,” he noted, striking a balance in his evaluation.

On the issue of disinformation and algorithms, Professor Levinson argued that the negative impact of these technologies is often overstated. He acknowledged their role in targeted advertising, referencing Facebook’s data-sharing with Cambridge Analytica during the 2016 US election. However, he emphasized, “The blame lies not with the algorithms themselves but with the disinformation they are used to spread.”

Professor Levinson’s critique of governmental overreach was particularly sharp. Drawing historical parallels, he warned, “When governments gain such control, they can jeopardize democratic systems, even those that have existed for hundreds of years.” He cited the Thatcher administration’s suppression of unfavorable news during the Falklands War as a case study in the dangers of government-controlled communication.

Reflecting on Trump’s weaponization of “fake news,” Professor Levinson described it as a hallmark of fascism, akin to tactics used by Stalin and Hitler. He lamented, “It amazes me how many people have fallen for this tactic, despite the lessons we should have learned from history.”

Professor Levinson shared his deep concern for American democracy under a potential second Trump administration, describing it as “the worst threat to our democracy since the Civil War.” From absurd appointments to calculated assaults on institutions, Professor Levinson’s insights underline the precarious state of democratic governance in the digital age.

Here is the lightly edited transcript of the interview with Professor Paul Levinson.

Democratic Impulses Persist Despite Billionaires’ Control Over Social Media

Illustration by Ulker Design.

Professor Levinson, thank you so very much for joining our interview series. Let me start right away with the first question. How do you perceive the influence of hi-tech oligarchs, such as Elon Musk, on the digital public sphere? Does the concentration of digital platforms in the hands of a few individuals pose a unique threat to democratic discourse? 

Professor Paul Levinson: Let me answer the second part of your question first. Everything new in communications can potentially threaten a democratic society. However, so far in our history—both the history of the United States and the history of democracies in general—new forms of communication have largely benefited democracy. In fact, they have often undermined dictatorships, autocracies, and oligarchies.

A notable example I often cite is the White Rose group in Germany during World War II. This courageous group of college students used a primitive Xerox machine to disseminate the truth about Nazi atrocities to the German public. Their efforts have always left a profound impression on me. Another example is from the final decade of the Soviet Union in the 1980s. There was something called Samizdat Video, a primitive video technology by today’s standards, but it was instrumental in undermining the autocracy of the Soviet regime, even under Gorbachev, who was probably the most enlightened Soviet leader.

With this historical perspective in mind, while I am always concerned about new technologies, I don’t believe social media presents an insurmountable threat to democracy. In fact, it cuts both ways. Social media enables lies, fascism, and the suppression of truth, which are central to fascistic systems. At the same time, social media provides a unique platform for individuals to disseminate the truth widely.

Now, regarding Elon Musk and other billionaires like those controlling Facebook, despite their unprecedented control over social media platforms, this has not yet prevented democratic impulses from finding expression through these platforms. 

The Negative Impact of Algorithms and AI Is Often Overrated

How do you address concerns about the unchecked power of tech companies to shape public discourse, especially when their decisions significantly influence political narratives? In what ways do algorithms on social media platforms amplify populist narratives, and how much responsibility should platform owners like Musk take for the political polarization these technologies can create?

Professor Paul Levinson: First, we’ve heard a lot about algorithms, and more recently, about AI. I think the negative impact of these technologies is often overrated. One area where algorithms have proven particularly effective is targeted advertising. This was evident during the 2016 election in the United States when Facebook provided Cambridge Analytica with detailed data about users—what they were sharing, liking, and discussing on the platform. This data allowed the Trump campaign—who, in this regard, were ahead of the Democrats in recognizing its potential—to tailor their ads to specific audiences. For instance, the ads weren’t wasted on someone like me, who wouldn’t have voted for Trump under any circumstances because I already understood him for what he was.

This approach overcame one of the limitations of traditional advertising, where ads are broadcast to a wide audience via television, newspapers, or billboards, with no way to ensure they reach the right people. A significant portion of the ad spend is wasted because many viewers or readers are not the intended target audience. Algorithms, on the other hand, allowed for precision targeting, which made advertising far more efficient in this context.

The use of such algorithms in 2016, which allowed Facebook to share user data, is something that should be and has been controlled to some extent in the United States by agencies like the Federal Trade Commission. Preventing social media platforms from selling user data is an important step, and it does not interfere with free speech or the First Amendment.

As for algorithms spreading disinformation, the blame lies not with the algorithms themselves but with the disinformation they are used to disseminate. This raises the question of what can and should be done about disinformation on platforms like Twitter—now known as X—and other social media outlets.

Let me introduce an important concept here. In the United States, the First Amendment has never been intended, nor can it be used, to protect criminal communication. For example, if a group uses social media to plan a bank robbery, kidnapping, or murder, that communication is not protected. The government has a vested interest in preventing crimes before they occur.

So, the question is, what are the algorithms spreading? If they are spreading deliberate lies—such as disinformation about COVID-19—that result in harm or death, I believe that constitutes a crime and must be stopped. However, if they are spreading statements like, “Oh, we love Donald Trump! He was such a great President,” even though I strongly disagree with that sentiment, it is still acceptable. That is simply a part of the democratic system.

Do you believe governments or international bodies should regulate hi-tech oligarchs to prevent potential misuse of their platforms for political manipulation? If so, what should such regulations prioritize?

Professor Paul Levinson: This is another central topic. The real question here is: which is worse—the enormous power held by corporations and oligarchs, or governments regulating them?

The reason I frame it this way is that Trump has repeatedly made it clear that, if he returns to office, he plans to target cable media, broadcast media, and social media platforms that, in his distorted view, are spreading lies about him. For Trump, anyone who criticizes him is accused of delivering fake news and lying. He’s essentially attempting to flip the narrative.

The critical difference between the power held by the government and that wielded by massive corporations or billionaires like Elon Musk is that the government controls the military. In my view, this is the most significant threat to democratic systems. Trump has also spoken about using the National Guard to break up protests and take other actions that represent substantial steps toward establishing a fascist state in the United States.

While I don’t like billionaires having so much power, what concerns me even more is the government having the ability to stop communication and prevent people from sharing their ideas—whether or not I agree with those ideas—in the public sphere for others to read and comment on.

Once the government starts regulating communication, it’s a very short step to punishing dissent, arresting people, and throwing them in jail—exactly what the Nazis did in the 1930s. That’s a road I’m deeply concerned about.

Counter Lies with Truth, Not Suppression

Illustration: Shutterstock.

Digital technologies have been tools for both democratic and populist movements. In your opinion, how can society harness these technologies to strengthen democratic values while mitigating their misuse by authoritarian populist leaders?

Professor Paul Levinson: This is a very long-standing issue. John Milton addressed it 400 years ago in his Areopagitica tract, where he argued for keeping the marketplace of ideas open. Milton believed that allowing both truth and falsity to exist in the same marketplace enables people to identify the truth and distinguish it from lies.

When you start regulating what can enter that marketplace, the government—or anyone trying to regulate it—could easily make a mistake or even deliberately suppress the truth while presenting it as false. This prevents people from making rational decisions. That, again, is what fascists do—they attempt to control the public sphere. By keeping the truth out of the public sphere, they can masquerade as truth-tellers while propagating lies.

Much more recently, here in the United States, one of the greatest Supreme Court justices in history, Louis Brandeis—so influential that a university in Massachusetts was named after him, Brandeis University—expressed a similar idea. Brandeis famously said that the best way to combat a lie is not to suppress it but to counter it with the truth. That’s how you destroy lies—by presenting the truth clearly and rationally.

Of course, some people are hopeless; no matter what you say, they won’t change their minds. But I’m an optimist and believe that most human beings are rational. Like John Milton and Louis Brandeis, I think the best way forward is to keep the marketplace of ideas as open as possible. This openness allows the truth to emerge and shine a light on the lies.

A Clear Line Must Be Drawn When Speech Leads to Criminal Activity or Endangers Lives

With Elon Musk’s vision of Twitter as a “public square” open to all opinions, how should social media platforms navigate the tension between upholding free speech and preventing the spread of harmful disinformation? How should actors like Musk balance their personal ideologies with their ethical responsibilities toward maintaining a fair and inclusive digital space?

Professor Paul Levinson: Well, again, the first question has to be addressed by considering whether the communication in question constitutes criminal activity. Are lives put in jeopardy because of such communication? If the answer is yes, then that communication should not be allowed on any platform.

The challenge, of course, lies in defining what constitutes criminal communication. Consider the example of Trump and the attack on the U.S. Capitol on January 6, 2021, which he incited after losing the 2020 election. Trump has since been indicted in multiple cases for criminal activity related to that attack. However, he maintains his innocence, and tragically, if he were to regain the presidency, he could potentially ensure that these cases are dismissed, a deeply unfortunate prospect.

That said, the Capitol attack was, in my view, unequivocally a criminal activity. The individuals involved were not patriots; they were part of a group that believed they could overturn the results of a democratically conducted election through violence, including threats to hang the Vice President for allowing the certification of electoral votes.

First, we must establish a consensus on what constitutes a crime. For example, during a pandemic that has already claimed millions of lives, deliberately spreading lies and deceiving the public about false cures is a clear case of criminal activity. In such instances, figures like Elon Musk have an ethical obligation to prevent this content from being shared on their platforms. If they fail to act, I believe the government has a duty to intervene to stop such harmful communication.

This brings us to the debate on the limits of free speech. Elon Musk presents himself as an absolutist regarding free speech, and we can certainly debate how far I or anyone else leans toward free speech absolutism. Personally, I draw a clear line when speech leads to criminal activity or endangers human lives. It is not difficult to identify such communications online, and when Musk fails to remove this kind of content, I believe he is culpable.

In such cases, the government—though certainly not under Trump, as he and Musk appear to be allies—has a responsibility to engage with Musk and press him to adopt more responsible policies.

Government Intervention in Communication Is Far More Dangerous

U.S. President-elect Donald Trump at a rally for then-VP nominee J.D. Vance in Atlanta, GA, on August 3, 2024. Photo: Phil Mistry.

You argue that it’s concerning that tech executives can exercise so much power over who can use their platforms. But the alternative – government intervention – could be much worse. You argued this before Elon Musk was appointed to a significant post in the second Trump administration. Do you still think the same?

Professor Paul Levinson: Yes, because, as I mentioned, the government wields military power. While corporations can be problematic, and it is undeniably concerning for the richest person in the world to hold so much power that they can essentially do whatever they want—even if they lose millions of dollars and still remain the wealthiest—it is far more dangerous for the government to be involved in communication.

Let me give you another example of this—a relatively minor one, but still important. Some people may remember the Falklands War in the 1980s. Argentina wanted the United Kingdom to relinquish control of the Falkland Islands, which are located off Argentina’s coast. Understandably, Argentina questioned why the UK was still holding on to these islands, which they had seized during the colonial era.

At that time, Margaret Thatcher was the Prime Minister of the UK. She wanted to project toughness and refused to give up the islands, leading to war. The BBC, the British Broadcasting Corporation, unlike media systems in the United States, is not independent of the government. It is part of the British government, and naturally, it reported on the war.

One day, the Argentine forces inflicted significant damage on the British Expeditionary Force in the Falklands. The British government, under Thatcher, didn’t want the British public to know about this, fearing it would provoke public outrage. So, they instructed the BBC not to broadcast or report the news.

This demonstrates the immense power of governments, even in democracies like the United Kingdom. The government effectively told the nation’s primary broadcasting organization, “Don’t report that.” This is precisely the kind of government overreach that concerns me here in America and across Western democracies, where fascist tendencies have been gaining ground.

When governments gain such control, they can jeopardize democratic systems, even those that have existed for hundreds of years. This is why I continue to believe that government intervention in communication is far more dangerous than the unchecked power of tech executives.

Violating the Spirit of the First Amendment Is Not as Severe as Violating the First Amendment Itself

You declare yourself a First Amendment radical, i.e., a staunch supporter of the First Amendment, which says Congress shall make no law abridging free speech. Yet, you have supported Twitter’s ban on Donald Trump. Don’t you think there is a contradiction between these two positions? Where should the ethical line be drawn for social media platforms when balancing freedom of expression with the risk of harm caused by certain types of speech?

Professor Paul Levinson: First of all, I’d like to draw a distinction between the First Amendment itself and what I call the spirit of the First Amendment.

The First Amendment says, “Congress shall make no law abridging freedom of speech or the press.” Through the 14th Amendment, which was enacted after the Civil War in the 1800s, this prohibition on federal government interference with communication was extended to state governments and, in general, to municipalities, including cities. Over the years, the Supreme Court has correctly ruled that no government can interfere with communications—again, unless it involves some kind of criminal activity. That’s the First Amendment.

Now, let’s take an example like the Grammy Awards. These awards, given for the best music in a given year, are broadcast on American television stations like CBS. During a rap artist’s performance, where cursing and vulgarity are often part of the genre, viewers might hear bleeps censoring certain words. What’s happening there? CBS is bleeping those words because they fear their sponsors might object, or that the Federal Communications Commission (FCC) might penalize them by refusing to renew their license.

For the record, I believe the FCC is unconstitutional because it violates the First Amendment—it’s a government agency that interferes with communication. Nevertheless, CBS’s actions, while cowardly in my opinion, do not violate the First Amendment. Instead, they violate the spirit of the First Amendment because CBS is not the government.

Similarly, when Elon Musk or, before him, the previous owners of Twitter banned Donald Trump from the platform, they were not acting as representatives of the government. In Trump’s case, his tweets were rightly perceived as contributing to the instigation of the attack on the Capitol in January 2021—a criminal activity. For this reason, I believe banning him from the platform was the correct decision. However, this action was taken by a private social media company, not the government. As such, while it may have violated the spirit of the First Amendment, it did not violate the First Amendment itself.

In general, my position is that the spirit of the First Amendment should be respected, as censorship is rarely beneficial. However, violating the spirit of the First Amendment is not as severe as violating the First Amendment itself.

To illustrate a clear violation of the First Amendment, consider when President Richard Nixon attempted to prevent The New York Times and The Washington Post from publishing the Pentagon Papers. Nixon argued that publishing the papers would undermine his war effort in Vietnam. Fortunately, the Supreme Court correctly ruled that such an action would violate the First Amendment and voted against Nixon, affirming that a US president cannot impose restrictions on what newspapers can publish. This case represents a classic and correct application of the First Amendment.

The Danger of Elon Musk Holding Power in Both Government and Social Media

Elon Musk, founder, CEO, and chief engineer of SpaceX; CEO of Tesla; CTO and chairman of X (formerly Twitter); and co-founder of Neuralink and OpenAI, at VIVA Technology (Vivatech) in Paris, France, on June 16, 2023. Photo: Frederic Legrand.

You suggest that market forces can effectively counterbalance the dominance of tech giants, as seen with Microsoft’s decline in influence. Do you believe similar market corrections are plausible for current tech behemoths like Twitter or Amazon, given their role as gatekeepers of global communication?

Professor Paul Levinson: Yes, I do. Let’s go back to what I was saying about Microsoft. This happened in the 1990s when Microsoft was at its peak, and Bill Gates was probably the richest man in the world. There was a lot of talk about breaking up Microsoft—claims that it had a monopoly, too large a market share, and that this dominance was unhealthy for the intellectual and economic well-being of the country.

Even back then, I said, “Take it easy.” The market will regulate itself; there’s no need to rush into breaking up the Microsoft corporate system. People were reacting to something that had only happened in the last year or two. I suggested we wait and see what would happen. Sure enough, by the late 1990s and into the 21st century, Microsoft’s influence had already started to decline, and new giants like Amazon were beginning to grow.

Once again, I am more concerned about the government regulating any communication system than I am about the damage caused by such systems. Consider Donald Trump returning to the White House—he’s already naming some of the bizarre people (and that’s putting it kindly) he plans to appoint to important positions in his cabinet and administration.

The last thing I want to see is a scenario where the government goes after MSNBC, an important progressive voice in cable television, or NBC as a whole, claiming they have too much power and must be broken up. That kind of government intervention poses a greater threat to democracy than allowing corporate systems to continue operating.

Now, I’m not saying I’m thrilled about the power Elon Musk holds. In fact, I need to emphasize this point: Trump has stated he wants to put Musk in charge of a new government agency tasked with making the government more efficient. While I’m all for making the government more efficient, I am deeply opposed to having the person who owns X (formerly Twitter) also hold a high-ranking government position. That kind of overlap means the government could end up controlling communication platforms.

As for Musk, I’m not overly concerned about most of the things he’s done so far. What does concern me is the idea of him simultaneously being a member of the new administration and maintaining his powerful position at X. If Musk were a true believer in free speech, he would either divest himself of X or refuse the government post. But I think we both know he’s likely to do neither.

Projection Is a Hallmark of Fascism

You argue that Donald Trump turned the concept of “fake news” into a tool to undermine legitimate media. What long-term impact do you think this has on public trust in journalism and the democratic process? 

Professor Paul Levinson: It’s already had a very negative effect, and it’s one of the worst things Donald Trump has done. I remember watching television back in January 2017, shortly after Trump had been elected president in the 2016 election. As president-elect, he was holding a news conference here in New York City. At the end of the conference, reporters raised their hands to ask questions.

A prominent CNN reporter, Jim Acosta, raised his hand, and Trump looked at him and said, “I’m not going to call on you. You’re with CNN, right? You’re fake news.” I remember thinking, “Wow, that’s a pretty clever thing Trump is trying to do.”

CNN was not spreading fake news in any way. It was truthfully reporting on things that made Trump look bad. For Trump, however, anything that embarrasses or criticizes him is automatically labeled as “fake news.” Whether the idea originated with Trump or one of his advisers, it’s a brilliant but dangerous way of undermining criticism.

This tactic reflects what Sigmund Freud called projection. When we look at the world and disagree with someone, we project our own intentions onto them, accusing them of doing what we plan to do. This, in turn, justifies actions against them. Projection is a hallmark of fascism. It’s something Hitler did. It’s something Stalin did. Stalin referred to the press as the “enemy of the people,” which is another favorite term of Trump. In Nazi Germany, during the 1930s, Joseph Goebbels popularized the term Lügenpresse, meaning “lying press”—essentially, fake news.

What amazes me is how many people have fallen for this tactic in 2024, and indeed, over the past decade, despite the lessons we should have learned from the 1930s. Unfortunately, it highlights just how ignorant many people are of history.

The Greatest Threat to American Democracy Since the Civil War

How do you think the American people and American institutions will react to a second Trump administration?

Professor Paul Levinson: I don’t know, and I have to tell you, I am deeply concerned. I think the United States of America is facing the worst threat to our democracy since the Civil War.

The election results obviously surprised and stunned a lot of people. I’ll just note, parenthetically, that once again, the polls were off. They predicted a razor-close race. While Trump didn’t win by a landslide, he did secure an impressive victory. Even here in New York State, where the Democrats won, they did so by a smaller margin than Joe Biden or even Hillary Clinton had achieved.

This election revealed a significant aspect of American life that many, including myself, didn’t fully recognize before the election. It’s a deeply troubling realization. As historians know, it’s not as though Germany had an autocratic system in place before Hitler’s rise to power. The Weimar Republic was actually a strong democracy with a robust constitution.

Fascism often doesn’t seize power through a coup d’état—though that can ultimately happen—but rather by undermining democratic systems and turning them against themselves. That’s what makes this such a deeply concerning time.

I’m an optimist, so I hope that the worst won’t happen. But at this point, it just remains to be seen.

Trump’s Appointments Are Not Just Concerning, They Border on Absurdity

Independent presidential candidate Robert F. Kennedy introduced his running mate, Nicole Shanahan, during a campaign event in Oakland, California, on Tuesday, March 26, 2024. Photo: Maxim Elramsisy.

And lastly, Professor Levinson, there are those who are deeply concerned about the future of American democracy under a second Trump administration. Some argue that American democratic institutions may not survive. Where do you stand in this debate?

Professor Paul Levinson: Well, as I just said, I’m very worried. During Trump’s first administration, many of the people he appointed seemed to operate under the mistaken belief that, while Trump might be a little unhinged, they could keep him in check. They thought they knew what was right and would steer him accordingly. Trump’s response to that? He fired anyone who disagreed with him.

He famously dismissed James Comey, the FBI director, and Rex Tillerson, his Secretary of State. Trump became infamous for firing people, both in his presidency and on The Apprentice. This time around, however, he’s being much more calculated in his appointments.

The only person he has appointed so far who, in my view, is not completely unfit for the role is Marco Rubio, a senator from Florida who is now Secretary of State. While I don’t agree with Rubio’s policies, at least he’s not irrational. Unfortunately, the same cannot be said for many of Trump’s other appointees.

For example, Matt Gaetz, recently appointed Attorney General, was until recently a member of the House of Representatives. He resigned to take this post despite being the subject of an investigation involving allegations of sex trafficking, including minors. The idea of someone with such a history holding the top legal position in the country is deeply troubling.

Then there’s Dr. Mehmet Oz. Yes, he’s an MD, but he hasn’t practiced medicine in years and is better known as a television personality. He’s been appointed to lead the CDC or a similar health organization—it’s hard to keep track.

Or take Robert F. Kennedy Jr., who has been appointed Secretary of Health. While he’s Robert F. Kennedy’s son, his anti-vaccine stance goes against the very measures that saved millions of lives during the COVID pandemic. These appointments are not just concerning; they border on absurdity.

At this point, I’m holding out hope that the Senate, which is currently split 50-50 between Democrats and Republicans, might reject some of these nominees. However, it’s unclear whether that will happen. I don’t have a crystal ball, but if I did, I’d see nothing but clouds and stormy weather ahead. Unfortunately, I can’t see through the storm.


Authoritarian Information Manipulation and Dissemination — National, Transnational, and International Perspectives


The emergence of repressive and authoritarian “hybrid regimes” poses one of the most significant threats to democracy today. These regimes and authoritarian actors wield information suppression and manipulation as essential tools to disseminate narratives that erode democratic institutions. This issue transcends national borders; digital technologies now enable authoritarian states to infiltrate robust democracies, allowing them to project their authoritarian narratives globally. The transnationalization of authoritarian politics, facilitated by digital technologies, presents substantial challenges to the integrity of democratic processes and institutions.

In response to these challenges, a workshop was organized on November 7-8, 2024, as a collaborative effort between the Alfred Deakin Institute for Citizenship and Globalisation (ADI) at Deakin University, Australia, and the European Center for Populism Studies (ECPS) in Brussels, Belgium. The workshop aimed to investigate how various actors—governments, non-state organizations, state-sponsored entities, and political parties—suppress and manipulate information to erode trust in democratic processes, both domestically and internationally. The workshop also examined the darker dimensions of social media, focusing on the interactions between misinformation, negativity, and polarization.

Moreover, the workshop addressed strategies to counter misinformation and disinformation, along with intervention techniques to mitigate their impacts. It also focused on countering disinformation through activism and explored everyday online experiences with misinformation, emphasizing the importance of evidence-based media literacy education initiatives. Additionally, the event discussed necessary curricular reforms to combat disinformation, toxicity, and polarization in educational contexts, as well as the responses of political elites to conspiracy theories.

The workshop, funded by the Australian Political Studies Association (APSA), the Australian Research Council (ARC), and the Gerda Henkel Foundation, aimed to deepen the understanding of these critical issues and to explore collaborative strategies for combating misinformation and disinformation in our increasingly complex digital environment.

Round Table 1 – Foreign Interference Campaigns on Social Media: Insights from Field Theory and Computational Social Science

Keynote by Dr. Robert Ackland (Professor, The Australian National University)

 

Round Table 2 – Manipulating Truth: Authoritarian Strategies of ‘Attention Bombing’ and ‘Epistemic Modulation’ in Hybrid Media Systems

Keynote by Dr. Timothy Graham (Associate Professor, Queensland University of Technology)

 

Round Table 3 – The Dark Side of Social Media: Misinformation, Negativity, and Polarization

Keynote by Dr. Jason Weismueller (Assistant Professor, University of Western Australia)

 

Round Table 4 – The Influence of Familiarity and Identity Relevance on Truth Judgements

Keynote by Dr. Li Qian Tay (Postdoctoral Fellow, The Australian National University)

 

Round Table 5 – Countering State-Sanctioned Information Operations: The #FreeYouth Movement in Thailand

Keynote by Dr. Aim Sinpeng (Associate Professor, The University of Sydney)

 

Round Table 6 – Investigating Everyday Online Experiences with Misinformation and Responding with Evidence-Informed Media Literacy Education Initiatives

Keynote by Dr. Tanya Notley (Associate Professor, Western Sydney University)

 

Round Table 7 – Reforming the Curriculum to Counter Disinformation, Toxicity, and Polarization

Keynote by Dr. Mathieu O’Neil (Professor, The University of Canberra; Honorary Associate Professor, The Australian National University)

 

Round Table 8 – Ignore, Rebut or Embrace: Political Elite Responses to Conspiracy Theories

Keynote by Dr. Zim Nwokora (Associate Professor, Deakin University)

And

Disinformation in the City Response Playbook

Keynote by Dr. Jessica (Ika) Trijsburg (Research Fellow in City Diplomacy at the University of Melbourne)

 

A moment from the International Conference on ‘Digital Complexity and Disinformation in the Indo-Pacific,’ held in a hybrid format from Melbourne on September 25-26, 2024.

International Conference on ‘Digital Complexity and Disinformation in the Indo-Pacific’

DOWNLOAD CONFERENCE BOOKLET

 

Explore the insightful discussions from the International Conference on ‘Digital Complexity and Disinformation in the Indo-Pacific,’ held on September 25-26, 2024, in a hybrid format from Melbourne. This conference brought together a diverse coalition of experts, hosted by leading institutions across the Indo-Pacific and Europe, including the Alfred Deakin Institute at Deakin University, Universitas Indonesia, National Research and Innovation Agency (BRIN), Universitas Gadjah Mada, Universitas Muhammadiyah Malang, International Islamic University Malaysia, UIN Salatiga, and the European Center for Populism Studies (ECPS).

The conference delved into how digital technologies, though transformative, have become tools for disinformation, political manipulation, and digital authoritarianism, posing serious challenges to democracy and social unity. This issue is particularly urgent in the Indo-Pacific, where misinformation on platforms like Facebook, Twitter, Instagram, Telegram, and WhatsApp has fueled divisions and where political forces sometimes restrict access to vital digital spaces to consolidate control.

Attendees, including scholars, practitioners, and policymakers, shared perspectives on how digital disinformation affects the region and discussed strategies for promoting digital literacy, inclusivity, and democratic resilience.

Generously supported by the Australian Research Council, Gerda Henkel Foundation, ECPS, and the Alfred Deakin Institute, this conference aimed to foster collaboration and shed light on countering disinformation in today’s digital age.

Don’t miss the opportunity to engage with these compelling sessions; full recordings of the conference are available to watch online.

 

Data protection concept featuring binary code overlaid with the European Union flag. Photo: KB-Photodesign.

Future Resilience of the European Technology Security Policy Paper

DOWNLOAD POLICY PAPER

Please cite as:

Miguel De Vera, Anton; Hamaiunova, Viktoriia; Koleszár, Réka & Pasquettaz, Giada. (2024) “Future Resilience of the European Technology Security.” Policy Papers. European Center for Populism Studies (ECPS). November 4, 2024. https://doi.org/10.55271/pop0004

 

Abstract

This paper explores vulnerabilities in the European Union’s technological security, focusing on Huawei as a case study to illuminate broader security challenges. Amid intensifying US-China tensions, especially under former US President Donald Trump, the EU encountered new risks linked to the strategic positioning of Chinese tech firms within critical European infrastructure. Trump’s “America First” policy targeted China with tariffs and trade restrictions to address perceived unfair practices, triggering disruptions in global supply chains that reverberated through the EU economy. For Europe, heavily reliant on secure, stable trade flows, these events highlighted the urgency of reassessing technological dependencies and reinforcing digital security. The paper presents a series of strategic recommendations for the EU to mitigate such vulnerabilities, emphasizing the need for diversified supply chains, rigorous security standards for tech partnerships, and collaborative policies among EU members to strengthen resilience in the face of geopolitical shifts and technological competition.

Keywords: Populism, EU, Framing, US, China, Technology

 

Authored by Anton Miguel De Vera, Viktoriia Hamaiunova, Réka Koleszár & Giada Pasquettaz

Introduction

In an increasingly uncertain geopolitical climate, the European Union (EU) faces the challenge of maintaining its technological resilience while protecting its security and autonomy. The fast-paced international competition for technological leadership is closely tied to the bloc’s economic competitiveness and has consequences for its security. Given the importance of transatlantic cooperation in this domain, the upcoming US elections and the possibility of a second Trump administration should urge policymakers to focus on strengthening the EU’s preparedness. This paper addresses existing vulnerabilities in the EU’s technological security through the exemplary case of Huawei and outlines recommendations on how to tackle them.

Connectivity, one of the critical technologies of the rapid Fourth Industrial Revolution, has been at the center of heated discussions in recent years. Several nations identified connectivity as an essential part of their competitiveness and development, and Huawei, among others, emerged at the forefront of advanced technologies. The Chinese-owned ICT provider was among the world leaders in rolling out next-generation telecommunication networks worldwide. Within the EU, the choice of 5G providers has generated crucial debates. Next to the obvious economic interests, building telecommunication networks came with important security considerations. As the US-China rivalry intensified under President Trump, the EU faced an important vulnerability.

Donald Trump’s trade war with China, a key component of his “America First” agenda, had significant repercussions for the EU. By imposing tariffs on Chinese goods, Trump sought to counter what he perceived as unfair trade practices by China. This conflict disrupted global trade and impacted the EU’s economy, which is heavily dependent on stable supply chains.

For the EU, the escalating US-China trade tensions presented both challenges and opportunities. While the trade war resulted in market volatility, it also provided Europe with a chance to strengthen its trade relationships with China. The two reached an agreement in principle on the Comprehensive Agreement on Investment (CAI) in 2020 – although it was later put on hold due to tit-for-tat sanctions. The prospect of deepening ties with China posed a risk of straining transatlantic relations, particularly as Trump urged European nations to collaborate with the US in pressuring Beijing. Trump’s populist trade policies thus compelled the EU to carefully balance its relationships with both the US and China while prioritizing its own economic and security interests. It is in this context that the debate around Huawei and the EU’s technological security is situated.

The EU’s 5G Rollout: Rhetorical Coercion and Uneven Progress

The European Commission identified the possibilities of 5G early on and adopted an action plan in 2016 to launch 5G services in all member states by the end of 2020 (European Commission, 2024). Although some experts warned that the EU was falling behind in technological transformation, member states quickly began catching up and published their roadmaps. However, progress was uneven and fragmented (5G Observatory Quarterly Report 2, 2019). At that time, Huawei was in a prime position in the European market to support the 5G rollout and was already working with several European providers. By 2019, the Chinese company had signed memoranda of understanding with wireless providers in at least nine EU countries, including Germany, Spain, and France (5G Observatory, 2021). For many, it seemed evident that for the EU to stay competitive and meet its plans for 5G coverage, Huawei was the answer.

In parallel, however, concerns about the security of Huawei equipment began circulating. Against the backdrop of the escalating trade war between the US and China, the former began prompting allies to exclude Huawei from their networks (Woo & O’Keeffe, 2018). President Trump labelled Huawei a security risk and threatened to cut off intelligence and information-sharing with allies using the ‘untrustworthy’ 5G vendor (Business Standard, 2020).

US Policy towards China under Donald Trump: Framing as a Strategic Tool

Donald Trump’s political rise is often analyzed through the lens of populism and framing theory, both of which help explain his appeal and communication strategies. Populism, broadly defined, refers to a political approach that pits the “common people” against a perceived corrupt elite (Mudde, 2004). Trump’s rhetoric embodies this populist style, as he frequently claims to speak for ordinary Americans against the political establishment. His 2016 campaign, for instance, centered on “draining the swamp” in Washington, positioning himself as an outsider who would challenge entrenched elites. During the 2024 campaign, he has continued this populist communication by portraying himself as “one of the people,” as when he publicized working a single shift at a McDonald’s.

One of the key aspects of Trump’s populism is his use of framing. He uses it not only on a national level to criticize his opponents but also in relation to foreign policy issues. Framing theory, as defined by Entman (1993), involves highlighting certain aspects of a reality while downplaying others, effectively shaping how an issue is understood by the public. Trump’s framing of China is a prime example. Throughout his presidency and during his campaigns, Trump consistently framed China as a threat to American economic interests and national security. By doing so, he shaped public discourse and channeled public frustrations about job losses and trade imbalances into hostility toward China.

A prominent example of Trump’s framing of China came during his trade war with the country. He portrayed China as an “unfair” player in global trade, accusing it of “stealing” American jobs and intellectual property. In a 2019 speech, Trump stated, “China has taken advantage of the United States for many, many years. And those days are over.” This framing was effective in galvanizing his political base, particularly among working-class voters who felt economically marginalized by globalization (Inglehart & Norris, 2016). By framing the issue as a battle between patriotic Americans and a foreign adversary, Trump reinforced his populist credentials.

Trump’s framing of China intensified during the COVID-19 pandemic, when he repeatedly blamed China for the spread of the virus, referring to it as the “China virus” and the “Kung flu” (The New York Times, 2020). By doing so, he shifted public discourse to portray China as responsible not only for the economic challenges faced by the US but also for the public health crisis, a narrative that resonated with many of his supporters.

A notable example of this framing came in March 2020, when Trump tweeted, “The United States will be powerfully supporting those industries, like Airlines and others, that are particularly affected by the Chinese Virus.” This statement, reported widely in the media, sparked accusations of racism and xenophobia (CNN, 2020). However, Trump defended his rhetoric, arguing that it was necessary to hold China accountable for the pandemic’s global spread. His framing successfully linked frustrations over COVID-19 to broader concerns about China’s role in the world economy, feeding into his populist narrative of protecting American interests.

Framing theory is particularly relevant here because it highlights how political actors shape public perception by focusing on certain narratives. As Entman (2007) notes, framing involves selecting some aspects of a perceived reality and making them more salient in communication. Trump’s framing of China as both an economic competitor and a national security threat played a significant role in justifying his tariffs and aggressive foreign policy stance. Moreover, Trump’s use of this frame was amplified by the media, contributing to rising anti-China sentiments in the US (Goffman, 1974).

By framing China as a direct threat to American prosperity, Trump not only advanced his populist message but also reshaped political discourse, making foreign policy a central issue for many voters. Through this, he laid the basis of US trade policy against foreign companies deemed a threat and toward allies who seemed hesitant to follow this approach.

With all this, the EU faced a two-fold dilemma: giving in to Trump’s strategy and losing out on competitiveness while appearing to have little strategic autonomy, or seizing the opportunities with Huawei but straining the transatlantic relationship while potentially endangering critical infrastructure. As of 2024, the EU’s answer has been fragmented and disunited. Only 10 of the 27 member states have excluded Huawei, and although almost all states put some kind of restrictions in place, only a handful have implemented them (European Commission, 2023a). President Trump’s approach of pressuring allies and threatening to cut off intelligence-sharing may have been counterproductive, but it exposed an important weakness of the EU.

What Next – The Way Forward

With the US elections approaching, the EU has a window of opportunity to address this dilemma. The possibility of a second Trump administration brings the risk of further aggravating US-China ties and putting the EU in an even more uncomfortable position. The war in Ukraine has heightened the EU’s need for, and dependence on, intelligence-sharing with the US. Upcoming challenges in transatlantic relations are likely to have significant repercussions for the EU’s security. At the same time, EU-China relations are at heightened risk of descending into a trade war, as the latest developments around the export of Chinese electric vehicles demonstrate. The economic vulnerability of certain European member states to Chinese pressure adds another dimension to the complex task of achieving a united European approach. Essentially, the EU needs to safeguard its autonomy against unilateral actions while maintaining its competitiveness and ensuring the security of its critical infrastructure. To do that, policymakers should consider the following scenarios and the policy recommendations presented with them.

If Trump Wins

First, in case of a Trump victory, Europeans would have to embrace another period of uncertainty. A second Trump administration would renew concerns about US support for NATO, while protectionist policies would put direct pressure on transatlantic trade relations. President Trump could be expected to continue his previous hardline approach toward China, leading to an intensified trade war and a larger volume of Chinese exports being dumped on the European market. All the while, Europeans would increasingly be pulled into a trade and technology war with the Eastern power amid calls from the US to reduce relations. In this scenario, Trump’s rhetorical pressure, as in the earlier case of his call to exclude Huawei from the 5G rollout in order to maintain intelligence-sharing, might turn into actual policies. In 2025, this would come at a huge price given the EU’s dependence on American intelligence infrastructure to help Ukraine defend itself against Russia’s war. Any threats must therefore be taken seriously and addressed accordingly.

Internally, moreover, Trump’s success would galvanize far-right populist figures and movements. His ideological allies in Europe, such as Hungarian Prime Minister Viktor Orbán, Italian Prime Minister Giorgia Meloni, and Polish President Andrzej Duda, would be emboldened to continue on their path after a Trump victory. Far-right populist politicians would find renewed reassurance to oppose further European integration. Consequently, reaching unity on crucial foreign policy questions might be further hindered.

Faced with the prospect of this challenging situation, European policymakers would do well to address the potential pitfalls early on. Given the foreseeable fragmentation, the EU must strengthen and implement the frameworks it has already agreed upon (such as the 5G Cybersecurity Toolbox and the Digital Services Act). According to the latest assessment of the 5G Toolbox, which was adopted to mitigate security risks, only 10 of the 27 member states have restricted or excluded high-risk suppliers from their 5G networks (European Commission, 2023b). Based on its own and member states’ independent analyses, the European Commission considers Huawei, along with another Chinese company, ZTE, to ‘pose materially higher risk than other 5G providers.’ Dependency on these providers for critical infrastructure, as 5G networks are classified, creates a serious risk across the Union. Considering the level of interconnectedness between EU networks, a fragmented policy could jeopardize the entire bloc’s security. For instance, last year Hungary’s Minister of Foreign Affairs, Péter Szijjártó, highlighted Hungary’s development of 5G networks with the help of Huawei, alongside the signing of additional cooperation agreements with the company (Szijjártó, 2023).

To address the diverging approaches, the EU should develop a mechanism to actively encourage member states to implement the existing framework and use the available tools. It should also hold member states accountable for doing so. Considering the weight of the risks to the EU’s technological security, policymakers should call for an EU-wide regulation with clear and urgent deadlines. This would support the EU’s autonomy in making security-related decisions, as risk assessments are conducted both by member states and by the European Commission. Transatlantic relations would likely become friendlier as a result, and the EU’s security would increase. One downside of this approach, however, is the expected response from Beijing: China is likely to retaliate against a European policy naming its companies and restricting them from the market. Besides, reaching such an agreement at the European level will not be easy, as member states’ security priorities and relations with China differ significantly. Nevertheless, this approach offers the EU a starting point for becoming a proactive actor.

If Harris Wins

If Americans choose a Harris administration for the next four years, the EU would find itself in a position similar to the one it occupied during the Biden administration, assuming that Harris takes a similar approach toward China. Despite their political opposition, President Joe Biden took an approach similar to that of his Republican predecessor. Biden ordered heavy tariffs on Chinese imports of high-tech items such as semiconductor chips while diversifying US import sources toward partners such as the EU and Mexico (Davis, 2024; Lovely & Yan, 2024). In doing so, the United States has become less dependent on China for all types of imported manufactured goods since 2018, according to recently released 2023 customs data (Lovely & Yan, 2024).

The EU and China, however, “have maintained or increased their reliance on each other for almost all types of imported goods” (Lovely & Yan, 2024). As such, the EU could potentially clash with the US by maintaining this dependence, which showcases a form of limited autonomy. On the one hand, the EU exercises its agency by choosing to maintain and deepen ties with China. On the other hand, the EU’s agency is somewhat limited by its trade dependency on China, which may compel it to act in favor of Beijing on certain issues.

A Harris administration would likely maintain the use of tariffs, particularly targeting China, to counter perceived unfair competition, as emphasized by Trump, and to drive progress in the US energy transition in support of its emissions reduction goals. This was evident during the presidential debate between Harris and Trump in September 2024. She highlighted Trump’s failed attempt to subdue China as an economic powerhouse, arguing that “under Donald Trump’s presidency, he ended up selling American chips to China to help them improve and modernize their military” (Butts, 2024). She concluded with the statement, “[he] basically sold us out when a policy about China should be in making sure the United States of America wins the competition for the 21st century” (Butts, 2024). This comment indicates to the EU and other US allies that Harris is likely to continue Biden’s approach if she wins the presidential race.

In this scenario, the EU faces a more predictable transatlantic landscape. This, however, may prove more perilous. Although Harris would follow a hardline approach toward China, and the pressure on allies not to share advanced technology with Beijing would remain, she is unlikely to push the EU strongly. In contrast to the Trump administration, she is likely to rely on softer means of persuasion rather than coercive rhetoric. This carries the risk that the EU will sit on its hands for too long instead of addressing the legitimate security threats that China poses. To ensure that the resilience of technological security remains a priority, the European Parliament should establish a sub-committee of the Committee on Industry, Research and Energy (ITRE). The sub-committee should deal with the security considerations that come with technologies and equipment from third countries and should ensure that the interests of European citizens are considered in tech security-related questions. This would address the risk of de-prioritization and would contribute to enhanced and more nuanced debates. Considering the viewpoints of Members of Parliament directly through the sub-committee could help the European Commission propose regulations that are more likely to enjoy support. The only constraining factor to consider is the budget for setting up the sub-committee, but the importance of the issue should outweigh that concern.

Conclusion

This paper highlighted the importance of European technological security and examined the different scenarios European leaders will face after the US presidential election. The example of the 5G rollout in the EU and the debates around using the Chinese company Huawei as a technology provider illustrated the EU’s vulnerability when it comes to maintaining its autonomy and competitiveness in the tech sector. In a rapidly changing global landscape, EU leaders face a crucial dilemma about the way forward. To maintain technological competitiveness, the EU may have no choice but to rely on Chinese partners, while to ensure the continent’s security and stability, it cannot afford to alienate its key transatlantic partner. At the same time, legitimate security risks should be neither overlooked nor treated as subordinate to trade relations.

This paper offers a concise depiction of the main factors EU leaders should consider as Americans head to the polls. In either scenario, what is crucial for the EU is to be prepared and to engage in collective planning. A second Trump administration is likely to bring a more hectic and turbulent period. His framing of China as a security threat could lead to more pressure on European allies to cut ties with Beijing, while his victory could galvanize European populists, making it harder to achieve consensus at the European level. To offset this, the paper recommends taking concrete steps to implement the already existing framework and to strengthen the available toolbox. In case of a Harris victory, the EU can expect reasonable continuity; perhaps the most important challenge the bloc will face will be finding the impetus to keep technological security in focus. The paper argues that one way to do so would be to set up a dedicated sub-committee within the European Parliament to keep the issue on the agenda and to ensure that the interests of European citizens are considered.


 

Authors’ Biographies

Anton Miguel De Vera is an MA student in International Business and Economic Diplomacy at IMC FH Krems. He previously earned a bachelor’s degree in Philosophy, Politics, and Economics from Central European University in Vienna, where he specialized in International Relations and Economics. His thesis examined the dynamics of Philippine agency within the US-Philippine security alliance and its nuanced relationship with China, entitled “The Faces of Philippine Agency in Foreign Affairs: The Philippines and the United States Security Alliances”. Currently based in Vienna, Anton works at Raiffeisen Bank International, where he combines his academic expertise with practical experience in finance and international relations.

Viktoriia Hamaiunova is a Ph.D. candidate at Newcastle University (UK), where she investigates the role of legal culture in shaping fair trial standards within ECHR member states, focusing on the integration of mediation into judicial systems to enhance human rights protections. Her research combines doctrinal and non-doctrinal approaches, incorporating thematic analysis and insights from interviews with ECtHR judges to examine how legal culture influences judicial reform and access to justice. She holds an MA in International Law and Human Rights from the University of Tartu, enriched by academic exchanges at Masaryk University and Comenius University. Her legal career includes in-house experience and a traineeship at the ECtHR. An accredited mediator and published author, she has presented her work at prominent conferences, including the SLSA Annual Conference and the Human Rights Law Conference at the University of Cambridge. With extensive teaching experience, she leads discussions on topics spanning international law to mediation practices. As an interdisciplinary researcher, she is committed to culturally informed legal reforms, fostering development and facilitating discussions on effective judicial systems and dispute resolution.

Réka Koleszár is an independent researcher focusing on the relations between the European Union and Asia, in particular East Asia. Her experience spans international organizations and think tanks including working for the Council of the European Union and the European Policy Centre. Réka holds an MSc in Political Science from Leiden University, an MA in International Relations specializing in East Asian studies from the University of Groningen, and a diploma in the Art of Diplomacy from the European Academy of Diplomacy.

Giada Pasquettaz has been a doctoral student at the Chair of Political Science and International Politics of Prof. Dr. Dirk Leuffen since October 2023. Her interests lie mainly in political communication, international relations, political behavior, comparative politics, and quantitative methods. She holds a master’s degree in mass media and politics, with a focus on the communication of international social movements, from the University of Bologna. She also completed her bachelor’s degree in Sociology at the University of Bologna, specializing in migration frames used in the media. She completed semesters abroad at the University of Sundsvall (Sweden), UCLouvain (Belgium), and UiT Tromsø (Norway).


 

References

— (2016, May 2). “Trump accuses China of ‘raping’ US with unfair trade policy.” BBC News. https://www.bbc.com/news/election-us-2016-36185012

— (2020). “Trump threatens to cut intelligence-sharing ties with nations over Huawei.” Business Standard. https://www.business-standard.com/article/pti-stories/trump-threatens-intelligence-block-over-huawei-us-diplomat-120021700106_1.html

— (2020, March 19). “Trump again defends use of the term ‘China virus’” CNN. https://edition.cnn.com/2020/03/17/politics/trump-china-coronavirus/index.html

— (2020, March 18). “Trump defends calling coronavirus the ‘Chinese virus’.” The New York Times. https://www.nytimes.com/2020/03/18/us/politics/china-virus.html

— (2021). “Major European 5G trials and pilots.” 5G Observatory. https://5gobservatory.eu/5g-trial/major-european-5g-trials-and-pilots/

Bose, N., & Shalal, A. (2019, August 7). “Trump says China is ‘killing us with unfair trade deals’.” Reuters. https://www.reuters.com/article/world/trump-says-china-is-killing-us-with-unfair-trade-deals-idUSKCN1UX1WU/

Butts, D. (2024, September 11). “Harris says Trump “sold us out on China”: Highlights from the presidential debate on trade and tariffs.” CNBC. https://www.cnbc.com/2024/09/11/harris-vs-trump-on-china-debate-highlights-on-trade-and-tariffs.html

Davis, B. (2024, October 28). “How Washington Learned to Stop Worrying and Embrace Protectionism.” Foreign Policy. https://foreignpolicy.com/2024/09/10/us-protectionism-biden-trump-tarrifs-harris-china/

Entman, R. M. (1993). “Framing: Toward clarification of a fractured paradigm.” Journal of Communication, 43(4), 51–58. https://doi.org/10.1111/j.1460-2466.1993.tb01304.x

Entman, R. M. (2007). “Framing bias: Media in the distribution of power.” Journal of Communication, 57(1), 163-173. https://doi.org/10.1111/j.1460-2466.2006.00336.x

European Commission. (2019). “5G Observatory Quarterly Report 2.” http://5gobservatory.eu/wp-content/uploads/2019/01/80082-5G-Observatory-Quarterly-report-2-V2.pdf

European Commission. (2023a). “5G security: The EU case for Banning High-risk suppliers: Statement by commissioner Thierry Breton.” https://ec.europa.eu/commission/presscorner/detail/en/statement_23_3312

European Commission. (2023b). “Communication from the commission: Implementation of the 5G cybersecurity toolbox.” https://digital-strategy.ec.europa.eu/en/library/communication-commission-implementation-5g-cybersecurity-toolbox

European Commission. (2024). “5G.” https://digital-strategy.ec.europa.eu/en/policies/5g

Goffman, E. (1974). Frame analysis: An essay on the organization of experience. Northeastern UP.

Gordon, J. (2024, September 30). “Kamala Harris and trade: Better than the alternative, but not much.” Lowy Institute. https://www.lowyinstitute.org/the-interpreter/kamala-harris-trade-better-alternative-not-much

Lovely, M. E. & Yan, J. (2024, August 27). “While the US and China decouple, the EU and China deepen trade dependencies.” PIIE. https://www.piie.com/blogs/realtime-economics/2024/while-us-and-china-decouple-eu-and-china-deepen-trade-dependencies

Inglehart, R., & Norris, P. (2016). “Trump, Brexit, and the rise of populism: Economic

have-nots and cultural backlash.” Harvard Kennedy School Faculty Research Working Paper Series. https://doi.org/10.2139/ssrn.2818659

Mudde, C. (2004). “The populist zeitgeist.” Government and Opposition, 39(4), 541-563. https://doi.org/10.1111/j.1477-7053.2004.00135.x

Szijjártó Péter. (2023). “Magyarország élen jár a digitalizáció területén és ebben a Huaweinek meghatározó szerepe van.” https://www.facebook.com/szijjarto.peter.official/posts/776242160635746?ref=embed_post

Greve, Joan E. (2020, July 29). “Trump says ‘nobody likes me’ when asked about Fauci’s absence – as it happened.” The Guardian. https://www.theguardian.com/us-news/live/2020/jul/28/covid-19-coronavirus-deaths-donald-trump-news-latest

Trump, D. J. [@realDonaldTrump]. (2020, March 16). The United States will be powerfully supporting those industries, like Airlines and others, that are particularly affected by the Chinese Virus [Tweet]. X. https://x.com/realDonaldTrump/status/1239685852093169664

Trump, D. J. [@realDonaldTrump]. (2024, October 21). MAKE AMERICA GREAT AGAIN [Tweet]. X. https://x.com/realDonaldTrump/status/1848133172459970889

Trump, D. J. [@realDonaldTrump]. (2024, October 21). MAKE AMERICA GREAT AGAIN [Tweet]. X. https://x.com/realDonaldTrump/status/1848133172459970889

Woo, S., & O’Keeffe, K. (2018). Washington asks allies to drop Huawei. https://www.wsj.com/articles/washington-asks-allies-to-drop-huawei-154296510

Digital

Hybrid Workshop: Authoritarian Information Manipulation and Dissemination — National, Transnational, and International Perspectives

Date/Venue: November 7-8, 2024 — Deakin Burwood Corporate Centre (BCC)


The emergence of repressive and authoritarian “hybrid regimes” poses one of the most significant threats to democracy today. These regimes and authoritarian actors wield information suppression and manipulation as essential tools to disseminate narratives that erode democratic institutions. This issue transcends national borders; digital technologies now enable authoritarian states to infiltrate robust democracies, allowing them to project their authoritarian narratives globally. The transnationalization of authoritarian politics, facilitated by digital technologies, presents substantial challenges to the integrity of democratic processes and institutions.

In response to these challenges, our workshop aims to investigate how various actors—governments, non-state organizations, state-sponsored entities, and political parties—suppress and manipulate information to erode trust in democratic processes, both domestically and internationally. The workshop will also examine the darker dimensions of social media, focusing on the interactions between misinformation, negativity, and polarization.

The workshop, a collaborative effort organized by the Alfred Deakin Institute for Citizenship and Globalisation (ADI) at Deakin University, Australia, and the European Center for Populism Studies (ECPS) in Brussels, Belgium, will also address strategies to counter misinformation and disinformation, along with intervention techniques to mitigate their impacts. It will focus on countering disinformation through activism and explore everyday online experiences with misinformation, emphasizing the importance of evidence-based media literacy education initiatives. Additionally, the event will discuss necessary curricular reforms to combat disinformation, toxicity, and polarization in educational contexts, as well as the responses of political elites to conspiracy theories.

The organizing team, led by Professor Ihsan Yilmaz, encourages all participants to actively engage in discussions and share insights throughout the workshop. The aim of the workshop, funded by the Australian Political Studies Association (APSA), the Australian Research Council (ARC), and the Gerda Henkel Foundation, is to deepen the understanding of these critical issues and explore collaborative strategies to combat misinformation and disinformation in our increasingly complex digital environment.


Photo: Nico El-Nino.

International Conference on ‘Digital Complexity and Disinformation in the Indo-Pacific’


We are pleased to announce that the International Conference on ‘Digital Complexity and Disinformation in the Indo-Pacific’ is scheduled to take place on September 25-26, 2024, both online via Zoom and in person in Melbourne. This significant conference is a collaborative effort organized by the Alfred Deakin Institute for Citizenship and Globalisation (ADI) at Deakin University, Australia; the Department of Communication Sciences at Universitas Indonesia (UI); the Institute of Social Sciences and Humanities at the National Research and Innovation Agency (BRIN); the Faculty of Social and Political Sciences at Universitas Gadjah Mada (UGM); the Faculty of Social and Political Sciences at Universitas Muhammadiyah Malang (UMM); the Department of Political Science at Kulliyyah of Islamic Revealed Knowledge and Human Sciences, International Islamic University Malaysia (IIUM); the State Islamic University (UIN) Salatiga; and the European Center for Populism Studies (ECPS) in Brussels, Belgium.

While digital technologies have revolutionized many aspects of our societies, the promises of inclusivity and progress they bring often do not align with the realities on the ground. These technologies are increasingly being exploited as tools for disinformation, political manipulation, and even digital authoritarianism, posing significant challenges to democratic values and social cohesion.

The Indo-Pacific region is particularly susceptible to these challenges, as disinformation and misinformation spread rapidly across digital platforms such as Facebook, Twitter, Instagram, Telegram, and WhatsApp, exacerbating societal divisions. Moreover, political actors often leverage these platforms to silence criticism, control information flows, and even restrict access to critical digital infrastructure to consolidate their power.

The discussions during this international conference will explore the complex interactions between digital technologies, cyberspace, social media platforms, and political dynamics in the region. Scholars, practitioners, and policymakers from various institutions will offer insights into the impacts of digital disinformation and explore pathways to counter these challenges while promoting digital literacy and inclusivity.

This conference is made possible through the generous support of the Australian Research Council (ARC) via a Discovery Project grant, the Gerda Henkel Foundation, the European Center for Populism Studies (ECPS), and the Alfred Deakin Institute (ADI), all of which are committed to fostering academic inquiry into this pressing global issue. The conference aims to serve as a platform for fruitful discussions and meaningful collaboration, enabling us to better understand digital complexity and its implications for democracy in the Indo-Pacific.


Turkey's President Recep Tayyip Erdogan and Ali Erbas, the head of the Directorate of Religious Affairs (Diyanet), are seen during a public rally in Istanbul marking the second anniversary of the failed coup attempt of July 15, 2016. Photo: Shutterstock.

Digital Authoritarianism and Religious Populism in Turkey

Please cite as:
Kenes, Bulent & Yilmaz, Ihsan. (2024). “Digital Authoritarianism and Religious Populism in Turkey.” Populism & Politics (P&P). European Center for Populism Studies (ECPS). September 14, 2024. https://doi.org/10.55271/pp0042

 

Abstract

This article explores the interplay between religious populism, religious justification and the systematic attempts to control cyberspace by the Justice and Development Party (AKP) in Turkey. Drawing from an array of scholarly sources, media reports, and legislative developments, the study unravels the multifaceted strategies employed by the ruling AKP to monopolize digital media spaces and control the information published, consumed and shared within these spaces. The narrative navigates the evolution of the AKP’s tactics, spotlighting the fusion of religious discourse with state policies to legitimize stringent control mechanisms within the digital sphere. Emphasizing the entwinement of Islamist populism with digital authoritarianism, the article provides evidence of the strategic utilization of religious platforms, figures, and media outlets to reinforce the narrative of digital authoritarianism as a protector of Islamic values and societal morality. Key focal points include the instrumentalization of state-controlled mosques and religious institutions to propagate government narratives on digital media censorship, alongside the co-option of religious leaders to endorse control policies. The article traces the rise of pro-AKP media entities and the coercive tactics used to stifle dissent, culminating in the domination of digital spaces by government-aligned voices. Furthermore, the analysis elucidates recent legislative endeavors aimed at further tightening the government’s grip on social media platforms, exploring the potential implications for free speech and democratic discourse in the digital realm. 

Keywords: Digital Authoritarianism, Religious Populism, Media Control, Islamism, Digital Governance, Cyberspace, Fatwas, Sermons

 

By Bulent Kenes & Ihsan Yilmaz

Introduction

The rise of religious populism and authoritarianism marks Turkey’s political trajectory under Erdoganism, through which the ruling Justice and Development Party (AKP) has transformed the nation’s governance since 2002. The aftermath of Kemalism brought with it a paradoxical quest for modernization within a less-than-democratic framework. The AKP’s ascent heralded a shift: the party initially projected pro-democratic sentiments but is now defined by authoritarian leanings akin to those of the Kemalist regime. This metamorphosis mirrors global trends in which authoritarian governance has seeped into democratic systems.

The distinctiveness of Erdoganism lies in its merging of Islamist populism into Turkey’s political fabric, fostering electoral authoritarianism, neopatrimonialism, and populism. AKP leader and Turkish President Recep Tayyip Erdogan’s centralized authority converges the Turkish state, society, and governmental institutions, perpetuating a widespread sense of uncertainty, fear, and trust in a strong leader that bolsters authoritarianism. The dynamics of religion, state, and identity construction redefine Turkey’s sociopolitical landscape, with governmental activities aimed at constructing a ‘pious generation’ while diminishing voices of dissent (Yabanci, 2019).

The political landscape in Turkey, particularly under the rule of the AKP, has witnessed a discernible shift marked by increasingly stringent measures against various segments of society. This trend notably encompasses a wide spectrum of individuals, including political opposition factions, minority groups, human rights advocates, academics, journalists, and dissenting voices within civil society (Westendarp, 2021; BBC News, 2020; BBC News, 2017a; Homberg et al., 2017).

Statistics paint a stark picture of the government’s crackdown: alarmingly, more than 150,000 individuals have been dismissed from their positions, while over 2 million people have become subjects of “terrorism investigations” following the 2016 coup attempt in the country (Turkish Minute, 2022). Furthermore, approximately 100,000 arrests have been documented since the onset of these measures in 2016. The widespread erasure of oppositional or critical voices – real or potential – extends beyond the targeting of individuals to encompass entire institutions. Academic institutions have borne the brunt of this oppressive regime, with more than 3,000 educational establishments closed and 6,000 scholars dismissed. The media sector has also suffered a significant blow, with 319 journalists arrested and 189 media outlets forcibly shut down, signaling a profound attack on free speech and the press. The legal profession has likewise been targeted, losing some 4,500 legal professionals (Turkey Purge, 2019).

Moreover, the AKP’s influence has transcended national borders, impacting Turkish citizens living in diasporas around the world. Instances of extradition of members of the Turkish diaspora on charges related to terrorism or alleged connections to security threats have been reported, highlighting the government’s efforts to exert control beyond its territorial boundaries. This phenomenon has led to the perception of the government as possessing “long arms,” capable of reaching, influencing, and punishing individuals even when living outside the country (Edwards, 2018).

The evolution of Turkey’s digital landscape since 2016 reveals a pronounced shift marked by intensified security protocols and offline repression. A critical assessment by Freedom House, evaluating global internet freedom between 2016 and 2020, highlights a tangible decline in internet freedom in Turkey, which intensified significantly after the failed coup attempt of 2016. Notably, the classification of the country’s internet freedom as “not free” underscores the severity of the limitations imposed during these years (Daily Sabah, 2021a, 2021b; World Bank, 2021).

Pervasive Online Presence of Turkish Citizens

Despite this lack of freedom, statistics highlight the pervasive influence of the internet within Turkish society (World Bank, 2021). A study from the first quarter of 2021 indicated that over 80 percent of internet users were consistently active online during those three months, highlighting the integral role the internet plays in the lives of citizens (Daily Sabah, 2021). Data reports tracking the internet usage of Turkish citizens suggest that in early 2024, internet penetration in Turkey stood at an all-time high of 86.5 percent (Kemp, 2024). These findings demonstrate sustained and pervasive digital engagement within the populace.

Social media findings further underscore the influence of internet usage, revealing an average daily online duration of 7 hours and 29 minutes per individual (Bianet, 2020). By January 2024, the number of social media users in the country stood at 57.5 million, or nearly 70 percent of the total population (Kemp, 2024). Social media platforms, including Facebook, YouTube, WhatsApp, Instagram, TikTok, Snapchat, and Twitter, account for this considerable online presence (Bianet, 2020).

Crucially for this discussion, this digital landscape has become a vital arena for dissenting voices, particularly as traditional media outlets witness declining audience numbers.

Consequently, the internet has emerged as a potent tool for voices of opposition within Turkey. In response to the growing opportunities these voices enjoy in an increasingly online society, the AKP government has initiated various regulatory and surveillance measures aimed at controlling and monitoring the digital sphere, reflecting efforts to suppress dissenting narratives and oppositional voices (Bellut, 2021). These efforts at digital governance reflect and intensify the government’s broader strategy of curtailing dissent across various levels of society.

The AKP’s Use of Religion to Legitimize a Digital Authoritarian Agenda

The intertwining of religion and state under the AKP’s governance has legitimized and fortified its digital authoritarianism. For example, a recent trend reveals the government’s adept use of Islamic discourse to rationalize the imposition of censorship and crackdowns on online opposition, portraying control over digital technology as a safeguard for Turkish values and moral rectitude. The strategic operationalization of religious values as a legitimizing force for digital authoritarianism is highly indicative of the AKP government’s efforts at consolidating power and suppressing opposition within the online sphere, profoundly shaping the contours of digital discourse and expression in Turkey.

Central to this strategy is the dissemination of Islamic values through state-managed religious institutions, traditional media, and social media platforms, all serving as conduits for aligning public sentiment with the government’s digital autocratic agenda. The propagation of Islamic tenets has been instrumental in molding public opinion to favor the government’s stringent and increasingly authoritarian approach to digital governance. In an effort to increase legitimacy and garner wider support, religious leaders and organizations have been strategically co-opted to support the government’s digital authoritarian agenda.

The cumulative effect of the integration of religion and digital governance has been a pervasive climate of censorship and self-censorship online. Individuals are discouraged from expressing dissenting views or disseminating information that could be perceived as contradicting religious principles. This climate of caution and apprehension inhibits free expression and discourse within the digital realm, not only fortifying the government’s authoritarian stance but also shaping the behavioral patterns of online users and curtailing the free flow of information and divergent opinions.

By adopting an interdisciplinary approach encompassing political science, religious studies, media analysis, and socio-political discourse, the paper aims to provide a comprehensive and empirically informed understanding of how religious justification has been systematically employed to legitimize methods of controlling voices of dissent online and foster a pro-AKP narrative in Turkey’s digital governance landscape.

This analysis will contribute to a deeper comprehension of the complex interplay between religion, politics, and digital authoritarianism in contemporary Turkey. This study will highlight how the ruling AKP fuses religion with the state’s digital agenda. It will also demonstrate the party’s reliance on a network of religious platforms, figures, and media to reinforce the narrative of digital authoritarianism as a means of upholding Islamic values and protecting societal morality. The confluence of religious influence and governmental objectives, it will be argued, serves to shape public opinion and garner support for stringent control measures within the digital realm.

Religious Populism of Erdoganism and the AKP’s Authoritarianism

Since the country’s formation in 1923, Turkey has never been perceived as a highly democratic country from the perspective of Western libertarianism. Its initial phase featured a national reconstruction, from the worn-out remnants of the Ottoman Empire – which had suffered a humiliating defeat at the hands of the Allied forces in World War One (WWI) – toward the Republic. The Young Turks, who later became the Kemalists, set the country on a path of reformation marked by paradoxical ideas of modernization. While the country moved from a centuries-old monarchy to a parliamentary system, it remained far from democratic (Yilmaz, 2021a). Between 1923 and 1946, Turkey was ruled by the Kemalist Republican People’s Party (CHP) alone. Even after the commencement of multi-party elections, the Turkish political and institutional landscape continued to be dominated by Kemalists until the AKP rose to power. The only exception was a brief period between 1996 and 1997, when Necmettin Erbakan and his Milli Gorus (National View)-inspired Welfare Party (RP) held office (Yildiz, 2003).

The transition from Kemalism to Erdoganism, President Erdogan’s political ideology, was meticulously orchestrated, consolidating the state narrative and silencing opposing voices. The AKP initiated significant constitutional changes, starting with a referendum aimed at removing the Kemalist judiciary from power, alongside the Ergenekon and Sledgehammer trials, which targeted key Kemalist military figures (Kuru, 2012: 51). Although these trials never conclusively proved the accused’s ‘anti-state’ intentions, they significantly swayed public opinion against Kemalist control of the judiciary and military.

The 2010 Turkish constitutional referendum delivered an overwhelming victory to the AKP, which sought increased control over the judiciary and military (Kalaycioglu, 2011). The outcome expanded parliamentary and presidential authority over appointments to the Constitutional Court and the Supreme Board of Judges and Prosecutors (HSYK), enabling the AKP government to install its own appointees. This marked the end of Kemalist dominance in these institutions and paved the way for AKP influence – and an increasingly authoritarian agenda.

The AKP’s authoritarianism is distinguished from Kemalism by its adept blending of Islamist populism into its political discourse and agenda. While Kemalists championed secularism and Turkish nationalism, Erdoganists espouse an iron-fisted Islamist ideology rooted in the legacy of the Ottoman Empire. This has birthed a new form of autocracy known as “Erdoganism” (Yilmaz & Bashirov, 2018), characterized by four pivotal elements: electoral authoritarianism, neopatrimonialism, populism, and Islamism (Yilmaz & Turner, 2019; Yilmaz & Bashirov, 2018).

Turkey’s socio-political landscape has declined rapidly, from an initially promising image of democratization to an authoritarian posture of governance following the AKP’s ascent in 2002. The AKP’s transition from a seemingly pro-democracy to an authoritarian party has come to resemble the Kemalist tradition of violating democratic freedoms and rights (Yilmaz & Bashirov, 2018). Today, the public presence of the military and arbitrary crackdowns and arrests are normalized activities of the Turkish state.

Erdogan’s dominant persona has resulted in the centralization of power around his leadership. This was particularly evident following the 2017 constitutional referendum, which transitioned the country to a presidential system. Under this concentration of power, Erdoganism has brought about an assimilation of the Turkish nation, state, and its economic, social, and political institutions (Yilmaz & Bashirov, 2018). By positioning himself as a referent object, Erdogan reinforces his grip on power while redefining the contours of Turkish identity, politics, and, as will be developed in this paper, the relationship between religion and the state (Yilmaz, 2000; Yilmaz, 2008; Yilmaz et al., 2021a; Yilmaz & Erturk, 2022, 2021; Yilmaz et al., 2021b).

Co-opting of Religious Authorities and the Diyanet to Support AKP’s Authoritarian Agenda

President Erdogan has solidified the politicoreligious ideology of Erdoganism by fostering a close alliance with Turkey’s official religious authority, the Directorate of Religious Affairs (Diyanet). Initially established in 1924 by the Kemalist regime to centralize religious activities and advocate for a ‘secular’ form of Turkish Islam, Diyanet’s role has significantly expanded since the ascent of the AKP and has transformed to accommodate the party’s political Islamist identity.

This relationship is reflected in the increased budget allocated to the religious authority. The Turkish government’s 2023 budget proposal notably raised Diyanet’s budget by 117 percent (Duvar, 2022). This has translated into a substantial increase in funding grants and financial incentives, and into the heightened prestige of religious leaders and prominent imams. In return, Diyanet extends its loyalty and political support, including aligning with the AKP on digital policy and governance. President Erdogan strategically appoints pro-government religious figures, such as Ali Erbas, now President of Diyanet, to influential positions. Erbas, recognized for his religious conservatism, has cultivated a close relationship with President Erdogan and endorsed his call for a new Constitution (Martin, 2021).

Erbas’ conspicuous presence in public and political affairs underscores the intimate rapport between him and Erdogan. For instance, during the inauguration of the new Court of Cassation building, attended by President Erdogan, Erbas led a prayer praising its new location (Duvar, 2021). Additionally, Erbas represented President Erdogan at the funeral of Islamic cleric Yusuf al-Qaradawi, a supporter of Erdogan and the Muslim Brotherhood, in 2022 (Nordic Monitor, 2022). The building of ties between members of the government and the religious organization strengthens Diyanet’s role not just as a religious institution but also as a significant political force.

The Erdogan/AKP government has harnessed religious institutions, in particular mosques, to disseminate its positions and policies to the broader public through sermons, religious teachings, and various activities. A content analysis spanning from 2010 to 2021 reveals that Diyanet-run Friday sermons mirror the political stance of the AKP. These sermons were found to support Turkey’s involvement in the Syrian conflict, while vilifying ‘FETOists’—referring to the Gulen movement accused of terrorism. This analysis showcases how Diyanet employs affective religious rhetoric to endorse Erdogan’s decisions, discourage opposition, vilify perceived adversaries, propagate fear and conspiracies, and divert attention from the government’s shortcomings in areas spanning foreign policy, economics, and beyond (Yilmaz & Albayrak, 2022; Rogenhofer & Panievsky, 2020).

The Diyanet has significantly expanded its media presence since 2010, operating television and radio channels with escalating expenditure on publicity. The organization and its leader, Erbas, also have an active presence and a significant following on social media platforms such as YouTube, Twitter, and Facebook (Yilmaz & Albayrak, 2022). This heightened outreach has effectively filled the void created by the purge of groups like the Gulen movement and critical academic voices, both in the digital sphere and beyond (Yilmaz & Albayrak, 2022; Andi et al., 2020; Parkinson et al., 2014).

The close alliance between the Diyanet and the AKP has seen the past two heads of the organization employ faith-based justifications to support Erdogan’s moral campaign against perceived ‘internal’ and ‘external’ adversaries (Andi et al., 2020; Parkinson et al., 2014). The increasingly stringent control over the digital sphere is framed by Diyanet in Islamic terms. Thus, the emotionally charged narratives instrumentalized by the AKP (Yilmaz & Bashirov, 2018) have become directly intertwined with the religious directives and stances of the Diyanet (Yilmaz & Albayrak, 2022; Yilmaz et al., 2021a; Rogenhofer & Panievsky, 2020). Diyanet extends its influence not only within Turkish territories but also among the Turkish diaspora, functioning as an advisor to the AKP in diaspora communities. Consequently, through the transnational reach of the religious organization, the AKP’s authoritarian agenda has transcended national borders.

The Diyanet’s Moral Stance Against Social Media

Under the presidential system, the President of Diyanet, appointed by Erdogan, wields significant influence as the head of Turkey’s centralized religious authority and, through its network of mosques, globally (Danforth, 2020). Former President of Diyanet Mehmet Gormez openly criticized social media, attributing various societal harms to it. In 2016, Diyanet organized a forum titled “Social Media and the Family in the Context of Privacy,” aligning with the government’s calls for social media control. The forum aimed to emphasize traditional family values and discuss the perceived negative impact of social media on privacy and marriage. Gormez advocated for Diyanet to create a social media catechism, reinforcing the ideological harmony between Diyanet and Erdogan’s regime and consolidating authoritarianism both online and offline (Yilmaz & Albayrak, 2022; Yilmaz et al., 2021a; Danforth, 2020).

Diyanet has also actively engaged in efforts to exert stronger control over social media by publishing a booklet titled “Social Media Ethics,” using Islam as a guiding principle for this framework (Duvar, 2021). In the preface he personally authored, top imam Ali Erbas cautioned readers about the omnipotent governance of God extending to social media activities under Islamic law. Additionally, believers were alerted to the perils of “fake news” and urged to create a “world of truth” (Duvar, 2021; Turkish Minute, 2021).

Moreover, Diyanet’s Friday sermons have increasingly addressed themes related to social media, technology, and morality. On January 17, 2020, a sermon titled ‘Technology Addiction and Social Media Ethics’ was circulated by Diyanet, cautioning people about the dangers of the Internet violating the five fundamental values of Islam. It highlighted that the indiscriminate use of technology poses threats to human health, causes financial losses, erodes human dignity through unethical behaviors, undermines human faith with radical ideologies, and impairs cognitive abilities (Diyanet, 2020).

The Role of Islamic Scholars in Legitimizing the AKP Digital Authoritarian Agenda

Within academia, several pro-AKP Islamic scholars have aligned themselves with the government’s digital authoritarian agenda. Figures like Nihat Hatipoglu and Hayrettin Karaman (Kenes, 2018), associated with the AKP, believe that social media spreads misinformation targeting Turkish national interests and could mislead youth. Since 2016, Karaman, who has advised Erdogan on creating a more Islamist – and less tolerant – society, has frequently accused social media of being used by “anti-Turkey” groups to spread lies (Yeni Safak, 2013). He highlights the dangers of false information being spread on these platforms, claiming that there is no room for rebuttal (Yeni Safak, 2021). A poem written by Karaman supports the AKP’s stance on social media, advocating increased control to cultivate a “pious youth” and suppress critical remarks aimed at the AKP (Yeni Safak, 2020).

Nihat Hatipoglu, a prominent pro-AKP Turkish academic and theologian, has utilized his ATV show to issue fatwas, cautioning viewers about the potential sins associated with social media usage. For instance, he warns that engaging with “questionable” individuals on these platforms can lead to false rumors and sin, and accountability will come in the afterlife (Akyol, 2016). His messaging is potent in digital governance because it moves beyond conventional vices like alcohol or adultery and highlights the significance of sins associated with online behaviors and consumption, such as false testimonies and envy.

Furthermore, both Karaman and Hatipoglu are openly critical of “Western” media and social platforms, and advocate for Islamic content. Together, they represent a prevalent viewpoint supporting AKP discourse that emphasizes caution and adherence to Islamic principles while engaging with digital platforms.

Digital Authoritarian Measures Against the LGBTQ+ Community

The intersection of religion, politics, and social media in Turkey has also created a complex landscape where certain communities, particularly LGBTQ+ groups, have faced significant challenges. Religious leaders and government officials have used their platforms to vilify LGBTQ+ activists and communities, contributing to a hostile environment for these individuals (Greenhalgh, 2020).

This hostility has significantly deepened with anti-LGBTQ+ messaging from the Turkish leadership. President Erdogan’s agenda has consistently focused on promoting a “pious youth” while openly framing atheists and LGBTQ+ identities as threats to societal and religious values (Gall, 2018). His party has employed rhetoric targeting Western values and certain youth groups, framing them as corruptive influences on Turkey’s future.

Although identifying as LGBTQ+ is not illegal in Turkey, the government has taken steps to restrict LGBTQ+ content and activism online (Woodward, 2019). These steps have included censoring LGBTQ+ content on platforms like TikTok and imposing restrictions on advertising across social media channels to suppress opposition groups (Euronews, 2021).

Moreover, there have been attempts to ban LGBTQ+ content outright, such as when Netflix was prohibited from airing a movie with an LGBTQ+ storyline, and the mobilization of hashtags advocating bans on LGBTQ+ content, such as #LGBTfilmgunleriyasaklansin (#BanLGBTFilmDays) and #İstiklalimizeKaraLeke (#StainOnOurIndependence) (Banka, 2020; Sari, 2018). These actions reflect the charged anti-LGBTQ+ sentiment prevalent in certain spheres of Turkish society and the state’s efforts to curtail LGBTQ+ visibility in the media and online discourse.

Government efforts to control and silence LGBTQ+ individuals have had clear repercussions in society, contributing, for example, to the demonization of LGBTQ+ youth during the Bogazici University protests in 2021 and to subsequent limitations on LGBTQ+ content across various platforms (Kucukgocmen, 2021; Woodward, 2019; Euronews, 2021).

AKP’s Digital Network Control, Restrictions, and Bans

The Gezi Park protests in 2013 marked a turning point in the Turkish government’s efforts to control the digital landscape. During this period, civil society groups and activists turned to social media to coordinate the protests, prompting the government to denounce Twitter as a significant threat to society. Internet governance subsequently tightened, and internet blackouts were orchestrated, under government directives, by the Information and Communication Technologies Authority (BTK). While the government justified these internet restrictions as anti-terrorism measures, their political motives were evident.

The peak of Turkish government internet shutdowns occurred between 2015 and 2017. This was facilitated by Internet Law No. 5651, introduced in 2007, which permits website blocking on multiple grounds, including terrorism-related content. The Erdogan regime’s broadened definition of “terrorism” was manipulated to silence dissenting voices and serve the interests of the ruling power. Gradually, the scope of “terrorist” in Turkey expanded to encompass peaceful protesters from events like the Gezi Park protests, anti-government activists labelled “FETOists,” and students involved in activism during Istanbul’s Bogazici University events in 2021 (Wilks, 2021; Yesil et al., 2017).

Internet Law 5651 thus became a tool to marginalize digital spaces for non-AKP or critical groups, using the power of the TIB (Telecommunication and Information Technology Authority) and imposing additional responsibilities on hosting services and intermediaries. The 2014 amendment to the Law on State Intelligence Services granted the National Intelligence Service (MIT) authority to gather, record, and analyze public and private data, compelling intermediaries to comply with MIT’s requests under the threat of incarceration (Human Rights Watch, 2014).

The eastern regions of Turkey, particularly areas with strong Kurdish resistance, bore the brunt of internet and cellular shutdowns during critical events like the 2015 Suruc suicide bombing and the 2016 Ataturk Airport bombing. These shutdowns were often localized and imposed during high-risk security incidents. The government’s increasingly authoritarian approach leveraged digital anti-terrorism laws to target marginalized groups, particularly the Kurds. It is noteworthy that most shutdowns occurred in the southeast, where Kurdish political activity is most concentrated. For instance, the 2016 closure of internet and landlines in 11 cities following the arrests of Diyarbakir’s mayor and co-mayor sparked protests and incurred significant economic costs for Turkey (Yackley, 2016).

Although internet shutdowns decreased from six in 2016 to one in 2020, their financial toll remains substantial, reaching $51 million in 2020 (Buchholz, 2021). While the precise role of religious justification and religious organizations in legitimizing comprehensive network governance remains unclear, their collaboration remains crucial to the government, playing a significant role in legitimizing various forms of digital governance – such as these internet shutdowns – that undermine democratic and digital freedom principles.

Digital Oppression Through the ‘Safe Use of the Internet’ Campaign

The 2011 “Safe Use of the Internet” campaign, initiated by the Information and Communication Technologies Authority (BTK), promoted a Turkish-built filter called the ‘family filter.’ Despite its name, however, the campaign primarily focused on regulating internet access in public spaces like cafes and libraries rather than imposing ‘safe’ restrictions within domestic settings. The campaign purported to protect children from non-age-appropriate content by blocking adult websites, both foreign and domestic. Interestingly, it did not mandate installation of the ‘family filter’ at home, seemingly placing the responsibility on parents to supervise their children’s internet use. Discussions about children’s privacy were also notably absent from the campaign, despite its stated objective (Hurriyet Daily News, 2014; Brunwasser, 2011).

Over time, concerns have emerged regarding the broader implications of the ‘family filter.’ Many speculate that this initiative, while supposedly aimed at blocking pornographic content, also serves as a tool for the state to censor critical voices within the digital space (Yesil et al., 2017). The criteria for blacklisting websites remain ambiguous, granting significant power to state authorities. By 2017, approximately 1.5 million websites had been blocked, particularly in public areas like cafes. Notably, the BTK has refrained from disclosing the list of websites it restricts (Yesil et al., 2017). This lack of transparency has fueled concerns about digital oppression and censorship orchestrated by the AKP under the guise of protecting children and youth online.

AKP’s Digital Authoritarianism: Sub-Network, Website and Platform Level

The Internet Law (No. 5651) described above has facilitated the monitoring and blocking of webpages and websites in Turkey. Despite amendments, the law remains problematic due to its arbitrary and vague provisions. Internet governance institutions hold broad discretion in determining acceptable versus unacceptable content. According to Freedom House’s latest report, internet freedoms in Turkey have been increasingly restricted in recent years (Freedom House, 2021). 

In 2006, prior to the introduction of the Internet Law, only four websites were blocked in Turkey. However, by 2008, this number had escalated to 1,014, reaching a staggering 27,812 in 2015. Government decisions using this law lack transparency and accountability, as blocking orders, often issued by the BTK, lack clear justifications, leaving website owners with limited recourse for appeal. Suspicion and precautionary measures are sometimes the sole reasons cited for blocking a website.

Following the 2016 coup attempt in Turkey, websites related to the Gulen movement, Gezi Protests, corruption allegations, and terrorism charges were blocked or taken down (Ergun, 2018). Government actions also targeted websites advocating opposition, Kurdish rights, LGBTQ+ rights, and pornography. Several news outlets, including Zaman and Today’s Zaman, were shut down in 2016. Websites promoting atheism, such as the Atheism Association, were also blocked under Article 216 of the Turkish Penal Law, which prohibits actions inciting hatred or enmity among people (Hurriyet Daily News, 2015).

Digital Control at the Proxy or Corporation Level

The politicization and framing of the July 2016 events by Erdogan and the AKP as an assault on Turkish sovereignty triggered severe digital restrictions. The disbandment of the TIB over alleged pro-Gulenist ties led to the transfer of its powers to the Information and Communication Technologies Authority (BTK). Consequently, approximately 150 online and traditional media outlets were completely shut down, resulting in the loss of jobs for 2,700 Turkish journalists (Kocer & Bozdag, 2020). The legal framework governing digital spaces in Turkey has been wielded against opposition and civil society voices while favoring AKP and pro-AKP groups.

Social media intermediaries operating in Turkey have faced various restrictions. According to the Internet Law, they are required to comply with the Turkish government’s requests or face bans. During a period of heightened discontent against the AKP in 2014, the TIB pressured Twitter, YouTube, and Facebook to remove critical content damaging to the ruling party. While Facebook swiftly complied, Twitter and YouTube faced national blockades for several hours before eventually complying with the requests (Yesil et al., 2017). In 2016, Google also adhered to thousands of content removal requests from the Turkish state (Yesil et al., 2017).

The 2019 transparency reports from Twitter and Facebook shed light on the Turkish government’s extensive demands for user information and content removal. Twitter received 350 information requests involving 596 accounts and 6,073 removal requests affecting 8,993 accounts, with a reported compliance rate of 5 percent; Turkey topped the list of countries issuing legal removal demands. Facebook, meanwhile, received 2,060 legal requests and 2,537 user information requests, complying with 73 percent of them (Freedom House, 2021).

Adding to this overall picture of digital surveillance and control, Turkey has imposed bans on approximately 450,000 domains, 140,000 URLs, and 42,000 tweets (Timucin, 2021). IFOD announced on August 7, 2024, that by the end of the first quarter of that year, a total of 1,043,312 websites and domain names had been blocked in Turkey, based on 892,951 decisions from 833 different institutions and courts; the organization noted that this number could rise as more domain names are identified (IFOD, 2024). Furthermore, in 2017, Wikipedia was banned in Turkey following a ruling from Ankara’s first Criminal Court linking certain articles to terror organizations. The court mandated edits to those articles before the website became accessible again in the country in 2020 (Hurriyet Daily News, 2020; The Guardian, 2017).

The Turkish government’s manipulation of news and entertainment content distribution is a well-documented strategy, implemented through its control over media outlets both locally and internationally. Beyond influencing social media and restricting local websites, additional methods of control are exercised over television, streaming, and various over-the-top media services (OTTs). In 2019, the government empowered the Radio and Television Supreme Council (RTUK) to issue licenses and made those licenses mandatory for streaming content in Turkey (Pearce, 2019; Yerlikaya, 2019).

The Turkish government has also employed various financial penalties, including fines and heavy taxes, to curb critical voices and hinder their independent operations. These tactics have forced many critical media outlets out of business, enabling pro-government entities to acquire their assets. For instance, the pro-government Demiroren Group acquired the Dogan Media Group following high taxes imposed by the government. Anadolu Ajansi (AA), enjoying government support, has increased its backing for the AKP government by 545 percent since 2002, with 91.1 percent of its Twitter coverage found to favor the government. The government’s informal means of bolstering pro-government content include shutting down anti-government entities and transferring or selling their outlets or platforms to pro-government supporters, establishing a clientelist relationship between the state and media (Yilmaz & Bashirov, 2018). For example, during the state of emergency in 2016, the Gulen-linked Samanyolu Group, Koza Ipek Group, and Feza Publications were seized and redistributed to President Erdogan’s loyalists (Timucin, 2021; BBC News, 2016; Yackley, 2016).

Digital Authoritarianism at the Network-Node or Individual Level

The Turkish government has intensified its crackdown on individual social media and online activities, particularly following the 2016 coup attempt. The Ministry of Interior, for example, reported investigations on over ten thousand individuals for their online engagements, resulting in legal action against over 3,700 and the arrest of more than 1,600 people. Within a two-month span between January and March 2018, over 6,000 social media accounts were probed, leading to legal consequences for over 2,000 individuals. Freedom House’s 2021 assessment further revealed that between 2013 and 2018, the government initiated over 20,000 legal cases against citizens due to their social media activities (Ergun, 2018).

A climate of self-censorship has become entrenched among Turkish internet users, owing to repeated government actions and crackdowns in recent years. Following the coup attempt, for example, academics and civil society voices were targeted by pro-AKP media outlets that alleged their involvement in “terrorism” (GIT North America, 2016). Journalists have seen a diminished space for dissenting opinions and risk being accused of or charged with terrorism under various legal articles, including Article 314/2, concerning association with armed organizations, and Articles 147 and 5, concerning crimes associated with terrorist intent and groups (Sahinkaya, 2021). The restriction of anti-AKP voices has heavily tilted mainstream conversation in favor of pro-AKP narratives, which dominate both online and offline domains.

The Turkish government actively suppresses dissent on social media, resorting to threats and arrests against individuals. In a 2015 incident, a Turkish court ordered Facebook to block pages and individuals engaging with content from Charlie Hebdo, a French magazine that published a cartoon insulting the Prophet Muhammad (Johnston, 2015). The Director of Communications of the Presidency warned citizens in May 2020 that even liking or sharing a post deemed unacceptable by the government could lead to trouble. Journalists, scholars, opposition figures, and civil society leaders critical of the government are increasingly vulnerable to prosecution.

The AKP’s influence in the digital public sphere is also notable in its internet trolling and online harassment campaigns, which are aimed at shaping narratives in favor of the party and against the opposition. Critics of the AKP, including journalists, academics, and artists, face a culture of “digital lynching and censorship” perpetrated by an army of party-affiliated trolls (Bulut & Yoruk, 2017). Post-2016, this situation has worsened, subjecting critical voices to intensified cyberbullying and deepening their persecution (Shearlaw, 2016). Many of these trolls are graduates of pro-AKP Imam Hatip schools and reportedly receive payment; successful trolls likely receive additional benefits from pro-AKP networks, including the TRT and Turkcell (Bulut & Yoruk, 2017). In addition to employing trolls, the AKP uses automated bots to amplify its presence in the digital space, disproportionately projecting its narrative across platforms (Irak & Ozturk, 2018).

The manipulation of social media platforms has become a significant concern across the globe, and particularly so in Turkey. In 2020, Twitter’s deletion of a substantial number of accounts from China, Russia, and Turkey revealed the extent of propaganda spread by these accounts. Many were focused on supporting President Erdogan, attacking opposition parties, and advocating for undemocratic reforms (Twitter Safety, 2020). The proliferation of fake accounts and bots, and the significant portion of posts originating from them, has skewed daily trends on Twitter (now X) and consequently distorted political discourse.

Disturbingly, online harassment and hate speech targeting individuals based on their political stance or ethnic background have gone without effective intervention. For instance, Garo Paylan, an HDP deputy of Turkish-Armenian heritage, faced online harassment for his political stance during the 2020 Azerbaijan–Armenia conflict (Briar, 2020). Meanwhile, controversial statements, such as Ibrahim Karagul’s suggestion of ‘accidentally’ bombing Armenians, did not receive the same scrutiny for hate speech (Barsoumian, 2020).

Conclusion

The merging of religion and the state’s digital authoritarian agenda serves as a potent tool for steering public opinion, validating control mechanisms, and fortifying the government’s authority. It exemplifies how the discourse of upholding Islamic values and societal morality can be strategically harnessed to garner support for stringent digital control measures, influencing public perception and behavior within the digital landscape. 

This article has identified numerous ways in which the AKP and its leader administer their authority over the digital realm in Turkey. Voices of dissent and opposition are silenced through a range of legislative and strategic measures, such as Internet Law No. 5651, the “Safe Use of the Internet” campaign, and online trolling and harassment practices that directly target critics of the government. Additionally, the AKP makes considerable attempts to control the online content its citizens can access or wish to access: the discussion has highlighted internet shutdowns, the blacklisting of websites, and warnings to Turkish citizens about the consequences of engaging with certain (oppositional) content.

The above measures are supported and legitimized by the AKP and Erdogan’s religious discourse, and by a network of pro-AKP religious authorities including the Diyanet, Islamic scholars, and preachers. By aligning digital control measures with Islamic values and societal morality, the government can justify its actions as essential for preserving the ethical fabric of society. This moral grounding lends an air of legitimacy and righteousness to measures that might otherwise be viewed as intrusive or oppressive.

The fusion of religious rhetoric with digital governance acts as a deterrent to dissent. The government discourages dissenting voices by associating opposition to these measures with a departure from religious principles, fostering a climate of self-censorship and compliance within the digital sphere.

Religious institutions, particularly the Diyanet, are heavily influential in conversations about social media ethics and in endorsing greater control over digital spaces, leading to an Islamization of the digital sphere. Strict limitations on blasphemy and on criticism of Islamic beliefs curtail freedom of expression online.

Ultimately, the combination of information and content control, legal measures, religious influence, and online manipulation creates a challenging scenario for digital governance in Turkey. These various elements work together to shape narratives, control dissent, create a pervasive environment of censorship and self-censorship, and restrict freedoms in the digital realm, impacting the country’s broader socio-political landscape.


References

— (2014). “Turkey: Spy Agency Law Opens Door to Abuse.” Human Rights Watch (HRW). April 29, 2014. https://www.hrw.org/news/2014/04/29/turkey-spy-agency-law-opens-door-abuse (accessed on August 17, 2024).

— (2014). “’Turkey’s new Internet law is the first step toward surveillance society,’ says cyberlaw expert.” Hurriyet Daily News. February 24, 2014. https://www.hurriyetdailynews.com/turkeys-new-internet-law-is-the-first-step-toward-surveillance-society-says-cyberlaw-expert-62815 (accessed on August 27, 2024).

— (2015). “Turkey Blocks Website of Its First Atheist Association.” Hurriyet Daily News. March 4, 2015. https://www.hurriyetdailynews.com/turkey-blocks-website-of-its-first-atheist-association-79163 (accessed on August 22, 2024).

— (2016). “Zaman newspaper: Seized Turkish daily ‘now pro-government’.” BBC News. March 6, 2016. https://www.bbc.com/news/world-europe-35739547 (accessed on August 18, 2024).

— (2017). “Turkey blocks Wikipedia under law designed to protect national security.” The Guardian. April 30, 2017. https://www.theguardian.com/world/2017/apr/29/turkey-blocks-wikipedia-under-law-designed-to-protect-national-security (accessed on August 21, 2024).

— (2017). “Turkey reverses female army officers’ headscarf ban.” BBC News. February 22, 2017. https://www.bbc.com/news/world-europe-39053064 (accessed on August 18, 2024).

— (2017a). “Turkey arrests 1,000 in raids targeting Gulen suspects.” BBC News. April 26, 2017.  https://www.bbc.com/news/world-europe-39716631 (accessed on August 18, 2024).

— (2019). “Turkey’s post-coup crackdown.” Turkey Purge. https://turkeypurge.com/

— (2020). “Disclosing networks of state-linked information operations we’ve removed.” Twitter Safety. June 12, 2020.  https://blog.twitter.com/en_us/topics/company/2020/information-operations-june-2020 (accessed on October 21, 2023).

— (2020). “Turkey court jails hundreds for life for 2016 coup plot against Erdogan.” BBC News. November 26, 2020. https://www.bbc.com/news/world-europe-55083955 (accessed on August 18, 2024).

— (2020). “Wikipedia ban lifted after top court ruling issued.” Hurriyet Daily News. January 15, 2020. https://www.hurriyetdailynews.com/wikipedia-ban-lifted-after-top-court-ruling-issued-150993 (accessed on August 12, 2024).

— (2020). “There are 54 million social media users in Turkey.” Bianet. July 2, 2020.  https://m.bianet.org/english/society/226764-there-are-54-million-social-media-users-in-turkey (accessed on August 17, 2024).

— (2020). “Technology Addiction and Social Media Ethics.” Diyanet. January 17, 2020.  https://dinhizmetleri.diyanet.gov.tr/Documents/Technology%20Addiction%20and%20Social%20Media%20Ethics.doc

— (2020). “Turkey.” Freedom House. https://freedomhouse.org/country/turkey/freedom-net/2020 (accessed on August 14, 2024).

— (2021). “Individuals using the Internet (% of population).” World Bank Data Bank.  https://data.worldbank.org/indicator/IT.NET.USER.ZS?locations=TR (accessed on August 15, 2024).

— (2021). “Turkey’s top imam calls for social media regulation amid discussions on new gov’t body.” Turkish Minute. September 7, 2021. https://www.turkishminute.com/2021/09/07/rkeys-top-imam-calls-for-social-media-regulation-amid-discussions-on-new-government-body/ (accessed on October 21, 2023).

— (2021). “Turkey: Events of 2020.” Human Rights Watch (HRW).  https://www.hrw.org/world-report/2021/country-chapters/turkey#8e519f (accessed on August 17, 2024).

— (2021). “Turkey bans advertising on Twitter and Pinterest under social media law.” Euronews. January 19, 2021. https://www.euronews.com/2021/01/19/turkey-bans-advertising-on-twitter-and-pinterest-under-social-media-law (accessed on August 14, 2024).

— (2021). “Turkish religious authority recommends Islamic jurisprudence to curb social media in new book.” Duvar. September 7, 2021. https://www.duvarenglish.com/turkish-religious-authority-recommends-islamic-jurisprudence-to-curb-social-media-in-new-book-news-58737 (accessed on August 12, 2024).

— (2021). “Turkish Parliament to create ‘digital map’ of country.” Daily Sabah. May 24, 2021. https://www.dailysabah.com/politics/legislation/turkish-parliament-to-create-digital-map-of-country (accessed on August 23, 2024).

— (2021a). “Turkey to open social media directorate.” Daily Sabah. August 23, 2021. https://www.dailysabah.com/politics/legislation/turkey-to-open-social-media-directorate (accessed on August 23, 2024).

— (2021b). “Rate of internet users in Turkey rises to 82.6%.” Daily Sabah. August 26, 2021. https://www.dailysabah.com/turkey/rate-of-internet-users-in-turkey-rises-to-826/news (accessed on August 23, 2024).

— (2022). “More than 2 million terrorism investigations launched in Turkey following failed coup: official data.” Turkish Minute. December 13, 2022. https://www.turkishminute.com/2022/12/13/e-than-2-million-terrorism-investigations-launched-in-turkey-following-failed-coup-official-data/ (accessed on August 29, 2024).

— (2022). “Erdoğan sent his chief imam to the funeral of cleric who endorsed suicide bombings.” Nordic Monitor. October 3, 2022. https://nordicmonitor.com/2022/09/erdogan-sent-his-chief-imam-to-the-funeral-of-cleric-who-endorsed-suicide-bombings/ (accessed on August 7, 2024).

— (2024). “1 Milyondan Fazla Alan Adı Erişime Engelli.” IFOD. August 7, 2024. https://ifade.org.tr/engelliweb/1-milyondan-fazla-alan-adi-erisime-engelli/ (accessed on August 9, 2024). 

Akyol, A.R. (2016). “Iranian film about Prophet Muhammad causes stir in Turkey”. Al Monitor. November 15, 2016. https://www.al-monitor.com/originals/2016/11/turkey-iranian-movie-about-prophet-causes-stir.html#ixzz7BczW8uvz (accessed on August 19, 2024).

Andi, S.S.; Erdem, A. & Carkoglu, A. (2020). “Internet and social media use and political knowledge: Evidence from Turkey.” Mediterranean Politics. 25(5), 579–599. https://doi.org/10.1080/13629395.2019.1635816

Banka, N. (2019). “Why Netflix cancelled a Turkish drama after row over an LGBTQ character.” Indian Express. July 22, 2019. https://indianexpress.com/article/explained/explained-why-netflix-cancelled-a-turkish-drama-after-row-over-an-lgbtq-character-6518401/# (accessed on August 11, 2024). 

Barsoumian, Nanore. (2020). “Sharp Rise in Hate Speech Threatens Turkey’s Armenians.” Armenian Weekly. October 8, 2020.  https://armenianweekly.com/2020/10/08/sharp-rise-in-hate-speech-threatens-turkeys-armenians/ (accessed on August 18, 2024). 

Bellut, D. (2021). “Turkish government increases pressure on social media.” DW. September 9, 2021. https://www.dw.com/en/turkish-government-increases-pressure-on-social-media/a-59134848 (accessed on August 13, 2024).

Briar, N. (2020). “Negligent Social Media Platforms Breeding Grounds for Turkish Nationalism, Hate Speech.” The Armenian Weekly. October 29, 2020. https://armenianweekly.com/2020/10/29/negligent-social-media-platforms-breeding-grounds-for-turkish-nationalism-hate-speech/ (accessed on August 23, 2024).

Brunwasser, M. (2011). “Turkish Internet filter to block free access to information.” DW. January 6, 2011. https://www.dw.com/en/turkish-internet-filter-to-block-free-access-to-information/a-15123910 (accessed on August 30, 2024).

Buchholz, K. (2021). “The Cost of Internet Shutdowns.” Statista. January 6, 2021. https://www.statista.com/chart/23864/estimated-cost-of-internet-shutdowns-by-country/ (accessed on August 28, 2024).

Bulut, E. & Yoruk, E. (2017). “Digital Populism: Trolls and Political Polarization of Twitter in Turkey.” International Journal of Communication. 11, 4093–4117.

Conditt, Jessica. (2016). “Turkey shuts off internet service in 11 Kurdish cities.” Engadget. October 27, 2016. https://www.engadget.com/2016-10-27-turkey-internet-shutdown-kurdish-cities.html (accessed on August 22, 2024).

Danforth, N. (2020). “The Outlook for Turkish Democracy: 2023 and Beyond.” Washington Institute for Near East Policy. https://www.washingtoninstitute.org/media/632 (accessed on August 12, 2024).

Colborne, Michael & Edwards, Maxim. (2018). “Erdogan’s Long Arm: The Turkish Dissidents Kidnapped from Europe.” Haaretz. August 30, 2018. https://www.haaretz.com/middle-east-news/turkey/.premium-erdogan-s-long-arm-the-turkish-nationals-kidnapped-from-europe-1.6428298 (accessed on August 13, 2024).

Elmas, Tugrulcan. (2023). “Analyzing Activity and Suspension Patterns of Twitter Bots Attacking Turkish Twitter Trends by a Longitudinal Dataset.” ArXiv. April 16, 2023. https://arxiv.org/abs/2304.07907 (accessed on August 23, 2024).

Ergun, D. (2018). “National Security vs. Online Rights and Freedoms in Turkey: Moving Beyond the Dichotomy.” Centre for Economics and Foreign Policy Studies (EDAM). https://edam.org.tr/wp-content/uploads/2018/04/TurkeyNational-Security-vs-Online-Rights.pdf

Gall, C. (2018). “Erdogan’s Plan to Raise a ‘Pious Generation’ Divides Parents in Turkey.” The New York Times. June 18, 2018. https://www.nytimes.com/2018/06/18/world/europe/erdogan-turkey-election-religious-schools.html (accessed on August 3, 2024).

GIT North America. (2016). “Pro-AKP Media Figures Continue to Target Academics for Peace.” Jadaliyya. April 27, 2016. https://www.jadaliyya.com/Details/33208 (accessed on August 23, 2024).

Greenhalgh, Hugo. (2020). “Hundreds of global faith leaders call for ban on LGBT+ conversion therapy.” Reuters. December 16, 2020. https://www.reuters.com/article/world/hundreds-of-global-faith-leaders-call-for-ban-on-lgbt-conversion-therapy-idUSKBN28Q00T/ (accessed on August 22, 2024).

Homberg, J.; Said-Moorhouse, L. & Fox, K. (2017). “47,155 arrests: Turkey’s post-coup crackdown by the numbers.” CNN. April 15, 2017. https://edition.cnn.com/2017/04/14/europe/turkey-failed-coup-arrests-detained/index.html (accessed on August 27, 2024).

Irak, D. & Ozturk, A. E. (2018). “Redefinition of State Apparatuses: AKP’s Formal-Informal Networks in the Online Realm.” Journal of Balkan and Near Eastern Studies. 20(5), 439–458. https://doi.org/10.1080/19448953.2018.1385935

Johnston, C. (2015). “Facebook Blocks Turkish Page That ‘Insults Prophet Muhammad’.” The Guardian. January 27, 2015. https://www.theguardian.com/technology/2015/jan/27/facebook-blocks-turkish-page-insults-prophet-muhammad (accessed on August 27, 2024).

Kalaycioglu, E. (2012). “Kulturkampf in Turkey: The Constitutional Referendum of 12 September 2010.” South European Society & Politics. 17(1), 1–22. https://doi.org/10.1080/13608746.2011.600555

Karaman, Hayreddin. (2013). “Çoğunluğu kale almamak.” Yeni Safak. November 7, 2013. https://www.yenisafak.com/yazarlar/hayrettin-karaman/cogunlugu-kale-almamak-40566 (accessed on September 1, 2024). 

Karaman, Hayreddin. (2020). “İnternet deryası ve pazarı.” Yeni Safak. October 25, 2020. https://www.yenisafak.com/yazarlar/hayrettin-karaman/internet-deryasi-ve-pazari-2056600 (accessed on September 1, 2024).

Karaman, Hayreddin. (2021). “Sosyal medya yalanları.” Yeni Safak. October 10, 2021. https://www.yenisafak.com/yazarlar/hayrettin-karaman/sosyal-medya-yalanlari-2059831 (accessed on September 1, 2024).

Kemp, S. (2024). “Digital 2024: Turkey.” Datareportal. February 23, 2024. https://datareportal.com/reports/digital-2024-turkey (accessed on August 18, 2024).

Kenes, Bulent. (2018). “Instrumentalization of Islam: Hayrettin Karaman’s Role in Erdogan’s Despotism.” Politurco. May 30, 2018. https://politurco.com/instrumentalization-of-islam-hayrettin-karamans-role-in-erdogans-despotism.html (accessed on September 1, 2024).

Kocer, S. & Bozdag, C. (2020). “News-Sharing Repertoires on Social Media in the Context of Networked Authoritarianism: The Case of Turkey.” International Journal of Communication. 14: 5292-5310. https://ijoc.org/index.php/ijoc/article/view/13134

Kuru, A.T. (2012). “The Rise and Fall of Military Tutelage in Turkey: Fears of Islamism, Kurdism, and Communism.” Insight Turkey. 14(2): 37–57.

Kucukgocmen, A. (2021). “Twitter labels Turkish minister’s LGBT post hateful as students protest.” Reuters. February 2, 2021. https://www.reuters.com/article/turkey-security-bogazici-int-idUSKBN2A21C1 (accessed on August 21, 2024).

Martin, Jose Maria. (2021). “Turkey’s judiciary calls on Ali Erbas to tone it down.” Atalayar. September 10, 2021. https://www.atalayar.com/en/articulo/politics/turkeys-judiciary-calls-ali-erbas-tone-it-down/20210909133027152860.html (accessed on August 19, 2024).

Parkinson, J.; Schechner, S. & Peker, E. (2014). “Turkey’s Erdogan: One of the World’s Most Determined Internet Censors.” The Wall Street Journal. May 2, 2014. https://www.wsj.com/articles/SB10001424052702304626304579505912518706936 (accessed on August 12, 2024).

Pearce, J. (2019). “Turkey to introduce new regulations for OTT services.” IBC. August 6, 2019. https://www.ibc.org/trends/turkey-to-introduce-new-regulations-for-ott-services/4239.article (accessed on August 16, 2024).

Rogenhofer, J. M. & Panievsky, A. (2020). “Antidemocratic populism in power: comparing Erdoğan’s Turkey with Modi’s India and Netanyahu’s Israel.” Democratization, 27(8), 1394–1412. https://doi.org/10.1080/13510347.2020.1795135

Sahinkaya, E. (2021). “Erdogan Says Media Are ‘Incomparably Free,’ But Turkish Journalists Disagree.” VoA. October 13, 2021. https://www.voanews.com/a/turkey-erdogan-press-freedom/6269435.html (accessed on August 11, 2024).

Shearlaw, M. (2016). “Turkish journalists face abuse and threats online as trolls step up attacks.” The Guardian. November 1, 2016. https://www.theguardian.com/world/2016/nov/01/turkish-journalists-face-abuse-threats-online-trolls-attacks (accessed on August 1, 2024).

Timucin, Fatma. (2021). 8-Bit Iron Fist: Digital Authoritarianism in Competitive Authoritarian Regimes: The Cases of Turkey and Hungary. (Master’s thesis, Sabanci University, Istanbul, Turkey). https://research.sabanciuniv.edu/42417/ (accessed on August 21, 2024).

Westendarp, L. (2021). “Turkey orders arrests in latest crackdown on Gülen network.” Politico. October 19, 2021. https://www.politico.eu/article/turkey-arrest-crackdown-fethullah-gulen-network/ (accessed on August 11, 2024).

Wilks, A. (2021). “Turkey’s student protests: New challenge for Erdogan.” Al Jazeera. February 6, 2021. https://www.aljazeera.com/news/2021/2/6/turkeys-student-protests-new-challenge-for-erdogan (accessed on August 15, 2024).

Woodward, A. (2019). “Documents Reveal That TikTok Once Banned LGBTQ, Anti-Government Content in Turkey.” Forbes. October 2, 2019. https://www.forbes.com/sites/annabellewoodward1/2019/10/02/documents-reveal-that-tiktok-once-banned-lgbtq-anti-government-content-in-turkey/?sh=5145d5c81d7e (accessed on August 15, 2024).

Yabanci, B. (2019). Work for the Nation, Obey the State, Praise the Ummah: Turkey’s Government-oriented Youth Organizations in Cultivating a New Nation. Ethnopolitics, 20(4), 467–499. https://doi.org/10.1080/17449057.2019.1676536

Yackley, J.A. (2016). “Turkey closes media outlets seized from Gulen-linked owner.” Reuters. March 1, 2016. https://www.reuters.com/article/us-turkey-media-gulen-idUSKCN0W34MLa (accessed on August 15, 2024).

Yerlikaya, T. (2019). “Supervision or Censorship? Turkey’s Regulation on Netflix and Web-Based Broadcasting.” Politics Today. August 29, 2019. https://politicstoday.org/supervision-or-censorship-turkeys-regulation-on-netflix-and-web-based-broadcasting/ (accessed on August 15, 2024).

Yesil, B.; Efe, K.Z. & Khazraee, E. (2017). “Turkey’s Internet Policy After the Coup Attempt: The Emergence of a Distributed Network of Online Suppression and Surveillance.” Internet Policy Observatory. https://repository.upenn.edu/internetpolicyobservatory/22 (accessed on August 25, 2024).

Yildiz, A. (2003). “Politico-Religious Discourse of Political Islam in Turkey: The Parties of National Outlook.” The Muslim World (Hartford), 93(2), 187–209. https://doi.org/10.1111/1478-1913.00020

Yilmaz, I. (2000). “Changing institutional Turkish-Muslim discourses on modernity, West and dialogue.” Congress of The International Association of Middle East Studies (IAMES). Freie Universitat Berlin, Germany, October 2000. 

Yilmaz, I. (2008). “Influence of Pluralism and Electoral Participation on the Transformation of Turkish Islamism,” Journal of Economic and Social Research. 10(2): 43-65.

Yilmaz, I. (2009a). “Predicaments and Prospects in Uzbek Islamism: A Critical Comparison with the Turkish Case.” USAK Yearbook of International Politics and Law, 2: 321-347.

Yilmaz, I. (2009b). “An Islamist Party, Constraints, Opportunities and Transformation to Post-Islamism: The Tajik Case.” Uluslararası Hukuk ve Politika, 5(18), 133–147.

Yilmaz, I. (2009c). “Socio-Economic, Political and Theological Deprivations’ Role in the Radicalization of the British Muslim Youth: The Case of Hizb ut-Tahrir.” European Journal of Economic and Political Sciences 2(1): 89-101.

Yilmaz, I. (2009d). “Was Rumi the Chief Architect of Islamism? A Deconstruction Attempt of the Current (Mis)Use of the Term ‘Islamism’.” European Journal of Economic and Political Studies. 2(2): 71-84.

Yilmaz, I. (2010a). “A Comparative Analysis of Anti-Systemic Political Islam: Hizb ut-Tahrir’s Influence in Different Political Settings (Britain, Turkey, Egypt and Uzbekistan).” In: Michelangelo Guida and Martin Klus (eds) Turkey and The European Union: Challenges and Accession Perspectives. Bruylant, 253-268.

Yilmaz, I. (2016b). “The Experience of the AKP: From the Origins to Present Times.” In: Alessandro Ferrari and James Toronto (eds) Religions and Constitutional Transitions in the Muslim Mediterranean: The Pluralistic Moment. Abingdon, Oxon; New York, NY: Routledge, 2016, 162-175.

Yilmaz, I. (2018a). “Islamic Populism and Creating Desirable Citizens in Erdogan’s New Turkey.” Mediterranean Quarterly. 29(4): 52-76. https://doi.org/10.1215/10474552-7345451

Yilmaz, I. & Bashirov, G. (2018). “The AKP after 15 years: emergence of Erdoğanism in Turkey.” Third World Quarterly, 39(9): 1812–1830. https://doi.org/10.1080/01436597.2018.1447371

Yilmaz, I. (2019c). “Potential Impact of the AKP’s Unofficial Political Islamic Law on the Radicalisation of the Turkish Muslim Youth in the West.” In: Mansouri F., Keskin Z. (eds) Contesting the Theological Foundations of Islamism and Violent Extremism. Middle East Today. Palgrave Macmillan, Cham, 2019, 163-184. https://doi.org/10.1007/978-3-030-02719-3_9

Yilmaz, I. (2020a). “Islamist Populism in Turkey, Islamist Fatwas and State Transnationalism.” In: Shahram Akbarzadeh (ed) The Routledge Handbook of Political Islam, 2nd Edition. London and New York: Routledge.

Yilmaz, I. (2021a). Creating the Desired Citizens: State, Islam and Ideology in Turkey. Cambridge and New York: Cambridge University Press. 

Yilmaz, I. (2021b). “Islamist Populism in Turkey, Islamist Fatwas and State Transnationalism.” In: Shahram Akbarzadeh (ed) The Routledge Handbook of Political Islam, 2nd Edition, 170-187. London and New York: Routledge.

Yilmaz, I. & Albayrak, I. (2022). Populist and Pro-Violence State Religion: The Diyanet’s Construction of Erdoğanist Islam in Turkey. Singapore: Palgrave Macmillan.

Yilmaz, I.; Demir, M. & Morieson, N. (2021a). “Religion in Creating Populist Appeal: Islamist Populism and Civilizationism in the Friday Sermons of Turkey’s Diyanet.” Religions. 12: 359. https://doi.org/10.3390/rel12050359/

Yilmaz, I.; Shipoli, E. & Demir, M. (2021b). “Authoritarian resilience through securitization: an Islamist populist party’s co-optation of a secularist far-right party.” Democratization, 28(6), 1115–1132. https://doi.org/10.1080/13510347.2021.1891412

Yilmaz, I. & Erturk, O. (2022). “Authoritarianism and necropolitical creation of martyr icons by Kemalists and Erdoganists in Turkey.” Turkish Studies, 23(2), 243–260. https://doi.org/10.1080/14683849.2021.1943662

Yilmaz, I. & Erturk, O. F. (2021). “Populism, violence and authoritarian stability: necropolitics in Turkey.” Third World Quarterly, 42(7), 1524–1543. https://doi.org/10.1080/01436597.2021.1896965

Yılmaz, Z. & Turner, B. S. (2019). “Turkey’s deepening authoritarianism and the fall of electoral democracy.” British Journal of Middle Eastern Studies, 46(5), 691–698. https://doi.org/10.1080/13530194.2019.1642662

Photo: Som YuZu.

EU Employment Law and the AI Act: A Policy Brief Putting the Human Back in ‘Human-Centric’ Policy


Please cite as:

Pretorius, Christo. (2024). “EU Employment Law and the AI Act: A Policy Brief Putting the Human Back in ‘Human-Centric’ Policy.” Policy Papers. European Center for Populism Studies (ECPS). September 11, 2024. https://doi.org/10.55271/pop0002

 

This policy paper analyzes the European Union’s (EU) AI Act, aimed at regulating Artificial Intelligence (AI) through four risk classifications related to data protection, privacy, security, and fundamental rights. While the Act establishes regulatory frameworks, it neglects employment security, a critical factor behind public mistrust of AI. The paper warns that failure to address this issue could deepen socio-economic inequalities and lead to political unrest. Recommendations include promoting collective negotiation between workers and employers, advocating for legislation on redundancies linked to AI, and launching information campaigns to educate workers, thus ensuring fair working conditions and improving trust in AI technology.

By Christo Pretorius

Executive Summary

The European Union (EU) is attempting to regulate the deployment of Artificial Intelligence (AI) through the recently passed AI Act. Overall, the Act outlines four distinct classifications for AI systems, categorizing them by the risk they pose to an individual’s data protection, privacy, security, and fundamental rights. It further provides regulations and guidance to member states on each category and calls for the establishment of national and EU-level regulatory bodies to enforce the Act. Ultimately, however, the Act overlooks the critical issue of employment security, which is the main cause behind mistrust of AI. This gap could exacerbate socio-economic inequalities and fuel political unrest in the short to long term if it is not addressed promptly.

Research indicates that AI will have a disruptive effect on employment overall as certain types of work are automated and augmented, but the effects will be felt most in clerical, secretarial, and para-professional roles, which poses a risk to vulnerable groups including women and those with lower educational attainment. There is a pressing need for proactive measures to mitigate the harmful effects of the technology’s implementation on workers and their families, specifically the protection against unjustified dismissal and the assurance of fair working conditions. The following recommendations are proposed to address the Act’s shortcomings:

    Collective Negotiation: Encourage cooperation between workers, employers, and worker associations to assess AI’s impact on jobs. This could lead to agreements on redeployment, education opportunities, or redundancy notices, providing workers with clearer timelines and reducing workplace disruption.

    Advocacy for New Legislation: Push for legislation that mandates notice periods for redundancies due to technological innovation, building on the EU Directive on Transparent and Predictable Working Conditions. The International Labor Organization’s 1982 Termination of Employment Convention offers a valuable template for such legislation.

    Information Campaigns: Launch campaigns to educate workers on AI systems, their potential benefits, and available upskilling opportunities. These efforts would enhance trust in AI, aligning with the AI Act’s goal of ensuring human-centric, safe, and lawful AI deployment.

         

Context: The Problem with the AI Act

The discussion about AI regulation in the European Union (EU) began with Ursula von der Leyen’s 2019-2024 agenda for Europe, A Union that Strives for More (2019). It stated that she would ‘put forward legislation for a coordinated European approach on the human and ethical implications of Artificial Intelligence’ (von der Leyen, 2019: 13). What followed was a call for greater focus on enabling more investment in, and better coordination of, the development and deployment of AI in the EU, alongside a call for a clear definition of what high-risk AI systems are (General Secretariat of the Council to Delegations, 2020). In parallel, the Coordinated Plan on Artificial Intelligence was published to help foster trust in AI systems, yet it failed to address the real impact such a disruptive technology would have on the world of work (European Commission, 2021). The purpose of this policy brief is to advocate for greater attention to employment law, so that action may be taken to ease concerns over AI-related job loss and thus foster greater trust in this new technology.

A report from the International Labor Organization estimated that the introduction of AI systems would have an overall disruptive effect worldwide, highlighting that an important share of clerical, secretarial, and para-professional jobs would be most affected (Gmyrek et al., 2023). These findings are supported by similar ones from PricewaterhouseCoopers International Limited (PWC), the US-based National Bureau of Economic Research, and the Pew Research Center, which also found that women and individuals with lower levels of education are most vulnerable to job loss due to automation (Hawksworth, Berriman & Goel, 2018; Acemoglu & Restrepo, 2021; Kochhar, 2023). Although this is an evolving issue, as AI indeed has the potential to increase economic growth and improve lives, the disruptive impact of AI systems on the world of work has yet to be felt. It is therefore important to take the necessary steps now to mitigate the harmful effects this technology can have on workers and their families, and to provide a safety net to individuals during this period of transition.

Currently, the EU’s AI Act states that it seeks to ensure ‘a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights (2012) of the European Union (hereafter referred to as the ‘Charter’), including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union…’. However, among the fundamental rights left unaddressed are those of Articles 30 (Protection in the event of unjustified dismissal) and 31 (Fair and just working conditions) of the Charter (European Union, 2012). The focus on adopting AI throughout the EU, whilst highlighting the issues of data protection, privacy, and security of individuals in the long term, has left employment concerns unaddressed.

The EU has acknowledged that this area should be legislated on in the future, stating in its 2022 report on artificial intelligence in the digital age: ‘[The European Parliament] emphasises that the use of AI in this area gives rise to a number of ethical, legal and employment related challenges… [and] stresses that AI is currently already substituting or complementing humans in a subset of tasks but that it is not yet having detectable significant aggregate labour market consequences’ (European Parliament, 2022). The report also highlights that strong links between AI and rising socio-economic inequality have been found, and researchers warn that unregulated AI will further widen the wealth gap within society, a finding supported by other academic publications (Rotman, 2022; Bushwick, 2023). MEP Brando Benifei has indicated that a proposal for a directive on AI in the workplace is something to be discussed in the future, but at present there is little information from Benifei or anyone else on what policy discussions in this area will look like (Publyon, 2024). This uncertainty continues following von der Leyen’s announcement that more research investments will be made in AI, leaving commentators to speculate about what will actually be funded (Wold, 2024). Much as after the events of 2008, people who feel left behind by the increasing automation and augmentation of the workplace could be drawn into more extreme populist politics, which is why delaying discussions on this topic is problematic (Steiner et al., 2023). While the EU continues to refine policy, the implementation of AI in the workplace is happening now, and the situation for workers could change rapidly as new technology hits the market. Just as the EU attempted to get ahead of AI by defining it, it should attempt to get ahead of employment concerns before they evolve past being concerns alone.

The AI Act in Brief

To understand the shortcomings of the AI Act (2024), it is helpful to first summarize the contents of the regulation. The European Union’s AI Act, signed on the 13th of June 2024, seeks to create rules to govern the development, deployment, and use of artificial intelligence (AI) systems within the Union. The rules define four levels of risk for AI systems, offering a description of what each category is and indicating what systems are in place to regulate them within the common market:

Unacceptable Risk: These AI systems are banned within the EU, except in limited circumstances.

    [Article 5.1(a/b)] Manipulative AI that can deceive, subvert, or impair autonomy, decision-making, and free choice. This includes AI that may exploit disadvantaged persons, whether through socio-economic vulnerabilities or disability. Notable exceptions are AI used in the context of medical treatment, such as psychological treatment of a mental disease or physical rehabilitation, or in advertising.

    [Article 5.1(c)] AI systems that provide social scoring, as it may lead to discrimination and exclusion.

    [Article 5.1(d)] Risk-assessment or predictive AI systems in the context of law enforcement.

    [Article 5.1(e)] AI systems that scrape footage to expand facial recognition databases.

    [Article 5.1(g/h)] Biometric categorization systems based on natural persons’ biometric data, such as an individual’s face or fingerprint, used to deduce or infer an individual’s political opinions, trade union membership, religious or philosophical beliefs, race, sex life, or sexual orientation. The notable exception to this rule is biometric categorization systems employed by law enforcement agencies for anti-terrorism and missing-persons cases.

High Risk: [Preamble (paragraph 48)] These systems are defined as having a negative effect on safety or on the following fundamental rights, which in this context include a person’s access to education, employment, public or private services, legal representation, and administrative or democratic processes:

    The protection of personal data,
    Freedom of expression and information,
    Freedom of assembly and of association,
    The right to non-discrimination,
    The right to an effective remedy and a fair trial,
    Consumer protection,
    The right of defense and the presumption of innocence,
    The rights of persons with disabilities,
    Gender equality,
    Intellectual property rights,
    Workers’ rights,
    The right to education,
    The right to good administration.

High-risk systems also include those related to critical infrastructure and biometric identification systems.

Limited Risk: [Preamble (paragraph 53)] AI systems that do not materially influence the outcome of decision-making, and/or augment tasks that are either automated or conducted by humans, are considered limited risk. This category is flagged as needing further guidelines in the future.

Minimal Risk: Most AI systems currently available fall under this category: they pose minimal risk and are therefore not regulated, nor will they be moving forward.

The regulation achieved most of its mandated aims, creating clear definitions for the different levels of risk whilst focusing on the impact different AI systems could have on individuals.

Recommendations

The need to deal with job loss and employment law concerns is real and present, and must be addressed in a timely manner during these early stages of AI implementation. Although further training opportunities are an avenue the EU is pursuing, in order to contribute to the Commission’s call for the development of an ecosystem of trust by proposing a legal framework for trustworthy AI, this paper proposes three avenues through which the issue of employment security could be addressed:

    Collective Negotiation: Close cooperation between worker associations, the groups they represent, and employers can allow for investigation of the potential disruptive impact of AI systems on various professions. This will allow the parties to make informed decisions and take steps toward collective agreements providing for the redeployment of workers, the advertisement or provision of further education opportunities, or a guaranteed period of redundancy notice with regard to the implementation of AI systems. Similarly, management could make available to workers a clear AI implementation plan, so that workplace disruption is reduced and workers know in advance the time they have should their work be made redundant.

    Advocacy for the Adaptation or Adoption of New Legislation: If no provisions are currently in place, countries within the EU must take active steps to create or adapt legislation that gives workers a notice period before they are made redundant due to technological innovation. The EU Directive on Transparent and Predictable Working Conditions in the European Union (2019) set a precedent that the EU was willing to use its competency in the area of social rights to address new forms of employment and protect EU workers from unpredictable employment (Official Journal of the European Union, 2019).

    Given the close relationship between the International Labor Organization (ILO) and the EU, the ILO’s 1982 Termination of Employment Convention (C158) would provide a good basis for text that could be incorporated into the EU Directive on Transparent and Predictable Working Conditions in the European Union (Pretorius, 2023). Only nine nations have ratified the convention, which contains the following article that could either be incorporated into EU law or provide an example to follow:

    ‘[Article 13.1.] When the employer contemplates terminations for reasons of an economic, technological, structural or similar nature, the employer shall:

    (a) provide the workers’ representatives concerned in good time with relevant information including the reasons for the terminations contemplated, the number and categories of workers likely to be affected and the period over which the terminations are intended to be carried out;

    (b) give, in accordance with national law and practice, the workers’ representatives concerned, as early as possible, an opportunity for consultation on measures to be taken to avert or to minimise the terminations and measures to mitigate the adverse effects of any terminations on the workers concerned such as finding alternative employment’ (ILO, 1982).

    Information Campaigns: At the moment, the uncertainty surrounding AI systems in the workplace is fuelling distrust in the technology (Chakravorti, 2024). Campaigns that inform workers not only of the capabilities of implemented AI systems and how to utilize their potential, but also of opportunities to upskill, are essential moving forward. Regardless of how these campaigns are run, giving workers more accessible information would go a long way towards realizing the ‘human-centric’ ideals of the AI Act: ‘so that people can trust that the technology is used in a way that is safe and compliant with the law, including the respect of fundamental rights’ (European Commission, 2021).


(*) Christo Pretorius graduated with an MSc in International Public Policy and Diplomacy from University College Cork and was the first student to receive a postgraduate “Student of the Year” award from the Department of Government. His dissertation was published and acquired by the Bar of Ireland’s Law Library and has gone on to support Irish policy makers. Stemming from his undergraduate degree in Ancient and Medieval History and Culture from Trinity College Dublin, his research interests include the mechanisms of authoritarian power and control, and democratic backsliding, particularly when viewed through a historical lens.


References

— (1982). C158 – Termination of Employment Convention. International Labour Organization. 1982 (No. 158). https://normlex.ilo.org/dyn/normlex/en/f?p=NORMLEXPUB:12100:0::NO::P12100_ILO_CODE:C158 (accessed on August 18, 2024).

— (1997). Organisation of Working Time Act 1997. Irish Statute Book. https://www.irishstatutebook.ie/eli/1997/act/20/section/23/enacted/en/html#sec23 (accessed on August 17, 2024).

— (2012). Charter of Fundamental Rights of the European Union. 326/02. http://data.europa.eu/eli/treaty/char_2016/oj (accessed on August 18, 2024).

— (2020). Special meeting of the European Council (1 and 2 October 2020) – Conclusions. General Secretariat of the Council to Delegations. Brussels. https://www.consilium.europa.eu/media/45910/021020-euco-final-conclusions.pdf

— (2021). Coordinated Plan on Artificial Intelligence 2021 Review. European Commission. https://digital-strategy.ec.europa.eu/en/library/coordinated-plan-artificial-intelligence-2021-review (accessed on August 19, 2024).

— (2021). Document 52021PC0206: Proposal for a Regulation of The European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (COM(2021) 206 final). European Commission. https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:52021PC0206 (accessed on August 17, 2024).

— (2022). Report on artificial intelligence in a digital age (2020/2266(INI)). European Parliament. https://www.europarl.europa.eu/doceo/document/A-9-2022-0088_EN.html (accessed on August 17, 2024).

— (2024). Artificial intelligence. European Commission. https://www.consilium.europa.eu/en/policies/artificial-intelligence/ (accessed on August 17, 2024).

— (2024). Corrigendum to Regulation (EU) 2024/… of The European Parliament and of the Council of laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (cor01). European Parliament. https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138-FNL-COR01_EN.pdf

— (2024). The future of AI in the European Union: a chat with the AI Act’s co-rapporteur Brando Benifei. Publyon. https://publyon.com/the-future-of-ai-in-the-european-union-a-chat-with-the-ai-acts-co-rapporteur-brando-benifei/ (accessed on August 17, 2024).

Acemoglu, D. & Restrepo, P. (2021). Tasks, Automation, and the Rise in US Wage Inequality (NBER Working Paper 28920). National Bureau of Economic Research. https://doi.org/10.3386/w28920

Al Naam, Y.A.; Elsafi, S.; Al Jahdali, M.H.; Al Shaman, R.S.; Al-Qurouni, B.H. & Al Zahrani, E.M. (2022). “The Impact of Total Automation on the Clinical Laboratory Workforce: A Case Study.” Journal of Healthcare Leadership. 14, pp. 55-62. https://doi.org/10.2147/JHL.S362614

Chakravorti, B. (2024). AI’s Trust Problem. Harvard Business Review. https://hbr.org/2024/05/ais-trust-problem (accessed on August 17, 2024).

Gmyrek, P.; Berg, J. & Bescond, D. (2023). Generative AI and jobs: A global analysis of potential effects on job quantity and quality (ILO Working Paper 96). International Labour Organization. https://doi.org/10.54394/FHEM8239

Hawksworth, J.; Berriman, R. & Goel, S. (2018). Will robots really steal our jobs? An international analysis of the potential long term impact of automation. PWC. https://www.pwc.co.uk/economic-services/assets/international-impact-of-automation-feb-2018.pdf

Jensen, C.L.; Thomsen, L.K.; Zeuthen, M.; Johnsen, S.; Jashi, R.E.; Nielsen, M.F.B.; Hemstra, L.E. & Smith, J. (2024). “Biomedical laboratory scientists and technicians in digital pathology – Is there a need for professional development?” Digital Health. 10. https://doi.org/10.1177/20552076241237392

Kochhar, R. (2023). Which U.S. Workers Are More Exposed to AI on Their Jobs? Pew Research Center. https://www.pewresearch.org/social-trends/wp-content/uploads/sites/3/2023/07/st_2023.07.26_ai-and-jobs.pdf

Pretorius, P.C. (2023). “Can Irish Industrial Relations Still be Called ‘Voluntarist’? An investigation into Irish Employment Law.” Irish Employment Law Journal, 20(1), pp. 11-21.

von der Leyen, U. (2019). A Union that strives for more: My agenda for Europe: political guidelines for the next European Commission 2019-2024. Luxembourg: Publication Office of the European Union. https://commission.europa.eu/system/files/2020-04/political-guidelines-next-commission_en_0.pdf

Wold, J.F. (2024). “Von der Leyen gives nod to €100 billion ‘CERN for AI’ proposal.” Euractiv. July 25, 2024. https://www.euractiv.com/section/digital/news/von-der-leyen-gives-nod-to-e100-billion-cern-for-ai-proposal/ (accessed on August 10, 2024).

Photo: Shutterstock.

Digital Authoritarianism in Turkish Cyberspace: A Study of Deception and Disinformation by the AKP Regime’s AKtrolls and AKbots


Please cite as:
Yilmaz, Ihsan & Kenes, Bulent. (2023). “Digital Authoritarianism in Turkish Cyberspace: A Study of Deception and Disinformation by the AKP Regime’s AKtrolls and AKbots.” Populism & Politics (P&P). European Center for Populism Studies (ECPS). November 13, 2023. https://doi.org/10.55271/pp0026



Abstract

This article explores the evolving landscape of digital authoritarianism in Turkish cyberspace, focusing on the deceptive strategies employed by the AKP regime through AKtrolls, AKbots, and hackers. Initially relying on censorship and content filtering, the government has progressively embraced more sophisticated methods, including the weaponization of legislation and regulatory bodies to curtail online freedoms. In the third generation of information controls, a sovereign national cyber-zone marked by extensive surveillance practices has emerged. Targeted persecution of critical netizens, coupled with (dis)information campaigns, shapes the digital narrative. Central to this is the extensive use of internet bots, orchestrated campaigns, and AKtrolls for political manipulation, amplifying government propaganda and suppressing dissenting voices. As Turkey navigates a complex online landscape, the study contributes insights into the multifaceted tactics of the Erdogan regime’s digital authoritarianism.

By Ihsan Yilmaz & Bulent Kenes

                        Over the last decade, authoritarian governments have co-opted social media, compromising its potential for promoting individual liberties (Yilmaz and Yang, 2023). In recent years, the Turkish government, led by President Recep Tayyip Erdogan, has staunchly endeavored to control online platforms and manipulate digital spaces to consolidate power, stifle dissent, and shape public opinion. Given the large online user base and the declining influence of traditional media, the internet has become a crucial platform for opposition voices. In response, Erdogan’s “authoritarian Islamist populist regime” (Yilmaz and Bashirov, 2018) has implemented various measures to regulate and monitor the digital space and suppress dissent (Bellut, 2021).

                        Turkey’s domestic internet policy under the Erdogan regime has shown a convergence towards information control practices observed in countries like Russia and China, despite Turkey’s nominal compliance with Euro-Atlantic norms on cyber-security (Eldem, 2020). This convergence is characterized by increasing efforts to establish “digital sovereignty” and prioritize information security, often serving as a pretext for content control and internet censorship (Eldem, 2020). The Erdogan regime takes a neo-Hobbesian view of cyberspace and seeks to exert sovereignty in this realm through various information controls (Eldem, 2020). Under the Erdogan regime, there has been an increase in the surveillance of online activities, leveraging the surveillance and repression tools provided by social media and digital technologies. Once the regime established its hegemony over the state, it expanded its surveillance tactics to govern society. 

                        In Turkey, a combination of actors including riot police, social media monitoring agents, intelligence officers, pro-government trolls, hackers, secret witnesses, informants, and collaborators work together to identify and target individuals deemed “risky.” This surveillance apparatus follows the hierarchical structure of the Turkish authoritarian state, with President Erdogan overseeing its developments (Topak, 2019).

                        The article examines the Turkish government’s pervasive use of trolls, internet bots, orchestrated campaigns, and transnational manipulations that have shaped the country’s online environment. Social media platforms, especially Twitter, are central to these manipulation efforts in Turkey. While Twitter has taken action against thousands of accounts associated with the ruling party’s youth wing, the government’s pushback against these removals highlights the significance of these online campaigns.

                        The use of fake accounts, compromised profiles, and silent bots further deepens the complexities of digital authoritarianism in Turkey. These accounts serve as vehicles for spreading disinformation, astroturfing, and manipulating social media trends. While efforts have been made to identify and remove such accounts, the adaptability of these manipulative actors poses a significant challenge. Many of these bots remain dormant for extended periods, resurfacing strategically to create and promote fake trends while evading conventional detection methods (Elmas, 2023). These software applications play a pivotal role in amplifying government propaganda, countering opposition discourse, and creating an illusion of widespread support. From replicating messages to retweeting content across hundreds of accounts, these automated bots have become instrumental in shaping online narratives and suppressing dissenting voices (Yesil et al., 2017; Eldem, 2023).

                        Digital Authoritarianism and Information Controls

                        A government-appointed trustee took over the Zaman daily in Istanbul, Turkey, on March 4, 2016. Photo: Shutterstock.

                        Digital authoritarianism is the extensive use of information control measures by authoritarian regimes to shape and influence the online experiences and behaviors of the public (Howells and Henry, 2021). These regimes have adeptly adapted to the mechanisms of internet governance by exploiting the vast reach of new media platforms. They employ various forms of censorship, both overt and covert, to suppress dissent and control the dissemination of information.

                        The literature on digital authoritarianism extensively explores how China has effectively utilized digital technology to maintain and strengthen its rule (Polyakova & Meserole, 2019; Dragu & Lupu, 2021; Sherman, 2021). While China relies on sophisticated surveillance systems and targeted persecution of individuals, the people of Russia experience the impact of digital authoritarianism through internet censorship, manipulation of information flow, the spread of disinformation, and the mobilization of trolls and automated bots (Yilmaz, 2023; Timucin, 2021).

                        In the realm of digital authoritarianism, disinformation has become a favored tool (Diamond, 2021; Tucker et al., 2017). Authoritarian regimes obscure information, engage in deception, and manipulate the context to shape public opinion (Bimber and de Zúñiga, 2020). It is important to note that digital authoritarianism is not a uniform strategy; different regimes adopt various approaches. Some directly restrict access to the internet, while others rely on heavy censorship and disinformation campaigns (Timucin, 2021; Polyakova & Meserole, 2019). 

                        The Russian model of digital authoritarianism operates with subtlety. Manipulating social media networks is easier to accomplish and maintain compared to comprehensive monitoring systems (Timucin, 2021). In these cases, the open nature of social media becomes a double-edged sword, enabling the widespread distribution of both accurate information and misinformation while amplifying voices from various ends of the political spectrum (Brown et al., 2012).

                        Digital Authoritarianism and Information Controls in Turkey

                        During the third term of the AKP (Justice and Development Party), beginning in 2011, Turkey witnessed a shift towards increasing populist authoritarianism. Since then, dissidents and critics of the AKP government have been framed and demonized as enemies of the Turkish people (Yilmaz and Bashirov, 2018).

                        Initially, the government targeted conventional media outlets, subjecting them to various tactics employed by President Erdogan (Yanardagoglu, 2018). Many critical media organizations were forced out of business, and their assets were taken over by pro-government entities. Persecution intensified both before and after the 2016 state of emergency, leading to the confiscation of media groups such as the Gulen-linked Samanyolu Group, Koza Ipek Group, and Feza Publications (Timucin, 2021; BBC, 2016). These actions effectively created a clientelist relationship between the government and the media, as anti-government entities were closed and transferred or sold to pro-government supporters (Yilmaz and Bashirov, 2018).

                        The government’s dominance over traditional media outlets served as the foundation for Erdogan’s digital authoritarianism, granting the government control over the “formal” form of digital media (Timucin, 2021). Faced with limitations in conventional media, the public turned to online sites, alternative media, and social media platforms in search of reliable news and information.

                        The Gezi Park protests in 2013 marked a significant moment in Turkey’s social movements and the role of social media activism. These protests initially started as a peaceful sit-in at Gezi Park to oppose the demolition of trees for a shopping mall construction but quickly escalated into one of the largest civil unrests in Turkey’s recent history. During the early days of the protests, traditional media outlets did not provide adequate coverage, leading people to seek alternative sources of information. Social media platforms played a crucial role as a source of news, organization, and political expression, particularly among urban, tech-savvy youth (Yesil et al., 2017). The number of Twitter users in Turkey skyrocketed from an estimated 2 million to 12 million during the protests (Ozturk, 2013; Varnalı and Görgülü, 2015). Social media allowed for a more decentralized and inclusive form of communication during the protests, as it facilitated the rapid dissemination of information and bypassed traditional media gatekeepers (O’Donohue et al., 2020). 

                        The corruption scandal in December 2013 was another event where social media played a crucial role in shaping public opinion and disseminating information. Government opponents utilized social media platforms to share incriminating evidence of corruption involving President Erdogan, his party, and his cabinet. In response, the ruling AKP adopted a heavy-handed approach, detaining Twitter users and implementing bans on platforms such as Twitter and YouTube. The government positioned social media as a threat to Turkey’s national unity, state sovereignty, social cohesion, and moral values (Yesil et al., 2017; Kocer, 2015).

                        In recent years, Turkey has made efforts to assert control over social media platforms and internet service providers. In 2020, a “disinformation law” was introduced, pressuring these entities to remove “disinformation” from online platforms. Proposed changes to Article 19 in 2022 aim to enhance control over cyberspace, granting more powers to the Information and Communication Technologies Authority (BTK) to regulate the internet. These developments indicate Turkey’s increasing efforts to curb the flow of information, maintain a favorable narrative, and suppress dissenting voices, potentially impacting freedom of expression and the right to access information in the country.

                        The increasing level of digital governance in Turkey has manifested in various forms, leading to significant consequences. Content regulation has played a crucial role in the government’s efforts to control the internet. Bodies such as BTK have been granted the power to block access to online content deemed threatening. This has created a climate of increased pressure on internet service providers to comply with the state’s requests regarding content removal and access to personal user data. Failure to adhere to these obligations can result in penalties or even the revocation of licenses. There are also speculations that service providers may face bandwidth reduction and limitations on advertisements as a means of exerting further control.

                        Furthermore, cybercrime provisions intended to safeguard against hacking and online harassment have been instrumentalized by the state to gather user information for investigation, prosecution, and cooperation with “international entities.” Individuals found guilty of online offenses can be brought to court and punished under specific articles of the Turkish Penal Code.

                        In summary, the government introduced legal restrictions, content removal requests, website and social media platform shutdowns, prosecution of internet users, state surveillance, and disinformation campaigns. These measures have resulted in a significant decline in internet freedom and the rise of digital authoritarianism in Turkey between 2013 and the controversial coup attempt in July 2016.

                        Technical Instruments and Surveillance Methods to Monitor and Control Cyberspace

                        The Erdogan regime has employed various technical instruments and surveillance methods to monitor and control online activities. Reports indicate that Western companies provided spyware tools to Turkish security agencies, which have been in use since at least 2012. These tools include Deep Packet Inspection (DPI) technology, enabling surveillance of online communications, blocking of online content, and redirecting users to download spyware-infected versions of software like Skype and Avast. Additionally, the Remote-Control System and FinFisher spyware programs are used for extracting emails, files, passwords, and controlling audio and video recording systems on targeted devices (Privacy International, 2014; Yesil et al., 2017; CitizenLab, 2018; AccessNow, 2018).

                        The Erdogan regime also established a “Social Media Monitoring Unit,” a specialized police force responsible for monitoring citizens’ social media posts. There is also a group known as AKtrolls, who can act as informants and report social media posts of targeted users to security agencies, potentially leading to arrests. The AKP has also formed a team of “white hat” hackers, ostensibly for enhancing Turkey’s cyber-defense. Furthermore, civilian informants have been mobilized for internet surveillance, with ordinary citizens encouraged to spy on each other online, creating a culture of “online snitching” (Yesil et al., 2017). This pervasive surveillance approach, utilizing both software and social-user-based surveillance, creates a climate of self-censorship and vigilance among users (Saka, 2021; Morozov, 2012).

                        The National Intelligence Organization of Turkey (MİT) has been granted extended surveillance powers, both online and offline, in the aftermath of the Gezi Park protests. Law No. 6532 allowed MİT to collect private data and information about individuals without a court order from various entities. The law also granted legal immunity to MİT personnel and criminalized the publication and broadcasting of leaked intelligence information. MİT operates within the authoritarian state’s chain of command. Given MİT’s lack of autonomy, it is highly likely that the Erdogan regime exploits the agency’s expanded powers for unwarranted surveillance, political witch hunts of dissidents, journalists, and even ordinary online users, aiming to suppress any online criticism (Yesil, 2016).

                        In October 2015, the AKP implemented the “Rewards Regulation,” which offered monetary rewards to informants who assisted security agencies in the arrest of alleged terror suspects. This measure encouraged journalists, NGOs, and citizens to monitor online communications and report dissenting individuals (Zagidullin et al., 2021).

                        The Turkish police introduced a smartphone app and a dedicated webpage that allowed citizens to report social media posts they deemed terrorist propaganda. The main opposition party claimed that the police prepared summaries of proceedings for 17,000 social media users and were attempting to locate the addresses of 45,000 others (Eldem, 2023). The state of emergency (SoE) decrees following the controversial coup attempt in 2016 further tightened the government’s control over the internet. Decree 670 granted “all relevant authorities” access to all forms of information, digital or otherwise, about alleged coup suspects and their families. Decree 671 empowered the government to take any necessary measures regarding digital communications provided by ISPs, data centers, and other relevant private entities in the name of national security and public order. Finally, Decree 680 expanded police powers to investigate cybercrime by requiring ISPs to share personal information with the police without a court order (Topak, 2019; Yesil et al., 2017; Eldem, 2023).

                        Prior to Turkey’s presidential and parliamentary elections in 2023, Turkish prosecutors initiated investigations into social media users accused of spreading disinformation aiming to create fear, panic, and turmoil in society. The Ankara Chief Public Prosecutor’s Office launched an investigation into the Twitter account holders who allegedly collaborated to spread disinformation, potentially reaching around 40 million social media users (Turkish Minute, 2023).

                        The Erdogan regime has significantly expanded its online censorship toolkit through legislative amendments passed in October 2022 (HRW, 2023). As an example of the restrictions imposed, on May 14, 2023, Twitter announced that it was restricting access to certain account holders in Turkey to ensure the platform remains available to the people of Turkey.

                        AKtrolls 

                        The Erdogan regime responded to critical voices on social media during the Gezi Protests by employing political trolls. This strategy of political trolling, whether carried out by humans or algorithms, is closely associated with Russia and has been adopted by AKP’s trolls, known as AKtrolls, who exhibit similarities to Kremlin-operated networks. The deep integration of political trolling within the political system and mainstream media in Turkey has been highlighted in a study by Karatas and Saka (2017). These trolling practices are facilitated through the collaboration of political institutions and media outlets. Trolls act as precursors, disseminating propaganda and testing public opinion before mainstream political figures introduce favored populist policies and narratives.

                        The AKP’s troll army was initially established by the vice-chairman of the AKP and primarily consisted of members of AKP youth organizations. Over time, it has grown into an organization of 6,000 individuals, with 30 core members responsible for setting trending hashtags that other members then promote. Many of these trolls are graduates of pro-AKP Imam Hatip schools. These trolls receive financial compensation, and there are indications that pro-AKP entities, such as TRT (Turkish Radio and Television) and the mobile phone operator Turkcell, provide additional benefits to successful trolls.

                        The first network map of AKtrolls was provided by Hafiza Kolektifi, a research collective based in Ankara, in October 2015. This map revealed the close connections among 113 Twitter accounts, including not only ordinary trolls but also politicians, advisors to President Erdogan, and pro-government journalists. The map was created based on the analysis of a popular and aggressive troll named @esatreis, who was identified as a youth member of the AKP. By monitoring the users followed by @esatreis using the Twitter Application Programming Interface (API) and conducting in-depth network analysis, two distinct groups were identified. The first group consisted of politicians, Erdogan’s advisors, and pro-government journalists, while the second group comprised anonymous trolls using pseudonyms. The study demonstrated that @esatreis acted as a bridge between the troll group and the politicians/journalists, with Mustafa Varank, an advisor to Erdogan and currently the Minister of Industry and Technology, serving as a central connection node between these two groups (Karatas & Saka, 2017).
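                        The network-mapping approach described above — crawling an account’s followings via the Twitter API and locating the account that bridges the anonymous troll cluster and the politician/journalist cluster — can be sketched as a toy cut-vertex check on a follow graph. Everything below (account names, follow edges) is a hypothetical illustration, not Hafiza Kolektifi’s actual data or method:

```python
from collections import defaultdict, deque

def components(nodes, edges, excluded=None):
    """Count connected components of an undirected follow graph,
    optionally pretending one account has been removed."""
    excluded = excluded or set()
    adj = defaultdict(set)
    for a, b in edges:
        if a not in excluded and b not in excluded:
            adj[a].add(b)
            adj[b].add(a)
    seen, count = set(), 0
    for n in nodes:
        if n in excluded or n in seen:
            continue
        count += 1
        queue = deque([n])
        while queue:
            cur = queue.popleft()
            if cur in seen:
                continue
            seen.add(cur)
            queue.extend(adj[cur] - seen)
    return count

def bridge_accounts(nodes, edges):
    """Accounts whose removal splits the graph into more components --
    candidates for bridging otherwise separate clusters."""
    base = components(nodes, edges)
    return [n for n in nodes if components(nodes, edges, excluded={n}) > base]

# Hypothetical follow graph: a politician/journalist cluster, an anonymous
# troll cluster, and a single account connecting the two (the role the
# @esatreis analysis attributed to bridge accounts).
nodes = ["advisor", "journalist1", "journalist2",
         "bridge", "troll1", "troll2", "troll3"]
edges = [("advisor", "journalist1"), ("advisor", "journalist2"),
         ("journalist1", "journalist2"),
         ("bridge", "advisor"), ("bridge", "journalist1"),
         ("bridge", "troll1"), ("bridge", "troll2"),
         ("troll1", "troll2"), ("troll2", "troll3"), ("troll1", "troll3")]

print(bridge_accounts(nodes, edges))  # → ['bridge']
```

                        In a real crawl the edge list would come from paginated follow-lookup calls, and a centrality measure such as betweenness would rank bridges rather than requiring a clean graph cut; the cut-vertex test above only conveys the structural idea.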

                        It was revealed that politicians and state officials maintained their own anonymous troll accounts, in addition to their official ones. Instances have surfaced where AKP officials were caught promoting themselves through fake accounts. For instance, Minister of the Environment and Urbanization Mehmet Ozhaseki and AKP’s Bursa Mayor Recep Altepe were exposed for sharing supportive tweets mentioning themselves mistakenly from their official accounts instead of their fake ones. Another case involved AKP deputy Ahmet Hamdi Çamlı, who inadvertently opened his front camera while live-streaming parliamentary discussions with a fake account using a female name (@YelizAdeley) and a teenager’s profile photo. Within the AKP, different trolls seem to specialize in specific subjects aligned with the party’s policies and strategies. For example, accounts such as @WakeUpAttack and @UstAkilOyunlari fabricate conspiracy theories related to international affairs, while @AKKulis shares tweets from state officials and provides updates on AKP’s latest news and activities. Another troll account, @Baskentci, shared lists of journalists to be detained and media outlets to be shut down, as well as advanced information on post-coup attempt decisions (Tartanoglu, 2016).

                        AKP trolls specifically target and disrupt social media users who express opposition to the ruling party, openly identifying themselves as its supporters. While they are known within party circles, they remain anonymous to outsiders. However, some trolls, driven by rewards and recognition within their social networks, choose not to conceal their identities. In fact, Sözeri (2016) describes how certain pro-government journalists themselves act as political trolls and even lead the attacks. It is important to note that political trolls are not necessarily anonymous or isolated individuals. When aligned with a ruling party led by a president with increased powers, many trolls shed their anonymity, and some even threaten legal action when called out as trolls (Saka, 2021). Realizing that such tactics were not improving the AKP’s popularity, the party changed its approach just before the 2015 general elections by establishing the New Turkey Digital Office, which focused on more conventional forms of online propaganda (Benedictus, 2016).

                        The proliferation of digital disinformation, coordinated networks of fake accounts, and the deployment of political trolls have had a significant impact on online discourse in Turkey, hindering the free expression of critical voices and fostering an environment of manipulation and propaganda. Much like the Russian “web brigades,” which consist of hundreds of thousands of paid users who post positive comments about the Putin administration, the Erdogan regime recruited an “army of trolls” to reinforce the declining hegemony of the ruling party shortly after the Gezi Park protests in 2013 (Bulut & Yoruk, 2017). Their objective is to discredit, intimidate, and suppress critical voices, often resorting to labelling journalists and celebrities as “traitors,” “terrorists,” “supporters of terrorism,” and “infidels.” Consequently, Twitter has transformed into a medium of government-led populist polarization, misinformation, and online attacks since the Gezi protests (Bulut & Yoruk, 2017). The situation worsened after the events of 2016, exposing critical voices to open cyberbullying by trolls and intensifying their persecution (Saka, 2021).

                        One prevalent form of political trolling is the deliberate disruption of influential voices on Twitter who contribute to politically critical hashtags or share news related to potential emergencies. Trolls and hackers primarily target professional journalists, opposition politicians, activists, and members of opposition parties. AKtrolls repeatedly attack and disturb these individuals using offensive and abusive language, labelling them as terrorists or traitors, intimidating them, and even threatening arrest. However, ordinary citizens who participate on Twitter with non-anonymous profiles are also vulnerable targets for AKtrolls. Being targeted by trolls often leads to individuals quitting social media, practicing self-censorship, and ultimately participating less in public debates (Karatas & Saka, 2017).

                        AKtrolls specifically target critical voices that share undesirable content or use specific hashtags. They employ tactics such as posting tweets with humiliating, intimidating, and sexually abusive insults. Doxxing, the act of revealing personal and private information about individuals, including their home addresses and phone numbers, is also a common strategy employed by AKtrolls. In some cases, AKtrolls may have connections to the security forces, particularly the police. Additionally, hacking and leaking private direct messages have been popular tactics used to discredit opposing voices on Twitter. Pro-AKP hackers affiliated with the AKtrolls have targeted numerous journalists. The initial stage often involves hacking into the journalist’s Twitter account and posting tweets that apologize to Erdogan for criticism or betrayal. Furthermore, AKtrolls frequently engage in collective reporting to Twitter in an attempt to suspend or block targeted Twitter handles (Karatas & Saka, 2017).

                        A significant event within the ruling AKP was the forced resignation of then-Prime Minister Ahmet Davutoglu by Erdogan. Prior to his resignation, an anonymous WordPress blog titled the “Pelikan Declaration” emerged, accusing Davutoglu of attempting to bypass Erdogan’s authority and making various allegations against him. This declaration was widely circulated by a group of AKtrolls who later became known as the “Pelikan Group.” It is worth noting that this group had close ties to a media conglomerate managed by the Albayrak Family, particularly Berat Albayrak, Erdogan’s son-in-law and Turkey’s former Minister of Economy, as well as his elder brother and media mogul Serhat Albayrak (Saka, 2021).

                        AKbots

                        The Erdogan regime extensively utilizes internet bots, which are software applications running automated tasks over the Internet, to support paid AKtrolls (Yesil et al., 2017). Researchers have demonstrated that during the aftermath of the Ankara bombings in October 2015, the heavy use of automated bots played a crucial role in countering anti-AKP discourse. Twitter even took action to ban a bot-powered hashtag that praised President Erdogan, leading Turkish ministers to claim a global conspiracy against Erdogan (Hurriyet Daily News, 2016; Lapowsky, 2015).

                        The use of automated bots differs from having multiple accounts in terms of scale. The presence of bots becomes noticeable when a message is replicated or retweeted to more than a few hundred other accounts. It is worth noting that as of November 2016, Istanbul and Ankara ranked as the top two cities for AKbot usage, according to the major internet security company Norton (Paganini, 2016; Yesil et al., 2017; Eldem, 2020).
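                        The scale heuristic described above — a message replicated verbatim across hundreds of accounts betrays automation in a way that a handful of sockpuppets does not — can be sketched as a simple coordination check. The accounts, slogan, and threshold below are invented for illustration; real detection would also weigh posting times, account ages, and follower overlap:

```python
from collections import defaultdict

def flag_amplified_messages(posts, min_accounts=3):
    """Group posts by normalized text and return messages pushed by at
    least `min_accounts` distinct accounts, with the account count."""
    by_text = defaultdict(set)
    for account, text in posts:
        normalized = " ".join(text.lower().split())  # fold case and whitespace
        by_text[normalized].add(account)
    return {text: len(accounts)
            for text, accounts in by_text.items()
            if len(accounts) >= min_accounts}

# Invented sample: one slogan replicated across accounts, one organic post.
posts = [
    ("bot_001", "Great success for the nation!"),
    ("bot_002", "great success   for the nation!"),
    ("bot_003", "Great Success For The Nation!"),
    ("user_a", "Traffic is terrible in Istanbul today."),
]

print(flag_amplified_messages(posts))  # → {'great success for the nation!': 3}
```

                        At the scale the article describes — the same text replicated or retweeted to hundreds of accounts — even this crude grouping separates automated amplification from organic chatter.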

                        Furthermore, DFRLab (2018) has revealed that many tactics, including doxing (revealing personal information), are employed through cross-platform coordination. It is important to recognize that in the Turkish context, the influence of AKtrolls extends beyond internet platforms and involves close cooperation with conventional media outlets under Erdogan’s control (Saka, 2021). In October 2019, DFRLab identified a network of inauthentic accounts that aimed to mobilize domestic support for the Turkish government’s fight against the Kurdish People’s Protection Units (YPG) in Syria (Grossman et al., 2020). This network involved fabricated personalities created on the same day with similar usernames, several pro-AKP retweet rings, and centrally managed compromised accounts that were utilized for AKP propaganda. The tweets originating from these accounts criticized the pro-Kurdish HDP, accusing it of terrorism and employing social media manipulation. The tweets also targeted the main opposition party, CHP. 

                        Additionally, the accounts promoted the 2017 Turkish constitutional referendum, which consolidated power in Erdogan, and sought to increase domestic support for Turkish intervention in Syria. Some English-language tweets attempted to bolster the international legitimacy of Turkey’s offensive in October 2019, praising Turkey for accepting Syrian refugees and criticizing the refugee policies of several Western nations. The dataset of accounts included individuals who appeared to be leaders of local AKP branches, members of digital marketing firms, sports fans, as well as clearly fabricated personalities or members of retweet rings (Grossman et al., 2020).

                        In 2019, a significant proportion of the daily top ten Twitter trends in Turkey were generated by fake accounts or bots, averaging 26.7 percent. The impact was even higher for the top five Twitter trends, reaching 47.5 percent (Elmas, 2023). State-organized hate speech, trolls, and online harassment often go unchecked (Briar, 2020).

                        In 2020, Twitter took action to remove over 7,000 accounts associated with the youth wing of the ruling AKP. These accounts were responsible for generating more than 37 million tweets, which aimed to create a false perception of grassroots support for government policies, promote AKP perspectives, and criticize its opponents. Many of these accounts were found to be fake, while others belonged to real individuals whose accounts had been compromised and controlled by AKP supporters. Fahrettin Altun, Erdogan’s communications director, issued threats against Twitter for removing this large network of government-aligned fake and compromised accounts (Twitter Safety, 2020; HRW, 2023a).

                        A study published in the ACM Web Conference 2023 identified Turkey as one of the most active countries for bot networks on Twitter. These networks were found to be pushing political slogans as part of a manipulation campaign leading up to the 2023 elections. Alongside the reactivated bots, the main opposition presidential candidate, Kilicdaroglu, warned about the circulation of algorithmically fabricated audio or video clips aimed at discrediting him (Karatas & Saka, 2017).

                        Bots on social media engage in malicious activities such as amplifying harmful narratives, spreading disinformation, and astroturfing. Elmas (2023) detected over 212,000 such bots on Twitter targeting Turkish trends, referring to them as “astrobots.” Twitter has purged these bots en masse six times since June 2018. According to Elmas’ study, the percentage of fake trends on Twitter varied over time. Between January 2021 and November 2021, the average daily percentage of fake trends was 30 percent. After Twitter purged bots around November 2021, the share of fake trends decreased to 10 percent in March 2022. However, it started to rise again and reached 20 percent by November 2022. As of April 7, 2023, just before the 2023 Turkish election, the attacks continued, and the percentage of fake trends fluctuated between 35 percent and 9 percent (on weekends). Notably, many bots in the dataset were silent, meaning they did not actively post tweets. Instead, they were used to create fake trends by posting tweets promoting a trend and immediately deleting them. This silent behaviour makes it challenging for bot detection methods to identify them, with 87 percent of the bot accounts remaining silent for at least one month (Elmas, 2023). 
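                        The post-then-delete pattern Elmas (2023) documents can be illustrated with a minimal detector that flags accounts whose trend-promoting tweets vanish within minutes of posting. The tweet log, time window, and count threshold below are hypothetical illustrations, not the rules used in that study:

```python
def flag_silent_trend_bots(events, max_lifetime_s=300, min_quick_deletes=2):
    """Flag accounts that repeatedly post tweets and delete them within
    `max_lifetime_s` seconds -- the post-then-delete pattern used to push
    fake trends while leaving an apparently silent timeline."""
    quick_deletes = {}
    for account, posted_at, deleted_at in events.values():
        if deleted_at is not None and deleted_at - posted_at <= max_lifetime_s:
            quick_deletes[account] = quick_deletes.get(account, 0) + 1
    return sorted(acc for acc, n in quick_deletes.items()
                  if n >= min_quick_deletes)

# Invented tweet log: tweet id -> (account, posted_at, deleted_at or None),
# times in seconds. The astrobots delete trend tweets within a minute;
# the ordinary user's tweets stay up, or live for over an hour.
events = {
    1: ("astrobot_1", 0, 45),
    2: ("astrobot_1", 120, 160),
    3: ("astrobot_2", 0, 30),
    4: ("astrobot_2", 60, 95),
    5: ("user_b", 0, None),
    6: ("user_b", 10, 4000),
}

print(flag_silent_trend_bots(events))  # → ['astrobot_1', 'astrobot_2']
```

                        The catch, as the article notes, is observing the deletions at all: a detector that samples only current timelines sees nothing, which is why most of these bots can stay dormant for a month or more without being caught.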

                        In May 2023, during the election month, Turkey saw 145 million tweets shared from 12,479,000 accounts, with 23 percent of these identified as bot accounts by the Turkish General Directorate of Security. An examination of the top 10 trending hashtags revealed that 52 percent of accounts using these hashtags were bot accounts (Bulur, 2022). It was also reported that approximately 12,000 Russian- and Hungarian-speaking Twitter accounts had been reactivated, along with reactivated Turkish-speaking accounts, accompanied by numerous bot followers to amplify their posts. Although only 27 percent of the Turkish population is believed to use Twitter, the impact is significant, with 20 percent of the trending topics on Turkish Twitter in 2023 being manipulated and not reflective of public discourse. A dataset covering the period from 2013 to 2023 indicated that 20 to 50 percent of trending topics in Turkey were fake and primarily propelled by bots (Soylu, 2023; Unker, 2023).

                        Hackers


The Erdogan regime’s extensive investments in domestic and global information operations include the recruitment of hackers worldwide. The regime has also established a “white hat” hacker team, ostensibly to enhance Turkey’s cyber-defense (Yeşil et al., 2017). However, there are suspicions that this team has been used offensively to silence government critics (Cimpanu, 2016).

The private Cihan News Agency, known for its accurate and swift reporting of Turkish election results since the 1990s, suffered a significant cyberattack for the first time during the local elections on March 30, 2014, raising concerns about election security (Haber Turk, 2014). Opposition newspapers that faced similar cyberattacks, including Zaman, Taraf, and Cumhuriyet, pointed to Ankara as the source, prompting debate about the negligence and potential involvement of the state and service providers (Akyildiz, 2014).

A similar situation recurred during the 2015 general elections, when concerns that the Erdogan regime would manipulate election results intensified. On the evening of June 7, 2015, during the ballot counting, a cyberattack targeted the Cihan News Agency and disrupted its services. Zaman newspaper reported that the attack was linked to a special team established within TÜBİTAK, with connections to foreign countries routed through TÜBİTAK computers and with botnets used to direct the attacks and obscure their source (Internet Haber, 2015).

Starting in 2009, Erdoganist hackers also targeted a number of Western countries whose politicians expressed anti-Islamic views or criticized the Erdogan regime (Souli, 2018; Hern, 2017; Space Watch, 2018; Goud, 2018). In a striking illustration of how cyber activities often align with geopolitics, the Turkish hacktivist group Ayyildiz Tim was accused of hacking and taking control of the social media accounts of prominent US journalists in 2018, with the aim of disseminating messages in support of President Erdogan. These cyber incidents unfolded during a period of notably strained US-Turkish ties, while Turkey grappled with an economic crisis widely attributed to Erdogan’s ill-advised economic policies, although he consistently laid the blame on the US. The US-based cybersecurity firm CrowdStrike exposed the activities of Ayyildiz Tim, a group active since 2002, and there is evidence indicating potential ties between the group and security forces loyal to Erdogan (Space Watch, 2018; Goud, 2018).

In January 2023, a Turkish hacker collective known as Türk Hack Team called for cyberattacks targeting Swedish authorities and banks, coupled with a warning: “If you desecrate the Quran one more time, we will begin spreading sensitive personal data of Swedes” (Hull, 2023). Several prominent Swedish websites reportedly suffered temporary outages due to DDoS attacks, for which the group claimed responsibility. Identifying themselves as nationalists, its members denied any affiliation with Erdogan, who had previously stated that Sweden should not expect Turkish NATO support after the Quran incident (Skold, 2023).

Meanwhile, in the lead-up to the 2023 presidential elections, Turkey’s primary opposition leader and presidential candidate, Kilicdaroglu, alleged that the ruling AKP had hired foreign hackers to orchestrate an online campaign against him using fabricated videos and images (Turkish Minute, 2023a).

Demonstrating the Erdogan regime’s keen interest in hacking, Turkey has hosted an annual event known as “Hack Istanbul” since 2018. The competition challenges hackers worldwide with sophisticated real-world cyberattack scenarios crafted under the guidance of leading global experts (Hurriyet Daily News, 2021). The Turkish Presidency’s Digital Transformation Office organizes these competitions, which offer substantial financial rewards. Furthermore, the regime has initiated Cyber Intelligence Contests as part of its training campaigns, effectively expanding the pool of individuals with cybersecurity skills (Cyber Intelligence Contest, 2021).

                        Conclusion

                        The evolution of information controls in Turkey began with first-generation techniques, such as censorship and content filtering, aimed at restricting access to specific websites and online platforms. However, as technology advanced, the government adopted more sophisticated methods. One prevalent tool has been the instrumentalization of legislation, through which laws have been enacted to curtail online freedoms and enable state surveillance. Additionally, regulatory bodies, originally intended to ensure fair practices, have been weaponized to enforce censorship and impose restrictions, eroding the independence of online platforms. Furthermore, the Turkish government has resorted to tactics like shutdowns, throttling, and content removal requests to suppress dissenting voices and control the flow of information. 

                        In the third generation of information controls, Turkey has focused on establishing a sovereign national cyber-zone characterized by extensive surveillance practices. Advanced technologies have been employed to monitor online activities, creating a pervasive atmosphere of surveillance and curtailing privacy rights. Critical netizens, including activists, journalists, and dissidents, have faced targeted persecution, enduring harassment, intimidation, and legal prosecution to silence opposition and stifle open discourse. Moreover, regime-sponsored (dis)information campaigns have played a significant role in shaping the digital narrative. 

                        Central to the concept of digital authoritarianism in Turkey is the extensive deployment of internet bots and automated tools. The use of internet bots, fake accounts, and orchestrated campaigns for political manipulation is indeed pervasive in Turkey, particularly in shaping public opinion, supporting government policies, and undermining political opponents. Numerous studies have revealed the extensive deployment of automated bots by the Erdogan regime and its supporters to amplify government propaganda, counter anti-government narratives, and create a false perception of grassroots support. 

                        The deployment of individuals known as “AKtrolls” has been used to disseminate pro-government propaganda and attack dissenting voices. Automated bots have been utilized to amplify certain narratives while suppressing opposing viewpoints, distorting the digital discourse, and undermining the integrity of online discussions.

As the Turkish political landscape evolves, the role of social media in shaping public opinion and electoral outcomes remains a critical concern. The elections intensified the battle for online influence, with the government attempting to purchase accounts and engage with dark web groups. The landscape of online manipulation in Turkey is further complicated by the prevalence of fake accounts, compromised profiles, and silent bots that intermittently generate and promote false trends. Because silent accounts quickly delete their tweets, they are especially difficult to detect.

                        Additionally, the manipulation of social media in Turkey has a transnational dimension, with instances of foreign interference and coordinated campaigns coming to light. The use of extensive networks of fake or compromised accounts to amplify certain political views or spread false information on social media has become increasingly prevalent, particularly during politically sensitive periods like elections. Many of these coordinated networks are dedicated to promoting pro-Erdogan perspectives, and the regime occasionally presents their artificial presence as evidence of grassroots support for its policies.


                        Funding: This research was funded by Gerda Henkel Foundation, AZ 01/TG/21, Emerging Digital Technologies and the Future of Democracy in the Muslim World.


                        References

— (2014). “The right to privacy in Turkey.” Privacy International. https://privacyinternational.org/sites/default/files/2017-12/UPR_Turkey_0.pdf

                        — (2014). “Cihan Haber Ajansı’ndan ‘siber saldırı’ açıklaması.” Haber Turk. March 30, 2014. https://www.haberturk.com/medya/haber/934450-cihan-haber-ajansindan-siber-saldiri-aciklamasi (accessed on November 2, 2023).

                        — (2015). “Cihan haber ajansını kim hacledi, olay iddia.” Internet Haber. June 7, 2015. https://www.internethaber.com/cihan-haber-ajansini-kim-hacledi-olay-iddia-793015h.htm (accessed on November 2, 2023).

                        — (2016). “Zaman newspaper: Seized Turkish daily ‘now pro-government’.” BBC. March 6, 2016. https://www.bbc.com/news/world-europe-35739547 (accessed on May 11, 2023).

— (2016). “Turkish Ministers Accuse Twitter of Plotting against Erdoğan.” Hurriyet Daily News. March 30, 2016. http://www.hurriyetdailynews.com/turkish-ministers-accuse-twitter-of-plotting-against-erdogan-97106 (accessed on May 11, 2023).

— (2018). “Turkish Hacktivist Group Ayyildiz Tim Hijack U.S. Journalist Social Media Account In Support Of Erdogan.” Space Watch. https://spacewatch.global/2018/08/turkish-hacktivist-group-ayyildiz-tim-hijack-u-s-journalist-social-media-account-in-support-of-erdogan/ (accessed on November 1, 2023).

                        — (2018). “Bad Traffic: Sandvine’s Packet Logic devices used to deploy government spyware in Turkey and redirect Egyptian users to affiliate ads?” CitizenLab. March 9, 2018. https://citizenlab.ca/2018/03/bad-traffic-sandvines-packetlogic-devices-deploy-government-spyware-turkey-syria/ (accessed on May 18, 2023).

                        Bellut, Daniel. (2021). “Turkish government increases pressure on social media.” DW. September 9, 2021. https://www.dw.com/en/turkish-government-increases-pressure-on-social-media/a-59134848 (accessed on May 15, 2023).

— (2018). “Alert: FinFisher changes tactics to hook critics.” AccessNow.

— (2018). “#TrollTracker: Journalist Doxxed by American Far Right.” DFRLab, Medium (blog). June 17, 2018. https://medium.com/dfrlab/trolltracker-journalist-doxxed-by-american-far-right-7881f9c20a16 (accessed on May 11, 2023).

— (2020). “Disclosing networks of state-linked information operations we’ve removed.” Twitter Safety. June 12, 2020. https://blog.twitter.com/en_us/topics/company/2020/information-operations-june-2020 (accessed on May 13, 2023).

                        — (2021). “Registrations open for Hack Istanbul 2021 contest.” Hurriyet Daily News. May 6, 2021. https://www.hurriyetdailynews.com/registrations-open-for-hack-istanbul-2021-contest-164490 (accessed on November 2, 2023).

— (2021). “Cyber Intelligence Contest.” The Digital Transformation Office of the Presidency. https://cbddo.gov.tr/projects/4726/cyberintelligencecontest/ (accessed on November 2, 2023).

— (2023). “13 detained ahead of elections over defamation alleged by former presidential candidate.” Turkish Minute. May 13, 2023. https://www.turkishminute.com/2023/05/13/13-detained-ahead-of-elections-over-defamation-alleged-by-former-presidential-candidate/ (accessed on May 13, 2023).

                        — (2023a). “Kılıçdaroğlu says Erdoğan gov’t hired foreign hackers for online campaign against him.” Turkish Minute. May 5, 2023. https://www.turkishminute.com/2023/05/05/kilicdaroglu-says-erdogan-govt-hired-foreign-hackers-for-online-campaign-against-him/ (accessed on November 1, 2023).

                        — (2023). “Turkey’s Control of the Internet Threatens Election.” Human Rights Watch (HRW). May 10, 2023. https://www.hrw.org/news/2023/05/10/turkeys-control-internet-threatens-election (accessed on May 12, 2023).

                        — (2023a). “Questions and Answers: Turkey’s Control of the Internet and the Upcoming Election.” Human Rights Watch (HRW). May 10, 2023. https://www.hrw.org/news/2023/05/10/questions-and-answers-turkeys-control-internet-and-upcoming-election#_Toc134065370 (accessed on October 29, 2023).

                        Akyildiz, Emir. (2014). “TİB ve Telekom saldırıyı seyretti.” Haber Vesaire. March 31, 2014. https://www.habervesaire.com/039-tib-ve-telekom-saldiriyi-seyretti-039/ (accessed on November 2, 2023).

Benedictus, Leo. (2016). “Invasion of the troll armies: from Russian Trump supporters to Turkish state stooges.” The Guardian. November 6, 2016. https://www.theguardian.com/media/2016/nov/06/troll-armies-social-media-trump-russian (accessed on May 13, 2023).

                        Bimber, Bruce and Homero Gil de Zúñiga. (2020). “The Unedited Public Sphere.” New Media and Society 22: 700–15.

                        Briar, Narîn. (2020). “Negligent Social Media Platforms Breeding Grounds for Turkish Nationalism, Hate Speech,” The Armenian Weekly, October 29, 2020. https://armenianweekly.com/2020/10/29/negligent-social-media-platforms-breeding-grounds-for-turkish-nationalism-hate-speech/ (accessed on May 18, 2023).

Brown, H.; E. Guskin & A. Mitchell. (2012). “The role of social media in the Arab uprisings.” Pew Research Center. 28.

                        Bulur, Sertac. (2022). “EGM: Mayıs’ta 145 milyon tweet paylaşılan hesapların yüzde 23’ü bot hesap.” Anadolu Ajansı. June 7, 2022. https://www.aa.com.tr/tr/gundem/egm-mayista-145-milyon-tweet-paylasilan-hesaplarin-yuzde-23u-bot-hesap/2607433 (accessed on October 23, 2023). 

                        Bulut E. & E. Yoruk. (2017). “Digital populism: Trolls and political polarization of Twitter in Turkey,” International Journal of Communication. 11: 4093–4117.

                        Cimpanu, Catalin. (2016). “Turkey Wants to Build Army of Hackers.” Bleeping Computer. December 30, 2016. https://www.bleepingcomputer.com/news/government/turkey-wants-to-build-army-of-hackers/ (accessed on November 2, 2023).

Diamond, Larry. (2021). “Rebooting Democracy.” Journal of Democracy. 32: 179–83.

Dragu, Tiberiu & Yonatan Lupu. (2021). “Digital authoritarianism and the future of human rights.” International Organization. 75(4), 991-1017.

                        Ebert H. & T. Maurer. (2013). “Contested cyberspace and rising powers,” Third World Quarterly. 34(6), 1054–1074. doi:10.1080/01436597.2013.802502.

                        Eldem, Tuba. (2020). “The Governance of Turkey’s Cyberspace: Between Cyber Security and Information Security.” International Journal of Public Administration. Vol. 43, No. 5, 452–465 https://doi.org/10.1080/01900692.2019.1680689

                        Elmas, Tugrulcan. (2023). “Analyzing Activity and Suspension Patterns of Twitter Bots Attacking Turkish Twitter Trends by a Longitudinal Dataset.” ArXiv. April 16, 2023. https://arxiv.org/pdf/2304.07907.pdf

                        Goud, Naveen. (2018). “Turkey hackers sneak into social media accounts of US Journalists.” Cyber Security Insiders. https://www.cybersecurity-insiders.com/turkey-hackers-sneak-into-social-media-accounts-of-us-journalists/ (accessed on November 1, 2023).

                        Grossman, Shelby; Fazil Alp Akis, Ayça Alemdaroğlu, Josh A. Goldstein & Katie Jonsson. (2020). “Political Retweet Rings and Compromised Accounts: A Twitter Influence Operation Linked to the Youth Wing of Turkey’s Ruling Party,” Stanford Internet Observatory, June 11, 2020.

                        Hern, Alex. (2017). “Twitter accounts tweet swastikas and pro-Erdoğan support in massive hack.” The Guardian. March 15, 2017. https://www.theguardian.com/technology/2017/mar/15/twitter-turkey-accounts-hack-tweet-swastikas-pro-erdogan (accessed on November 1, 2023).

Howells, Laura and Laura A. Henry. (2021). “Varieties of Digital Authoritarianism: Analyzing Russia’s Approach to Internet Governance.” Communist and Post-Communist Studies. 54: 1–27.

                        Hull, Justina. (2023). “Turkiskt hackerforum manar till attacker mot svenska banker.” SVT. January 27, 2023. https://www.svt.se/nyheter/inrikes/turkiskt-hackerforum-manar-till-attacker-mot-sverige (accessed on November 1, 2023).

                        Karataş, Duygu & Erkan Saka. (2017). “Online political trolling in the context of post-Gezi social media in Turkey.” International Journal of Digital Television. (2017). Volume 8 Number 3. doi: 10.1386/jdtv.8.3.383_1 

Kocer, Suncem. (2015). “From the ‘Worst Menace to Societies’ to the ‘Robot Lobby’: A Semantic View of Turkish Political Rhetoric on Social Media.” In: Lemi Baruh & Banu Baybars Hawks (Eds.). New Media Politics: Rethinking Activism and National Security in Cyberspace. Newcastle upon Tyne: Cambridge Scholars Publishing.

                        Lapowsky, Issie. (2015). “Why Twitter Is Finally Taking a Stand against Trolls,” Wired, April 21, 2015. https://www.wired.com/2015/04/twitter-abuse/ (accessed on May 11, 2023).

                        Morozov, Evgeny. (2012). The Net Delusion: The Dark Side of Internet Freedom. New York: Public Affairs.

                        O’Donohue, Andrew; Max Hoffman & Alan Makovsky. (2020). “Turkey’s Changing Media Landscape.” CAP. June 10, 2020. https://www.americanprogress.org/article/turkeys-changing-media-landscape/ (accessed on May 18, 2023).

                        Ozturk, Ozgur. (2013) “Gezi olaylarının Twitter kullanıcı sayısını arttırdı.” Hürriyet, December 8, 2013. http://www.hurriyet.com.tr/gezi-olaylarinin-twitter-kullanici-sayisini-arttirdi-25306778 (accessed on May 14, 2023).

Paganini, Pierluigi. (2016). “Which Are Principal Cities Hostages of Malicious Botnets?” Security Affairs. October 6, 2016. https://securityaffairs.co/wordpress/51968/reports/botnets-geography.html (accessed on May 11, 2023).

                        Polyakova, Anna & Chris Meserole. (2019). “Exporting digital authoritarianism: The Russian and Chinese models,” Policy Brief, Democracy and Disorder Series. Washington, DC: Brookings. 1-22.

                        Saka, Erkan. (2021). “Networks of Political Trolling in Turkey after the Consolidation of Power Under the Presidency” (pp. 240-255). In: Digital Hate. The Global Conjuncture of Extreme Speech. Indiana University Press. https://iupress.org/9780253059253/digital-hate/

                        Sherman, Justin. (2021). “Digital Authoritarianism and Implications for US National Security,” The Cyber Defense Review. 6:1. 107- 118.

Skold, Henrik. (2023). “Turkiska hackergruppens nya hot: Då släpper vi känslig data om svenskar.” SVT. February 1, 2023. https://www.svt.se/nyheter/utrikes/efter-koran-branningen-och-nato-turkiska-hackergruppens-nya-hot-da-slapper-vi-kanslig-data-om-svenskar (accessed on November 1, 2023).

                        Souli, Sarah. (2018). “Turkey’s band of pro-Erdoğan hackers keep trolling Europe.” Vice. March 17, 2018. https://www.vice.com/en/article/wj7enx/turkeys-band-of-pro-erdogan-hackers-keep-trolling-europe (accessed on November 1, 2023).

                        Soylu, Ragip. (2023). “Turkey elections: Thousands of Russian Twitter accounts reactivated in Turkish.” Middle East Eye. April 18, 2023. https://www.middleeasteye.net/news/turkey-elections-thousands-russian-speaking-accounts-activated-twitter (accessed on October 17, 2023).

                        Sozeri, Ceren. (2016).“Trol gazeteciliği.” Evrensel, September 18, 2016. https://www.evrensel.net/yazi/77506/trol-gazeteciligi (accessed on May 11, 2023).

                        Tartanoglu, Sinan. (2016). “Muhtar İstihbarat Teşkilatı [Muhtar Intelligence Agency],” Cumhuriyet. December 23, 2016. http://www.cumhuriyet.com.tr/haber/siyaset/649959/Muhtar_istihbarat_Teskilati.html (accessed on May 18, 2023).

                        Timucin, Fatma. (2021). 8-Bit Iron Fist: Digital Authoritarianism in Competitive Authoritarian Regimes: The Cases of Turkey and Hungary. (Master’s thesis, Sabanci University, Istanbul, Turkey, 2021). https://research.sabanciuniv.edu/42417/ (accessed on May 11, 2023).

                        Topak, Ozgün E. (2019). “The authoritarian surveillant assemblage: Authoritarian state surveillance in Turkey.” Security Dialogue. 50: 454–72.

                        Tucker, Joshua A.; Yannis Theocharis, Margaret E. Roberts, and Pablo Barberá. (2017). “From Liberation to Turmoil: Social Media and Democracy.” Journal of Democracy. 28: 46–59.

                        Unker, Pelin. (2023). “Twitter’da beş gündem etiketinden biri sahte.” DW Turkce. May 12, 2023. https://www.dwturkce.com/tr/twitterda-seçim-manipülasyonu-beş-gündem-etiketinden-biri-sahte/a-65345776 (accessed on October 17, 2023).

                        Varnalı K. and V. Görgülü. (2015). “A social influence perspective on expressive political participation in Twitter: The case of #occupyGezi.” Information, Communication & Society. 18(1): 1–16.

                        Yanardagoglu, Eylem. (2018). “Communication as Political Action: Gezi Park and Online Content Producer.” In: Alternative Media in Contemporary Turkey: Sustainability, Activism, and Resistance. Murat Akser & Victoria McCollum (eds.). Rowman & Littlefield Publishers.

                        Yeşil, Bilge. (2016). Media in New Turkey: The Origins of an Authoritarian Neoliberal State. Champaign: University of Illinois Press.

                        Yesil, Bilge; Efe Kerem Sözeri & Emad Khazraee. (2017). “Turkey’s Internet Policy After the Coup Attempt: The Emergence of a Distributed Network of Online Suppression and Surveillance.” Internet Policy Observatory. February 28, 2017. https://repository.upenn.edu/internetpolicyobservatory/22 (accessed on May 11, 2023).

Zagidullin, Marat; Nergis Aziz & Sanat Kozhakhmet. (2021). “Government policies and attitudes to social media use among users in Turkey: The role of awareness of policies, political involvement, online trust, and party identification.” Technology in Society. https://doi.org/10.1016/j.techsoc.2021.101708

Yilmaz, Ihsan. (2023). “Digital Authoritarianism and Religion in Democratic Polities of the Global South.” In: Ihsan Yilmaz (ed.), Digital Authoritarianism and Its Religious Legitimization: The Cases of Turkey, Indonesia, Malaysia, Pakistan, and India. Singapore: Palgrave Macmillan.

Yilmaz, Ihsan & Fan Yang. (2023). “Digital Authoritarianism and Religious Populism in Turkey.” In: Ihsan Yilmaz (ed.), Digital Authoritarianism and Its Religious Legitimization: The Cases of Turkey, Indonesia, Malaysia, Pakistan, and India. Singapore: Palgrave Macmillan.

                        Yilmaz, Ihsan and Galib Bashirov. (2018). “The AKP after 15 years: Emergence of Erdoganism in Turkey.” Third World Quarterly 39: 1812–30.


                        Strategic Digital Information Operations (SDIOs)


                        Please cite as:

                        Yilmaz, Ihsan; Akbarzadeh, Shahram & Bashirov, Galib. (2023). “Strategic Digital Information Operations (SDIOs).” Populism & Politics (P&P). European Center for Populism Studies (ECPS). September 10, 2023. https://doi.org/10.55271/pp0024a

                         

                        Abstract

In this paper, we introduce the concept of “Strategic Digital Information Operations” (SDIOs), discuss the tactics and practices of SDIOs, explain the main political goals of state and non-state actors in engaging with SDIOs at home and abroad, and suggest avenues for new research. We argue that the concept of SDIOs presents a useful framework for discussing all forms of digital manipulation, at both domestic and international levels, organized by either state or non-state actors. While the literature has examined the military-political impacts of SDIOs, we still know little about the societal issues that SDIOs influence, such as emotive political mobilization, intergroup relations, social cohesion, trust, and emotional resonance among target audiences.

                         

                        By Ihsan Yilmaz, Shahram Akbarzadeh* and Galib Bashirov**

                        Introduction

In recent years, the convergence of the digital realm and the political sphere has created a dynamic environment in which a wide range of state and non-state actors leverage digital platforms to pursue their political goals. This trend spans diverse cases, from the continual targeting of independent media outlets in countries like Egypt and Turkey, to the deliberate manipulation of electoral processes in democracies such as the United States (US) and the United Kingdom (UK), to extremist groups such as ISIS that use digital platforms for their propaganda (see Ingram, 2015; Theohary, 2011). These “Strategic Digital Information Operations” (SDIOs), as we call them here, refer to efforts by state and non-state actors to manipulate public opinion, as well as individual and collective emotions, by using digital technologies to change how people relate and respond to events in the world. As such, SDIOs involve the deliberate alteration of the information environment by social and political actors to serve their interests.

We use this term – SDIOs – because it combines several facets of digital manipulation at both national and international levels. “Information operations” is a term social media companies like Facebook have adopted to describe organized communicative activities that attempt to circulate problematically inaccurate or deceptive information on their platforms. These activities are strategic because, rather than being purely communicative, they are driven by the political objectives of state and non-state actors (see Starbird et al., 2019; Hatch, 2019). We add “digital” to emphasize the distinction between older forms of information operations and new ones that operate almost exclusively in the digital realm and use far more sophisticated tools, such as artificial intelligence (AI), machine learning, and algorithmic models, to disseminate information. Of course, some aspects of digital information operations have been carried over from the non-digital techniques mastered over the past century. Nonetheless, the affordances of the digital environment have provided not only radically new and sophisticated tools but also much wider dissemination and reach for strategic information operations.

SDIOs involve various tactics used by political groups that try to shape the online environment in their favour. Their goal is to control the flow of information, where politics and social action meet. We note that these tactics can cross borders: these operations do not just target people within a country; they also aim to reach people in other nations. In this article, we briefly discuss the tactics and practices of SDIOs, explain the main political goals of state and non-state actors in engaging with SDIOs at home and abroad, and present avenues for new research.

                        Tactics and Practices of SDIOs

As researchers began to examine the many ways in which state actors have tried to manipulate domestic and foreign public opinion in their favour, disinformation became the main focus of analysis, with an emphasis on the spreading of fake news, conspiracy theories, and outright lies. Various forms of disinformation have been used to create doubt and confusion among the consumers of malign content. Spreading conspiracy theories makes people doubt the truth, which weakens trust in social and political institutions. Moreover, sharing fake news and other fabricated stories weaves a web of lies that shapes what people think. While disinformation has certainly been effective in manipulating public opinion, observers have recently noted a shift in emphasis from disinformation to more sophisticated and less discernible means of manipulation.

This shift has taken place due to growing awareness of fake news and lies in digital environments on the part of both users and digital platforms. As platforms such as Twitter and Facebook have tightened their clampdown on such content, and as users have become more capable of spotting it, state and non-state actors have moved to more sophisticated means of digital manipulation in which content is carefully designed to change how people see things. For example, instead of outright lies or fake news, strategic actors have started to spread half-truths that create a specific version of events by conveying only part of the truth (Iwuoha, 2021). Moreover, these actors have invested massively in polished public relations messages and clever advertisements to prop up their messaging. An important tactical goal has become not simply to deceive the audience but to ‘flood’ the information space with not just false but also distracting, irrelevant, and even worthless pieces of information, with the help of trolls and bots, hired social media consultants and influencers, as well as genuine followers and believers (Mir et al., 2022).

For example, observers have noted that a prominent strategy of Chinese domestic propaganda is to ‘drown out’ dissident voices through incessant propagation of government messaging, a campaign called ‘positive energy’ (Chen et al., 2021). This Orwellian campaign involved not only a massive influencer and troll army promoting government messaging but also forced testimony from Uyghur people. In one instance, seven people of Uyghur descent were brought to a press conference to share their stories of “positive energy” and to dismiss allegations of mistreatment by the Chinese government as made-up hype (Mason, 2022). As such, SDIOs encompass all of these tactics and practices, rather than merely the means of disinformation that have so far dominated research into digital manipulation. This also shows the ability of SDIOs to adapt and change over time based on the operational context. While disinformation through direct messages remains a consistent approach, actors increasingly use subtler tactics to create distractions and sow confusion among their audiences, which weakens the basis of well-informed political discussion. For example, the Egyptian government has flooded the information space with news of an ‘electricity surplus’ and the future of Egypt as ‘an electricity carrier for Europe’ amidst an ongoing economic crisis that has left millions of Egyptians without access to reliable electricity (Dawoud, 2023).

                        At the heart of discussions about strategic digital information operations lies the creation of narratives carefully designed to connect with their intended audiences. These narratives aren’t random; instead, they’re tailored to match how the recipients think. The interaction between these narratives and their audiences involves psychology, culture, and emotions. How the audience reacts depends not only on how convincing the content is, but also on their existing beliefs, biases, and cultural contexts (Bakir and McStay, 2018). While some people might approach these narratives with doubt, others could be drawn into self-reinforcing cycles, giving in to confirmation bias and manipulation. This back-and-forth underlines the close link between creators and consumers of strategic narratives in the digital era.

                        Among the many narrative tropes that SDIOs use, we want to note the increasing role ascribed to historical and religious notions in influencing public opinion and political discussions. SDIOs mix past grievances and religious beliefs to make their stories more impactful and believable. Invoking old injustices can stir up strong patriotic feelings or reinforce shared memories, while religious stories can tap into deeply held beliefs, suggesting divine approval or a connection to common values. This blend of history and religion lends their stories emotional power and makes them more effective. In Turkey, for example, the state authorities have disseminated victimhood narratives that largely rested on conspiracy theories and half-truths in order to legitimize their rule and quash dissent (Yilmaz and Shipoli, 2022). Research has noted that Islamic religious ideas and a reconstructed history of the Ottoman collapse have been strategically inserted into such narratives to elevate their influence among the Turkish masses (Yilmaz and Albayrak, 2021; Yilmaz and Demir, 2023).

                        Finally, it’s important to stress that these information operations aren’t always coordinated by automated bots or pre-planned campaigns. Sometimes they emerge organically through implicit coordination among various participants, which makes the situation even more complex. Starbird et al.’s (2019) research demonstrates that online information operations involve active participation by human actors, with messages disseminated through online communities and various sources of information. As such, SDIOs can be ‘cooperative’ endeavours in that they do not rely solely on “bots” and “trolls” but also encompass the contribution of online crowds (both knowing and unknowing) to the propagation of false information and political propaganda. For example, during the Russian information operations in the wake of the 2016 US Presidential elections, agents of the St. Petersburg-based Internet Research Agency (RU-IRA) operated more than 3,000 accounts that presented themselves as people and organizations across the American political spectrum (such as Black Lives Matter and the Patriotic Journalist Network). While undertaking such ‘orchestrated’ activity, the RU-IRA also managed to infiltrate organic communities by impersonating activists within those online communities, building networks within those communities, and even directly contacting ‘real’ activists. In some cases, RU-IRA agents directly collaborated with activists to organize physical protests in the US (see Walker, 2017).

                        Goals of SDIOs


                        SDIOs span both national and international contexts, targeting domestic and foreign audiences through an array of tactics to achieve the political goals of their organizers. In the domestic realm, SDIOs have influenced the functioning of governments and of social and political institutions. In many instances, authoritarian governments use digital platforms to influence individuals’ opinions through stories, emotions, and viewpoints carefully designed to resonate with specific groups of the population. Their toolkit includes a range of elements, such as conspiracy theories that legitimize a government policy, deflect attention from a government failure, or cast doubt on the arguments of opposition parties and social actors. Governments may also present narratives in which they portray themselves as victims, manipulate facts, and spread distorted statements. For example, in Egypt, the government’s digital narratives have portrayed independent media outlets as agents of Western conspiracies designed to infiltrate and destroy the Egyptian social and political fabric. Similarly, civilian presidential candidates running against President Sisi have been labelled Western puppets created to destabilize Egypt (Michaelson, 2018). In China, the CCP government has used media management platforms such as iiMedia to control public opinion, including by providing early warnings of ‘negative’ public opinion and helping guide the promotion of ‘positive energy’ online (Laskai, 2019).

                        It must also be noted that these narratives, particularly those that employ victimhood tropes, are strategically deployed to trigger various emotions among the masses. In Turkey, for example, the Erdogan regime has consistently exploited a victimhood claim that rested mainly on already-existing emotions among the masses, such as envy, disgust, humiliation, hatred, anxiety, and anger (Yilmaz, 2021). These emotions are triggered and aroused by government elites as well as government-controlled media in order to legitimize the regime’s authoritarian rule and deflect attention from its failures (see Yilmaz, 2021; Tokdoğan, 2020).

                        While both sets of actors pursue political goals through digital manipulation, there are certain differences between state and non-state actors in how they utilize SDIOs. On the one hand, state actors tend to be well-resourced and to possess strong infrastructures of human and technological capital. They have access to a range of digital tools for use in domestic and foreign contexts, whether to silence critics and legitimize their rule at home or to destabilize adversaries and extend their geopolitical influence abroad. They tend to carefully plan campaigns to infiltrate foreign information systems, reshape narratives, and generate social conflict, all of which require long-term thinking and strategic foresight. On the other hand, non-state actors, including hacktivist groups and extremist organizations, may lack resources but tend to be more adaptable to new environments. They use digital platforms to promote their causes, attract supporters, and amplify their voices. These players manoeuvre through the digital world with agility, reflecting the changing nature of the medium.

                        Research has noted the implications of information operations for democratization as authoritarian and populist governments have leveraged digital media’s features to advance their political objectives. The calculated manipulation of digital platforms by these actors serves as a conduit for amplifying narratives that bolster their policies, worldviews, and perspectives. Authoritarian governments utilize digital censorship and surveillance to suppress dissenting voices and exert control over digital narratives. Populist leaders, in turn, harness the immediacy and interactive nature of social media to establish direct, emotional connections with their constituents, bypassing traditional gatekeepers (Perloff, 2021). By capitalizing on the resonance of online platforms, these actors perpetuate narratives that exploit societal grievances, positioning themselves as advocates for the marginalized while vilifying opposing viewpoints (Postill, 2018).

                        A Specific, International SDIO: Sharp Power

                        In the international arena, SDIOs are transformed into tools of geopolitical orchestration and influence projection. In this context, digital strategies serve as instruments designed to strike a chord with international audiences. They sow seeds of social and political division in the countries that perpetrators seek to destabilize. These efforts generate support for both the domestic and foreign policy objectives of the perpetrators, often exceeding the boundaries of the conventional notion of soft power and giving rise to what is termed “sharp power” (Walker, 2018). This variant of influence extends beyond the benign strategies commonly associated with “soft power,” taking on a more coercive character in which “it seeks to pierce, penetrate, or perforate the political and information environment” (Walker, 2018: 12; Fisher, 2020; Elswah and Alimardani, 2021).

                        The emergence of “sharp power” denotes a significant shift in the dynamics of external influence, as digital platforms are used to coercively reshape geopolitical interactions between major powers such as the US, China, and Russia, as well as middle powers such as Australia, Turkey, and Egypt. For example, over the last decade, Australia’s public authorities, media entities, and civil society organizations have been systematically targeted by Chinese sharp power operations that included lavish donations to the campaigns of useful political candidates, harassment of journalists, and spying on Chinese students on university campuses (The Economist, 2017).

                        Social Impacts of SDIOs

                        The study of strategic information operations is not new: scholars have noted US and Soviet attempts to influence each other’s information environments since the start of the Cold War (see Martin, 1982). Nonetheless, we note that strategic information operations have been examined mostly in two fields of study, military influence and social media analysis, with the political science literature mostly discussing elements of the concept without fully operationalizing it.

                        On the one hand, scholars working within military studies have rightly pointed out the strategic reasoning of information operations for international politics (see Rattray, 2001; Kania and Costello, 2018). For example, Kania and Costello (2018: 105) showed how the creation of the Strategic Support Force within the Chinese army structure was aimed at “dominance in space, cyberspace, and the electromagnetic domain,” thus generating synergy among these three domains, and building capacity for strategic information operations. States have also been manipulating the information environment to influence the internal affairs of their adversaries for decades. This has led to discussion of information operations as a potential threat to national security and stability (Hatch, 2019). 

                        On the other hand, those working on social media analysis have tried to explain how these information operations are carried out in social media environments. Researchers have identified the technical means through which sophisticated tools of manipulation have been deployed on platforms such as Twitter and Facebook, leading to the spread of dis/misinformation (see Starbird et al., 2019). Among other things, this literature has also helped us understand why certain pieces of information resonate with users and generate a response (for example, content that is surreal, exaggerated, emotional, persuasive, or clickbait-style, and posts featuring shocking images, tends to generate more engagement).

                        The political science literature has noted various ways in which specific forms of mis/disinformation have affected political discussions, mostly in democratic countries, without utilizing SDIOs as an umbrella term. In democratic contexts, the rapid dissemination of misinformation and divisive narratives poses a substantial threat, corroding informed decision-making and hindering the robust exchange of ideas. Trust, a cornerstone of functional democracies, becomes fragile as manipulation proliferates, eroding institutional credibility and undermining the fundamental tenets of democratic governance. For example, in the US, the Russian information operations around the 2016 Presidential Elections targeted key political institutions such as the political parties, Congress, and the courts through hacking, manipulative messaging, and social media campaigns, leading to an erosion of American citizens’ trust in these institutions (see Benkler et al., 2018).

                        While the literature has covered such issues, we note that social aspects have not received as much discussion so far. We have seen that SDIOs create significant social impacts in terms of social cohesion, polarization, intergroup relations, and radicalization, to name just a few. However, the literature’s discussion of these concepts has been limited to technical or political aspects. For example, when scholars examine polarization, they either try to demonstrate how these operations polarize discourse on the internet, or they focus on political polarization (e.g., between the left and the right, or the majority and minorities) (e.g., Howard et al., 2018; Neyazi, 2020), while overlooking wider societal polarization and its corrosive effects. Moreover, we need further investigation into how social media platforms amplify the impact of information operations on group dynamics; specifically, whether the content on social media exacerbates polarization and reinforces group identities. This is premised on the fact that the impact of SDIOs extends beyond individual psychology, permeating the collective fabric of societies and democratic institutions. By exploiting digital platforms, these operations can foster polarization, exacerbate existing divisions, and undermine the foundations of social cohesion.

                        Impacts of SDIOs on Individual and Collective Emotions


                        In the context of social issues, an important underexplored aspect is the emotional dimension. The SDIOs aim to provoke a wide range of emotions among their targets, including negative, positive, and ambivalent feelings. They aim to generate these emotional responses to achieve various political goals such as gaining support for their political causes, undermining opposing groups, eroding trust in society, marginalizing minority groups, and making people question the credibility of independent media outlets. These operations are usually planned to trigger specific emotional reactions that align with the intentions of the perpetrators. For example, Ghanem et al. (2020) found that the propagation of fake news in social media aims to manipulate the feelings of readers “by using extreme positive and negative emotions, triggering a sense of ‘calmness’ to confuse the readers and enforce a feeling of confidence.” However, we need further research to understand how such emotional responses generate social impacts such as intergroup resentment, xenophobic fear, and anger, potentially leading to societal dissent and upheaval. Conversely, positive emotions like empathy and camaraderie can foster social unity and rally support around social causes. Therefore, the strategic coordination of emotional experiences stands as an important dimension of SDIOs that needs further research.

                        The final underexplored area we want to emphasize pertains to the content of strategic narratives, including the social and political reasons behind their resonance within target societies. For example, in addition to the content of conspiracy narratives, new research needs to identify why and how certain narratives work in specific social contexts and not in others. Research needs to investigate how historical events, cultural norms, and collective memories shape the reception and resonance of strategic narratives. For instance, narratives that invoke historical grievances might gain traction in societies with unresolved historical conflicts. Further research can explore how strategic narratives tap into individuals’ sense of identity and belonging. Narratives that align with or reinforce a group’s identity can gain more resonance, as they validate existing beliefs and foster a sense of unity. 

                        Conclusion

                        In this paper, we introduced the concept of Strategic Digital Information Operations (SDIOs), discussed the tactics and practices of SDIOs, explained the main political goals of state and non-state actors in engaging with SDIOs at home and abroad, and presented avenues for new research. We highlighted that the concept of SDIOs presents a useful framework for discussing all forms of digital manipulation at both domestic and international levels, organized by either state or non-state actors. We noted that while the literature has examined the military-political impacts of SDIOs, we still know little about the societal issues that SDIOs influence, such as intergroup relations, social cohesion, trust, and emotional resonance among target audiences.

                        Understanding how audiences perceive and react to these operations forms the foundation for generating effective countermeasures against the harmful impacts of SDIOs. Initiatives aimed at promoting digital literacy, critical thinking, and the ability to discern media authenticity will empower individuals to navigate the potentially deceptive terrain of manipulated information. Additionally, creating transparency and accountability in the algorithms that digital platforms rely on, along with dedicated fact-checking initiatives, will enhance the tools necessary to distinguish between truth and deceit. Furthermore, collaborative efforts involving governments, technology companies, and civil society entities can serve as a strong defense against the corrosive effects of manipulation, safeguarding the integrity of democratic discourse and the informed participation of citizens.

                        Finally, we note that the examination of SDIOs demands a comprehensive range of methodologies drawn from various disciplines, including quantitative and qualitative analyses that reveal patterns of engagement and shifts in emotions, trace the pathways of information dissemination, and map networks of influence. Ethnographic investigations that delve into the personal experiences of participants can provide a human-centred perspective, showing the psychological, emotional, and cognitive dimensions of manipulation. Effective collaboration among technology experts, academic scholars, and policymakers can foster a deeper understanding of how digital operations work and generate influence.


                        Funding: This research was funded by Gerda Henkel Foundation, AZ 01/TG/21, Emerging Digital Technologies and the Future of Democracy in the Muslim World.


                        (*) Dr. Shahram Akbarzadeh is Convenor of Middle East Studies Forum (MESF) and Deputy Director (International) of the Alfred Deakin Institute for Citizenship and Globalisation, Deakin University (Australia). He held a prestigious ARC Future Fellowship (2013-2016) on the Role of Islam in Iran’s Foreign Policy-making and recently completed a Qatar Foundation project on Sectarianism in the Middle East. Professor Akbarzadeh has an extensive publication record and has contributed to the public debate on the political processes in the Middle East, regional rivalry and Islamic militancy. In 2022 he joined Middle East Council on Global Affairs as a Non-resident Senior Fellow. Google Scholar profile: https://scholar.google.com.au/citations?hl=en&user=8p1PrpUAAAAJ&view_op=list_works Twitter: @S_Akbarzadeh  Email: shahram.akbarzadeh@deakin.edu.au

                        (**) Dr Galib Bashirov is an associate research fellow at Alfred Deakin Institute for Citizenship and Globalization, Deakin University, Australia. His research examines state-society relations in the Muslim world and US foreign policy in the Middle East and Central Asia. His previous works have been published in Review of International Political Economy, Democratization, and Third World Quarterly. Google Scholar profile: https://scholar.google.com/citations?user=qOt3Zm4AAAAJ&hl=en&oi=ao  Email: galib.bashirov@deakin.edu.au


                        References

                        — (2017). “How China’s “sharp power” is muting criticism abroad.” The Economist, December 14, 2017. https://www.economist.com/briefing/2017/12/14/how-chinas-sharp-power-is-muting-criticism-abroad

                        Bakir, V. & McStay, A. (2018). “Fake news and the economy of emotions: Problems, causes, solutions.” Digital Journalism, 6(2), 154-175.

                        Benkler, Y., Faris, R. & Roberts, H. (2018). Network propaganda: Manipulation, disinformation, and radicalization in American politics. Oxford University Press.

                        Chen, X., Valdovinos Kaye, D. B. & Zeng, J. (2021). “#PositiveEnergy Douyin: Constructing ‘playful patriotism’ in a Chinese short-video application.” Chinese Journal of Communication, 14(1), 97-117.

                        Dawoud, Khaled. (2023). “Power cuts in Egypt: A political liability for Sisi ahead of the upcoming elections.” Middle East Institute. https://www.mei.edu/publications/power-cuts-egypt-political-liability-sisi-ahead-upcoming-elections (accessed on September 8, 2023).

                        Elswah, M. & Alimardani, M. (2021). “Propaganda Chimera: Unpacking the Iranian Perception Information Operations in the Arab World.” Open Information Science, 5(1), 163-174.

                        Fisher, A. (2020). Manufacturing Dissent: The Subtle Ways International Propaganda Shapes Our Politics (Doctoral dissertation, The George Washington University).

                        Ghanem, B., Rosso, P. & Rangel, F. (2020). “An emotional analysis of false information in social media and news articles.” ACM Transactions on Internet Technology (TOIT), 20(2), 1-18.

                        Hatch, B. (2019). “The future of strategic information and cyber-enabled information operations.” Journal of Strategic Security, 12(4), 69-89.

                        Howard, P. N., Ganesh, B., Liotsiou, D., Kelly, J. & François, C. (2018). “The IRA, social media and political polarization in the United States, 2012-2018.” Project on Computational Propaganda. https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2018/12/The-IRA-Social-Media-and-Political-Polarization.pdf  

                        Ingram, H. J. (2015). “The strategic logic of Islamic State information operations.” Australian Journal of International Affairs, 69(6), 729-752.

                        Iwuoha, V. C. (2020). “‘Fake News’ and ‘Half-truths’ in the Framings of Boko Haram Narrative: Implications on International Counterterrorism Responses.” The International Journal of Intelligence, Security, and Public Affairs, 22(2), 82-103.

                        Kania, E. B. & Costello, J. K. (2018). “The strategic support force and the future of Chinese information operations.” The Cyber Defense Review, 3(1), 105-122.

                        Martin, L. J. (1982). “Disinformation: An instrumentality in the propaganda arsenal.” Political Communication, 2(1), 47-64.

                        Mason, Max. (2022). “The price of propaganda: Inside Beijing’s mission to drown out dissent.” AFR. December 6, 2022. https://www.afr.com/technology/the-price-of-propaganda-inside-beijing-s-mission-to-drown-out-dissent-20220905-p5bfdv (accessed on September 8, 2023).

                        Michaelson, Ruth. (2018). “’No puppet’: last challenger in Egypt’s election says he’s more than a veneer of democracy.” The Guardian. March 20, 2018. https://www.theguardian.com/world/2018/mar/20/egypts-ghad-party-leader-makes-discreet-bid-for-presidency (accessed on September 8, 2023).

                        Mir, Asfandyar, Tamar Mitts and Paul Staniland. (2022). “Political Coalitions and Social Media: Evidence from Pakistan.” Perspectives on Politics, 1-20.

                        Neyazi, T. A. (2020). “Digital propaganda, political bots and polarized politics in India.” Asian Journal of Communication, 30(1), 39-57.

                        Perloff, R. M. (2021). The dynamics of political communication: Media and politics in a digital age. Routledge.

                        Postill, J. (2018). “Populism and social media: a global perspective.” Media, Culture & Society, 40(5), 754-765.

                        Rattray, G. J. (2001). Strategic warfare in cyberspace. MIT Press.

                        Starbird, K., Arif, A. & Wilson, T. (2019). “Disinformation as collaborative work: Surfacing the participatory nature of strategic information operations.” Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1-26.

                        Theohary, C. A. (2011). Terrorist use of the internet: Information operations in cyberspace. Diane Publishing.

                        Tokdoğan, N. (2020). “Reading politics through emotions: Ontological ressentiment as the emotional basis of current politics in Turkey.” Nations and Nationalism, 26(2), 388-406.

                        Walker, C. (2018). “What Is ‘Sharp Power’?” Journal of Democracy, 29(3), 9-23.

                        Walker, S. (2017). “Russian troll factory paid US activists to help fund protests during election.” The Guardian. October 17, 2017. https://www.theguardian.com/world/2017/oct/17/russian-troll-factory-activistsprotests-us-election (accessed on September 8, 2023).

                        Yilmaz, I. (2021). Creating the Desired Citizen: Ideology, State and Islam in Turkey. Cambridge University Press.

                        Yilmaz, I. & Albayrak, I. (2021). “Instrumentalization of Religious Conspiracy Theories in Politics of Victimhood: Narrative of Turkey’s Directorate of Religious Affairs.” Religions, 12(10), 841.

                        Yilmaz, I. & Demir, M. (2023). “Manufacturing the Ummah: Turkey’s transnational populism and construction of the people globally.” Third World Quarterly, 44(2), 320-336.

                        Yilmaz, I. & Shipoli, E. (2022). “Use of past collective traumas, fear and conspiracy theories for securitization of the opposition and authoritarianisation: the Turkish case.” Democratization, 29(2), 320-336.