The emergence of repressive and authoritarian “hybrid regimes” poses one of the most significant threats to democracy today. These regimes and authoritarian actors wield information suppression and manipulation as essential tools to disseminate narratives that erode democratic institutions. This issue transcends national borders; digital technologies now enable authoritarian states to infiltrate robust democracies, allowing them to project their authoritarian narratives globally. The transnationalization of authoritarian politics, facilitated by digital technologies, presents substantial challenges to the integrity of democratic processes and institutions.
In response to these challenges, a workshop was organized on November 7-8, 2024, as a collaborative effort between the Alfred Deakin Institute for Citizenship and Globalisation (ADI) at Deakin University, Australia, and the European Center for Populism Studies (ECPS) in Brussels, Belgium, with the organizing team led by Professor Ihsan Yilmaz. The workshop investigated how various actors—governments, non-state organizations, state-sponsored entities, and political parties—suppress and manipulate information to erode trust in democratic processes, both domestically and internationally. It also examined the darker dimensions of social media, focusing on the interactions between misinformation, negativity, and polarization.
Moreover, the workshop addressed strategies to counter misinformation and disinformation, along with intervention techniques to mitigate their impacts. It also focused on countering disinformation through activism and explored everyday online experiences with misinformation, emphasizing the importance of evidence-based media literacy education initiatives. Additionally, the event discussed necessary curricular reforms to combat disinformation, toxicity, and polarization in educational contexts, as well as the responses of political elites to conspiracy theories.
The aim of the workshop, funded by the Australian Political Studies Association (APSA), the Australian Research Council (ARC), and the Gerda Henkel Foundation, was to deepen the understanding of these critical issues and to explore collaborative strategies for combating misinformation and disinformation in our increasingly complex digital environment.
Round Table 1 – Foreign Interference Campaigns on Social Media: Insights from Field Theory and Computational Social Science
Keynote by Dr. Robert Ackland (Professor, The Australian National University)
Round Table 2 – Manipulating Truth: Authoritarian Strategies of ‘Attention Bombing’ and ‘Epistemic Modulation’ in Hybrid Media Systems
Keynote by Dr. Timothy Graham (Associate Professor, Queensland University of Technology)
Round Table 3 – The Dark Side of Social Media: Misinformation, Negativity, and Polarization
Keynote by Dr. Jason Weismueller (Assistant Professor, University of Western Australia)
Round Table 4 – The Influence of Familiarity and Identity Relevance on Truth Judgements
Keynote by Dr. Li Qian Tay (Postdoctoral Fellow, The Australian National University)
Round Table 5 – Countering State-Sanctioned Information Operations: The #FreeYouth Movement in Thailand
Keynote by Dr. Aim Sinpeng (Associate Professor, The University of Sydney)
Round Table 6 – Investigating Everyday Online Experiences with Misinformation and Responding with Evidence-Informed Media Literacy Education Initiatives
Keynote by Dr. Tanya Notley (Associate Professor, Western Sydney University)
Round Table 7 – Reforming the Curriculum to Counter Disinformation, Toxicity, and Polarization
Keynote by Dr. Mathieu O’Neil (Professor, The University of Canberra; Honorary Associate Professor, The Australian National University)
Round Table 8 – Ignore, Rebut or Embrace: Political Elite Responses to Conspiracy Theories
Keynote by Dr. Zim Nwokora (Associate Professor, Deakin University)
and
Disinformation in the City Response Playbook
Keynote by Dr. Jessica (Ika) Trijsburg (Research Fellow in City Diplomacy, The University of Melbourne)
Closing Remarks
By Dr. Lydia Khalil (Program Director, Transnational Challenges, Lowy Institute)
Explore the insightful discussions from the International Conference on ‘Digital Complexity and Disinformation in the Indo-Pacific,’ held on September 25-26, 2024, in a hybrid format from Melbourne. This conference brought together a diverse coalition of experts, hosted by leading institutions across the Indo-Pacific and Europe, including the Alfred Deakin Institute at Deakin University, Universitas Indonesia, National Research and Innovation Agency (BRIN), Universitas Gadjah Mada, Universitas Muhammadiyah Malang, International Islamic University Malaysia, UIN Salatiga, and the European Center for Populism Studies (ECPS).
The conference delved into how digital technologies, though transformative, have become tools for disinformation, political manipulation, and digital authoritarianism, posing serious challenges to democracy and social unity. This issue is particularly urgent in the Indo-Pacific, where misinformation on platforms like Facebook, Twitter, Instagram, Telegram, and WhatsApp has fueled divisions and where political forces sometimes restrict access to vital digital spaces to consolidate control.
Attendees, including scholars, practitioners, and policymakers, shared perspectives on how digital disinformation affects the region and discussed strategies for promoting digital literacy, inclusivity, and democratic resilience.
Generously supported by the Australian Research Council, Gerda Henkel Foundation, ECPS, and the Alfred Deakin Institute, this conference aimed to foster collaboration and shed light on countering disinformation in today’s digital age.
Don’t miss the opportunity to engage with these compelling sessions by watching the full conference videos online.
Miguel De Vera, Anton; Hamaiunova, Viktoriia; Koleszár, Réka & Pasquettaz, Giada. (2024) “Future Resilience of the European Technology Security.” Policy Papers. European Center for Populism Studies (ECPS). November 4, 2024. https://doi.org/10.55271/pop0004
Abstract
This paper explores vulnerabilities in the European Union’s technological security, focusing on Huawei as a case study to illuminate broader security challenges. Amid intensifying US-China tensions, especially under former US President Donald Trump, the EU encountered new risks linked to the strategic positioning of Chinese tech firms within critical European infrastructure. Trump’s “America First” policy targeted China with tariffs and trade restrictions to address perceived unfair practices, triggering disruptions in global supply chains that reverberated through the EU economy. For Europe, heavily reliant on secure, stable trade flows, these events highlighted the urgency of reassessing technological dependencies and reinforcing digital security. The paper presents a series of strategic recommendations for the EU to mitigate such vulnerabilities, emphasizing the need for diversified supply chains, rigorous security standards for tech partnerships, and collaborative policies among EU members to strengthen resilience in the face of geopolitical shifts and technological competition.
Authored by Anton Miguel De Vera, Viktoriia Hamaiunova, Réka Koleszár & Giada Pasquettaz
Introduction
In an increasingly uncertain geopolitical climate, the European Union (EU) faces the challenge of maintaining its technological resilience while protecting its security and autonomy. The fast-paced international competition for technological leadership is closely tied to the bloc’s economic competitiveness and has consequences for its security. Given the importance of transatlantic cooperation in this domain, the upcoming US elections and the possibility of a second Trump administration should urge policymakers to focus on strengthening the EU’s preparedness. This paper addresses the existing vulnerabilities in the EU’s technological security through the exemplary case of Huawei and outlines recommendations on how to tackle them.
Connectivity, one of the critical technologies of the rapid Fourth Industrial Revolution, has been at the center of heated discussions in recent years. Several nations identified connectivity as an essential part of their competitiveness and development, and Huawei, among others, emerged at the forefront of advanced technologies. The Chinese-owned ICT provider was among the world leaders in rolling out next-generation telecommunication networks worldwide. Within the EU, the choice of 5G providers generated crucial debates. Beyond the obvious economic interests, building telecommunication networks came with important security considerations. As the US-China rivalry intensified under President Trump, the EU faced an important vulnerability.
Donald Trump’s trade war with China, a key component of his “America First” agenda, had significant repercussions for the EU. By imposing tariffs on Chinese goods, Trump sought to counter what he perceived as unfair trade practices by China. This conflict disrupted global trade and impacted the EU’s economy, which is heavily dependent on stable supply chains.
For the EU, the escalating US-China trade tensions presented both challenges and opportunities. While the trade war resulted in market volatility, it also provided Europe with a chance to strengthen its trade relationship with China. The two reached an agreement in principle on a comprehensive agreement on investment (CAI) in 2020, although it was later put on hold due to tit-for-tat sanctions. The prospect of deepening ties with China posed a risk of straining transatlantic relations, particularly as Trump urged European nations to collaborate with the US in pressuring Beijing. Trump’s populist trade policies thus compelled the EU to carefully balance its relationships with both the US and China while prioritizing its own economic and security interests. It is in this context that the debate around Huawei and the EU’s technological security is situated.
The EU’s 5G Rollout: Rhetorical Coercion and Uneven Progress
The European Commission identified the possibilities of 5G early on and adopted an action plan in 2016 to launch 5G services in all member states by the end of 2020 (European Commission, 2024). Although some experts warned that the EU was falling behind in the technological transformation, member states quickly began catching up and published their roadmaps. However, progress was uneven and fragmented (5G Observatory Quarterly Report 2, 2019). At that time, Huawei was in a prime position in the European market to support the 5G rollout and was already working with several European providers. By 2019, the Chinese company had signed memorandums of understanding with wireless providers in at least nine EU countries, including Germany, Spain, and France (5G Observatory, 2021). For many, it seemed evident that for the EU to stay competitive and meet its plans for 5G coverage, Huawei was the answer.
In parallel, however, concerns about the security of Huawei equipment began circulating. Against the backdrop of the escalating trade war between the US and China, the former began prompting allies to exclude Huawei from their networks (Woo & O’Keeffe, 2018). President Trump labelled Huawei a security risk and threatened to cut off intelligence and information-sharing with allies using the ‘untrustworthy’ 5G vendor (Business Standard, 2020).
US Policy towards China under Donald Trump: Framing as a Strategic Tool
Donald Trump’s political rise is often analyzed through the lens of populism and framing theory, both of which help explain his appeal and communication strategies. Populism, broadly defined, refers to a political approach that pits the “common people” against a perceived corrupt elite (Mudde, 2004). Trump’s rhetoric embodies this populist style, as he frequently claims to speak for ordinary Americans against the political establishment. His 2016 campaign, for instance, centered on “draining the swamp” in Washington, positioning himself as an outsider who would challenge entrenched elites. During the 2024 election campaign, he has continued this populist communication by portraying himself as “one of the people,” for example in a recent post in which he worked a single shift at a McDonald’s.
One of the key aspects of Trump’s populism is his use of framing. He uses it not only at the national level to criticize his opponents but also in relation to foreign policy issues. Framing theory, as defined by Entman (1993), involves highlighting certain aspects of a reality while downplaying others, effectively shaping how an issue is understood by the public. Trump’s framing of China is a prime example. Throughout his presidency and during his campaigns, Trump consistently framed China as a threat to American economic interests and national security. By doing so, he shaped public discourse and channeled public frustrations about job losses and trade imbalances into hostility toward China.
A prominent example of Trump’s framing of China came during his trade war with the country. He portrayed China as an “unfair” player in global trade, accusing it of “stealing” American jobs and intellectual property. In a 2019 speech, Trump stated, “China has taken advantage of the United States for many, many years. And those days are over.” This framing was effective in galvanizing his political base, particularly among working-class voters who felt economically marginalized by globalization (Inglehart & Norris, 2016). By framing the issue as a battle between patriotic Americans and a foreign adversary, Trump reinforced his populist credentials.
Trump’s framing of China intensified during the COVID-19 pandemic, where he repeatedly blamed China for the spread of the virus, referring to it as the “China virus” and the “Kung flu” (The New York Times, 2020). By doing so, he shifted public discourse to portray China as responsible not only for the economic challenges faced by the US but also for the public health crisis, a narrative that resonated with many of his supporters.
A notable example of this framing came in March 2020, when Trump tweeted, “The United States will be powerfully supporting those industries, like Airlines and others, that are particularly affected by the Chinese Virus.” This statement, reported widely in the media, sparked accusations of racism and xenophobia (CNN, 2020). However, Trump defended his rhetoric, arguing that it was necessary to hold China accountable for the pandemic’s global spread. His framing successfully linked frustrations over COVID-19 to broader concerns about China’s role in the world economy, feeding into his populist narrative of protecting American interests.
Framing theory is particularly relevant here because it highlights how political actors shape public perception by focusing on certain narratives. As Entman (2007) notes, framing involves selecting some aspects of a perceived reality and making them more salient in communication. Trump’s framing of China as both an economic competitor and a national security threat played a significant role in justifying his tariffs and aggressive foreign policy stance. Moreover, Trump’s use of this frame was amplified by the media, contributing to rising anti-China sentiments in the US (Goffman, 1974).
By framing China as a direct threat to American prosperity, Trump not only advanced his populist message but also reshaped political discourse, making foreign policy a central issue for many voters. Through this, he created the basis of US trade policy against foreign companies deemed as a threat and towards allies who seemed hesitant to follow this approach.
With all this, the EU faced a two-fold dilemma: giving in to Trump’s strategy and losing out on competitiveness while appearing to have little strategic autonomy, or seizing the opportunities with Huawei but straining the transatlantic relationship while potentially endangering critical infrastructure. As of 2024, the EU’s answer has been fragmented and disunited. Only 10 of the 27 member states have excluded Huawei, and although almost all states have put some kind of restrictions in place, only a handful have implemented them (European Commission, 2023a). President Trump’s approach of pressuring allies and threatening to cut off intelligence-sharing may have been counterproductive, but it exposed an important weakness of the EU.
What Next – The Way Forward
With the US elections approaching, the EU has a window of opportunity to address this dilemma. The possibility of a second Trump administration brings the risk of further aggravating US-China ties and putting the EU in an even more uncomfortable position. The war in Ukraine has heightened the EU’s need for, and dependence on, intelligence-sharing with the US. Upcoming challenges in transatlantic relations are likely to have significant repercussions for the EU’s security. At the same time, EU-China relations are at heightened risk of descending into a trade war, as the latest developments around the export of Chinese electric vehicles demonstrate. The economic vulnerability of certain European member states to Chinese pressure adds another dimension to the complexity of achieving a united European approach. Essentially, the EU needs to safeguard its autonomy against unilateral actions while maintaining its competitiveness and ensuring the security of its critical infrastructure. To do that, policymakers should consider the following scenarios and the policy recommendations presented below.
If Trump Wins
First, in the case of a Trump victory, Europeans will have to embrace another period of uncertainty. A second Trump administration would renew concerns about US support for NATO, while protectionist policies would put direct pressure on transatlantic trade relations. President Trump can be expected to continue his previous hardline approach towards China, leading to an intensified trade war and a larger volume of Chinese exports being dumped on the European market. All the while, Europeans would increasingly be pulled into a trade and technology war with the Eastern power amid calls from the US to reduce relations with Beijing. In this scenario, Trump’s rhetorical pressure, as in his earlier calls to exclude Huawei from the 5G rollout as a condition for continued intelligence-sharing, might turn into actual policies. In 2025, this would come at a huge price given the EU’s dependence on American intelligence infrastructure to help Ukraine defend itself against Russia’s war. Any threats must therefore be taken seriously and addressed accordingly.
Beyond that, internally, Trump’s success would galvanize far-right, populist figures and movements. His ideological allies in Europe, such as Hungarian Prime Minister Viktor Orbán, Italian Prime Minister Giorgia Meloni and Polish President Andrzej Duda, would be emboldened to continue on their path after a Trump victory. Far-right, populist politicians would find renewed reassurance to oppose further European integration. Consequently, reaching unity on crucial foreign policy questions might be further hindered.
Faced with the prospect of this challenging situation, European policymakers would do well to address the potential pitfalls early on. Given the foreseeable fragmentation, the EU must strengthen and implement the frameworks it has already agreed upon (such as the 5G Cybersecurity Toolbox and the Digital Services Act). According to the latest assessment of the 5G Toolbox, which was adopted to mitigate security risks, only 10 of the 27 Member States have restricted or excluded high-risk suppliers from their 5G networks (European Commission, 2023b). Based on its own and Member States’ independent analyses, the European Commission considers Huawei, along with another Chinese company, ZTE, to ‘pose materially higher risk than other 5G providers.’ Dependency on these providers for critical infrastructure, which 5G networks are considered to be, creates a serious risk across the Union. Considering the level of interconnectedness between EU networks, a fragmented policy could jeopardize the entire bloc’s security. For instance, last year Hungary’s Minister of Foreign Affairs, Péter Szijjártó, highlighted Hungary’s development of 5G networks with the help of Huawei, alongside the signing of additional cooperation agreements with the company (Szijjártó Péter, 2023).
To address the diverging approaches, the EU should develop a mechanism to actively encourage Member States to implement the existing framework and use the available tools. It should also hold Member States accountable for doing so. Considering the weight of risks in the EU’s technological security, policymakers should call for an EU-wide regulation with clear and urgent deadlines. This would support the EU’s autonomy in making security-related decisions as assessments of risks are done both by Member States and by the European Commission. Transatlantic relations are likely to become more friendly as a result and the EU’s security would increase. One of the downsides of this approach, however, is the expected response from Beijing. China is likely to retaliate for a European policy naming and restricting its companies from the market. Besides, reaching this agreement on a European level will not be easy as Member States’ security priorities and relations with China differ significantly. Nevertheless, this approach offers the EU a starting point to be a proactive actor.
If Harris Wins
If Americans choose a Harris administration for the next four years, the EU would find itself in a position similar to the one it was in during the Biden administration, assuming that Harris takes a similar approach towards China. Despite their opposition to each other, President Joe Biden took an approach similar to that of his Republican predecessor: he ordered heavy tariffs on Chinese imports of high-tech items such as semiconductor chips while diversifying US import sources towards partners such as the EU and Mexico (Davis, 2024; Lovely et al., 2024). In doing so, the United States has become less dependent on China for all types of imported manufactured goods since 2018, according to recently released 2023 customs data (Lovely et al., 2024).
The EU and China, however, “have maintained or increased their reliance on each other for almost all types of imported goods” (Lovely et al., 2024). As such, the EU could potentially clash with the US by maintaining this dependence, which showcases a form of limited autonomy. On the one hand, the EU exercises its agency by choosing to maintain and deepen ties with China. On the other hand, the EU’s agency is somewhat limited by its trade dependency on China, which may compel it to act in Beijing’s favor on certain issues.
A Harris administration would likely maintain the use of tariffs, particularly targeting China, both to counter perceived unfair competition, as emphasized by Trump, and to drive progress in the US energy transition in support of its emissions-reduction goals. This was evident during the presidential debate between Harris and Trump in September 2024. She highlighted Trump’s failed attempt to subdue China as an economic powerhouse, arguing that “under Donald Trump’s presidency, he ended up selling American chips to China to help them improve and modernize their military” (Butts, 2024). She concluded with the statement, “[he] basically sold us out when a policy about China should be in making sure the United States of America wins the competition for the 21st century” (Butts, 2024). This comment indicates to the EU and other US allies that Harris is likely to continue Biden’s approach if she wins the presidential race.
In this scenario, the EU faces a more predictable transatlantic landscape. This, however, may prove more perilous. Although Harris would follow a hardline approach to China, and the pressure on allies not to share advanced technology with Beijing would remain, she is unlikely to push the EU strongly. In contrast to the Trump administration’s coercive rhetoric, she is likely to use softer means of persuasion. This carries the risk that the EU will sit on its hands for too long instead of addressing the legitimate security threats that China poses. To ensure that the resilience of technological security remains a priority, the European Parliament should establish a sub-committee of the Committee on Industry, Research and Energy (ITRE). The sub-committee should deal with the security considerations that come with technologies and equipment from third countries and should ensure that the interests of European citizens are considered in tech security-related questions. This would address the risk of de-prioritization and would contribute to enhanced, more nuanced debates. Considering the viewpoints of Members of the European Parliament directly through the sub-committee could help the European Commission propose regulations that are more likely to enjoy support. The only constraining factor to consider is the budget for setting up the sub-committee, but the importance of this issue should outweigh that.
Conclusion
This paper has highlighted the importance of European technology security and examined the different scenarios European leaders will face after the US presidential election. The example of the 5G rollout in the EU and the debates around using the Chinese company Huawei as the technology provider illustrated the EU’s vulnerability when it comes to maintaining its autonomy and competitiveness in the tech sector. In a rapidly changing global landscape, EU leaders face a crucial dilemma about the way forward. To maintain technological competitiveness, the EU may have no choice but to rely on Chinese partners, while to ensure the continent’s security and stability, it cannot afford to alienate its key transatlantic partner. At the same time, legitimate security risks should not be overlooked or treated as subordinate to trade relations.
This paper offers a concise depiction of the main factors EU leaders should consider as Americans head to the polls. In either scenario, what is crucial for the EU is to be prepared and to engage in collective planning. A second Trump administration is likely to bring about a more hectic and turbulent period. His framing of China as a security threat could lead to more pressure on European allies to cut ties with Beijing, while his victory could galvanize European populists, making it harder to achieve consensus at the European level. To offset this, the paper recommends taking concrete steps to implement the already existing framework and to strengthen the available toolbox. In the case of a Harris victory, the EU can expect reasonable continuity. Perhaps the most important challenge the bloc will face will be finding the impetus to keep the technology security issue in focus. The paper argues that one way to do so would be to set up a dedicated sub-committee within the European Parliament to keep the issue on the agenda and ensure that the interests of European citizens are represented.
Authors’ Biographies
Anton Miguel De Vera is an MA student in International Business and Economic Diplomacy at IMC FH Krems. He previously earned a bachelor’s degree in Philosophy, Politics, and Economics from Central European University in Vienna, where he specialized in International Relations and Economics. His thesis, entitled “The Faces of Philippine Agency in Foreign Affairs: The Philippines and the United States Security Alliances,” examined the dynamics of Philippine agency within the US-Philippine security alliance and its nuanced relationship with China. Currently based in Vienna, Anton works at Raiffeisen Bank International, where he combines his academic expertise with practical experience in finance and international relations.
Viktoriia Hamaiunova is a Ph.D. candidate at Newcastle University (UK), where she investigates the role of legal culture in shaping fair trial standards within ECHR member states, focusing on the integration of mediation into judicial systems to enhance human rights protections. Her research combines doctrinal and non-doctrinal approaches, incorporating thematic analysis and insights from interviews with ECtHR judges to examine how legal culture influences judicial reform and access to justice. Viktoriia Hamaiunova holds an MA in International Law and Human Rights from the University of Tartu, enriched by academic exchanges at Masaryk University and Comenius University. Her legal career includes in-house experience and ECtHR traineership. An accredited mediator and published author, Viktoriia Hamaiunova has presented her work at prominent conferences, including SLSA Annual Conference and the Human Rights Law Conference at the University of Cambridge. With extensive teaching experience, she leads discussions on topics spanning international law to mediation practices. As an interdisciplinary researcher, Viktoriia Hamaiunova is committed to culturally informed legal reforms, fostering development and facilitating discussions on effective judicial systems and dispute resolution.
Réka Koleszár is an independent researcher focusing on the relations between the European Union and Asia, in particular East Asia. Her experience spans international organizations and think tanks, including work for the Council of the European Union and the European Policy Centre. Réka holds an MSc in Political Science from Leiden University, an MA in International Relations specializing in East Asian studies from the University of Groningen, and a diploma in the Art of Diplomacy from the European Academy of Diplomacy.
Giada Pasquettaz has been a doctoral student at the Chair of Political Science and International Politics of Prof. Dr. Dirk Leuffen since October 2023. Her interests lie mainly in political communication, international relations, political behavior, comparative politics and quantitative methods. She holds a master’s degree in mass media and politics, with a focus on international social movements’ communication, from the University of Bologna. She also completed her bachelor’s degree in Sociology at the University of Bologna with a specialization in migration frames used in media. She completed semesters abroad at the University of Sundsvall (Sweden), at UCLouvain (Belgium) and at UIT Tromsø (Norway).
Lovely, M. E. & Yan, J. (2024, August 27). “While the US and China decouple, the EU and China deepen trade dependencies.” PIIE. https://www.piie.com/blogs/realtime-economics/2024/while-us-and-china-decouple-eu-and-china-deepen-trade-dependencies
Inglehart, R., & Norris, P. (2016). “Trump, Brexit, and the rise of populism: Economic have-nots and cultural backlash.” Harvard Kennedy School Faculty Research Working Paper Series.
Trump, D. J. [@realDonaldTrump]. (2020, March 16). The United States will be powerfully supporting those industries, like Airlines and others, that are particularly affected by the Chinese Virus [Tweet]. X. https://x.com/realDonaldTrump/status/1239685852093169664
We are pleased to announce that the International Conference on ‘Digital Complexity and Disinformation in the Indo-Pacific’ is scheduled to take place on September 25-26, 2024, both online via Zoom and in person in Melbourne. This significant conference is a collaborative effort organized by the Alfred Deakin Institute for Citizenship and Globalisation (ADI) at Deakin University, Australia; the Department of Communication Sciences at Universitas Indonesia (UI); the Institute of Social Sciences and Humanities at the National Research and Innovation Agency (BRIN); the Faculty of Social and Political Sciences at Universitas Gadjah Mada (UGM); the Faculty of Social and Political Sciences at Universitas Muhammadiyah Malang (UMM); the Department of Political Science at Kulliyyah of Islamic Revealed Knowledge and Human Sciences, International Islamic University Malaysia (IIUM); the State Islamic University (UIN) Salatiga; and the European Center for Populism Studies (ECPS) in Brussels, Belgium.
While digital technologies have revolutionized many aspects of our societies, the promises of inclusivity and progress they bring often do not align with the realities on the ground. These technologies are increasingly being exploited as tools for disinformation, political manipulation, and even digital authoritarianism, posing significant challenges to democratic values and social cohesion.
The Indo-Pacific region is particularly susceptible to these challenges, as disinformation and misinformation spread rapidly across digital platforms such as Facebook, Twitter, Instagram, Telegram, and WhatsApp, exacerbating societal divisions. Moreover, political actors often leverage these platforms to silence criticism, control information flows, and even restrict access to critical digital infrastructure to consolidate their power.
The discussions during this international conference will explore the complex interactions between digital technologies, cyberspace, social media platforms, and political dynamics in the region. Scholars, practitioners, and policymakers from various institutions will offer insights into the impacts of digital disinformation and explore pathways to counter these challenges while promoting digital literacy and inclusivity.
This conference is made possible through the generous support of the Australian Research Council (ARC) via a Discovery Project grant, the Gerda Henkel Foundation, the European Center for Populism Studies (ECPS), and the Alfred Deakin Institute (ADI), all of which are committed to fostering academic inquiry into this pressing global issue. The conference aims to serve as a platform for fruitful discussions and meaningful collaboration, enabling us to better understand digital complexity and its implications for democracy in the Indo-Pacific.
Kenes, Bulent & Yilmaz, Ihsan. (2024). “Digital Authoritarianism and Religious Populism in Turkey.” Populism & Politics (P&P). European Center for Populism Studies (ECPS). September 14, 2024. https://doi.org/10.55271/pp0042
Abstract
This article explores the interplay between religious populism, religious justification and the systematic attempts to control cyberspace by the Justice and Development Party (AKP) in Turkey. Drawing from an array of scholarly sources, media reports, and legislative developments, the study unravels the multifaceted strategies employed by the ruling AKP to monopolize digital media spaces and control the information published, consumed and shared within these spaces. The narrative navigates the evolution of the AKP’s tactics, spotlighting the fusion of religious discourse with state policies to legitimize stringent control mechanisms within the digital sphere. Emphasizing the entwinement of Islamist populism with digital authoritarianism, the article provides evidence of the strategic utilization of religious platforms, figures, and media outlets to reinforce the narrative of digital authoritarianism as a protector of Islamic values and societal morality. Key focal points include the instrumentalization of state-controlled mosques and religious institutions to propagate government narratives on digital media censorship, alongside the co-option of religious leaders to endorse control policies. The article traces the rise of pro-AKP media entities and the coercive tactics used to stifle dissent, culminating in the domination of digital spaces by government-aligned voices. Furthermore, the analysis elucidates recent legislative endeavors aimed at further tightening the government’s grip on social media platforms, exploring the potential implications for free speech and democratic discourse in the digital realm.
Keywords: Digital Authoritarianism, Religious Populism, Media Control, Islamism, Digital Governance, Cyberspace, Fatwas, Sermons
The rise of religious populism and authoritarianism marks Turkey’s political trajectory under Erdoganism, in which the ruling Justice and Development Party (AKP) has transformed the nation’s governance since 2002. The aftermath of Kemalism brought with it a paradoxical quest for modernization within a less-than-democratic framework. The AKP’s ascent heralded a shift: the party initially projected pro-democratic sentiments but is now defined by authoritarian leanings akin to those of the Kemalist regime. This metamorphosis mirrors global trends that have witnessed authoritarian governance seeping into democratic systems.
The distinctiveness of Erdoganism lies in its merging of Islamist populism into Turkey’s political fabric, fostering electoral authoritarianism, neopatrimonialism, and populism. AKP leader and Turkish President Recep Tayyip Erdogan’s centralized authority converges the Turkish state, society, and governmental institutions, perpetuating a widespread sense of uncertainty, fear, and trust in a strong leader that bolsters authoritarianism. The dynamics of religion, state, and identity construction redefine Turkey’s sociopolitical landscape, with governmental activities aimed at constructing a ‘pious generation’ while diminishing voices of dissent (Yabanci, 2019).
The political landscape in Turkey, particularly under the rule of the AKP, has witnessed a discernible shift marked by increasingly stringent measures against various segments of society. This trend notably encompasses a wide spectrum of individuals, including political opposition factions, minority groups, human rights advocates, academics, journalists, and dissenting voices within civil society (Westendarp, 2021; BBC News, 2020; BBC News, 2017a; Homberg et al., 2017).
Statistics paint a stark picture of the government’s crackdown: alarmingly, more than 150,000 individuals have been dismissed from their positions, while over 2 million people have become subjects of “terrorism investigations” following the 2016 coup attempt in the country (Turkish Minute, 2022). Furthermore, approximately 100,000 arrests have been documented since the onset of these measures in 2016. The widespread erasure of oppositional or critical voices – real or potential – extends beyond the targeting of individuals to encompass entire institutions. Academic institutions have borne the brunt of this oppressive regime, with more than 3,000 educational establishments closed and 6,000 scholars dismissed. The media sector has also suffered a significant blow: 319 journalists have been arrested and 189 media outlets forcibly shut down, signaling a profound attack on free speech and the press. The legal profession has been targeted as well, losing some 4,500 legal professionals (Turkey Purge, 2019).
Moreover, the AKP’s influence has transcended national borders, impacting Turkish citizens living in diasporas around the world. Instances of extradition of members of the Turkish diaspora on charges related to terrorism or alleged connections to security threats have been reported, highlighting the government’s efforts to exert control beyond its territorial boundaries. This phenomenon has led to the perception of the government as possessing “long arms,” capable of reaching, influencing, and punishing individuals even when living outside the country (Edwards, 2018).
The evolution of Turkey’s digital landscape since 2016 reveals a pronounced shift marked by intensified security protocols and offline repressions. A critical assessment conducted by Freedom House, evaluating global internet freedom between 2016 and 2020, highlights a concerning and tangible decline in internet freedom in Turkey, which significantly intensified following the failed coup attempt in 2016. Notably, the classification of internet freedom as being “not free” underscores the severity of limitations imposed during these years (Daily Sabah, 2021a, 2021b; World Bank, 2021).
Pervasive Online Presence of Turkish Citizens
Despite this lack of freedom, statistics highlight the pervasive influence of the internet within Turkish society (World Bank, 2021). A study from the first quarter of 2021 indicated that over 80 percent of internet users were consistently active online during that quarter, highlighting the integral role the internet plays in the lives of citizens (Daily Sabah, 2021). Data reports tracking the internet usage of Turkish citizens suggest that in early 2024, internet penetration in Turkey reached its highest level, at 86.5 percent (Kemp, 2024). These findings paint a picture of sustained and pervasive digital engagement within the populace.
Social media figures further underscore this influence, with users spending an average of 7 hours and 29 minutes online per day (Bianet, 2020). By January 2024, the number of social media users in the country stood at 57.5 million, or nearly 70 percent of the total population (Kemp, 2024). Social media platforms, including Facebook, YouTube, WhatsApp, Instagram, TikTok, Snapchat, and Twitter, account for this considerable online presence (Bianet, 2020).
Crucially for this discussion, this digital landscape has become a vital arena for dissenting voices, particularly as traditional media outlets witness declining audience numbers.
Consequently, the internet has emerged as a potent tool for voices of opposition within Turkey. In response to the increased possibilities for these voices in an increasingly online society, the AKP government has initiated various regulatory and surveillance measures aimed at controlling and monitoring the digital sphere, reflecting efforts to suppress dissenting narratives and oppositional voices (Bellut, 2021). Their efforts at digital governance reflect and intensify the government’s broader strategy of curtailing dissent across various levels of society.
The AKP’s Use of Religion to Legitimize a Digital Authoritarian Agenda
The intertwining of religion and state under the AKP’s governance has legitimized and fortified its digital authoritarianism. For example, a recent trend reveals the government’s adept use of Islamic discourse to rationalize the imposition of censorship and crackdowns on online opposition, portraying control over digital technology as a safeguard for Turkish values and moral rectitude. The strategic operationalization of religious values as a legitimizing force for digital authoritarianism is highly indicative of the AKP government’s efforts at consolidating power and suppressing opposition within the online sphere, profoundly shaping the contours of digital discourse and expression in Turkey.
Central to this strategy is the dissemination of Islamic values through state-managed religious institutions, traditional media, and social media platforms, all serving as conduits for aligning public sentiment with the government’s digital autocratic agenda. The propagation of Islamic tenets has been instrumental in molding public opinion to favor the government’s stringent and increasingly authoritarian approach to digital governance. In an effort to increase legitimacy and garner wider support, religious leaders and organizations have been strategically co-opted to support the government’s digital authoritarian agenda.
The cumulative effect of the integration of religion and digital governance has created a pervasive climate of censorship and self-censorship online. Individuals are discouraged from expressing dissenting views or disseminating information that could be perceived as contradictory to religious principles. This climate of caution and apprehension consequently serves to inhibit free expression and discourse within the digital realm, by not only fortifying the government’s authoritarian stance but also influencing the behavioral patterns of online users, curtailing the free flow of information and divergent opinions.
By adopting an interdisciplinary approach encompassing political science, religious studies, media analysis, and socio-political discourse, the paper aims to provide a comprehensive and empirically informed understanding of how religious justification has been systematically employed to legitimize methods of controlling voices of dissent online and foster a pro-AKP narrative in Turkey’s digital governance landscape.
This analysis will contribute to a deeper comprehension of the complex interplay between religion, politics, and digital authoritarianism in contemporary Turkey. This study will highlight how the ruling AKP fuses religion with the state’s digital agenda. It will also demonstrate the party’s reliance on a network of religious platforms, figures, and media to reinforce the narrative of digital authoritarianism as a means of upholding Islamic values and protecting societal morality. The confluence of religious influence and governmental objectives, it will be argued, serves to shape public opinion and garner support for stringent control measures within the digital realm.
Religious Populism of Erdoganism and the AKP’s Authoritarianism
Since its formation in 1923, Turkey has never been perceived as a highly democratic country from the perspective of Western liberalism. Its initial phase featured a form of national reconstruction, moving from the worn-out final centuries of the Ottoman Empire, which had suffered a humiliating defeat at the hands of the Allied forces in World War One (WWI), toward the Republic. The Young Turks, who later became the Kemalists, set the country on a path of reform driven by paradoxical ideas of modernization. While the country moved from a centuries-old monarchy to a parliamentary system, it remained far from democratic (Yilmaz, 2021a). Between 1923 and 1946, Turkey was ruled by the Kemalist Republican People’s Party (CHP) alone. Even after the commencement of multi-party elections, the Turkish political and institutional landscape continued to be dominated by Kemalists until the AKP rose to power. The only exception was a brief period between 1996 and 1997, when Necmettin Erbakan and his right-wing, Milli Gorus (National View)-inspired Welfare Party (RP) held office (Yildiz, 2003).
The transition from Kemalism to Erdoganism, President Erdogan’s political ideology, was meticulously orchestrated, consolidating the state narrative and silencing opposing voices. The AKP initiated significant constitutional changes, starting with a referendum aimed at removing the Kemalist judiciary from power, and the Ergenekon and Sledgehammer trials which targeted key Kemalist military figures (Kuru, 2012: 51). Although these trials did not conclusively prove the accused’s ‘anti-state’ intentions, they significantly swayed public opinion against Kemalist control of the judiciary and military.
The 2010 Turkish constitutional referendum delivered an overwhelming victory for the AKP, which sought increased control over the judiciary and military (Kalaycioglu, 2011). The outcome expanded parliamentary and presidential authority over appointments to the Constitutional Court and the Supreme Board of Judges and Prosecutors (HSYK), enabling the AKP government to install its own appointees. This marked the end of Kemalist dominance in these institutions and paved the way for AKP influence – and an increasingly authoritarian agenda.
The AKP’s authoritarianism is distinguished from Kemalism by its adept blending of Islamist populism into its political discourse and agenda. While Kemalists championed secularism and Turkish nationalism, Erdoganists espouse an iron-fisted Islamist ideology rooted in the legacy of the Ottoman Empire. This has birthed a new form of autocracy known as “Erdoganism” (Yilmaz & Bashirov, 2018), characterized by four pivotal elements: electoral authoritarianism, neopatrimonialism, populism, and Islamism (Yilmaz & Turner, 2019; Yilmaz & Bashirov, 2018).
The socio-political landscape of Turkey has declined rapidly, from an initially promising image of democratization to an authoritarian posture of governance following the AKP’s ascent in 2002. The AKP’s transition from a seemingly pro-democracy party to an authoritarian one has come to resemble the Kemalist tradition of violating democratic freedoms and rights (Yilmaz & Bashirov, 2018). The public presence of the military and arbitrary crackdowns and arrests are now normalized activities of the Turkish state.
Erdogan’s dominant persona has resulted in the centralization of power around his leadership. This was particularly evident following the 2017 Constitutional Referendum, which transitioned the country into a presidential system. Under this concentration of power, Erdoganism brought about an assimilation of the Turkish nation, state, and its economic, social, and political institutions (Yilmaz & Bashirov, 2018). By positioning himself as a referent object, Erdogan reinforces his grip on power while redefining the contours of Turkish identity, politics, and, as will be developed in this paper, the relationship between religion and the state (Yilmaz, 2000; Yilmaz, 2008; Yilmaz et al., 2021a; Yilmaz & Erturk, 2021, 2022; Yilmaz et al., 2021b).
Co-opting of Religious Authorities and the Diyanet to Support AKP’s Authoritarian Agenda
President Erdogan has solidified the politico-religious ideology of Erdoganism by fostering a close alliance with Turkey’s official religious authority, the Directorate of Religious Affairs (Diyanet). Initially established in 1924 by the Kemalist regime to centralize religious activities and advocate for a ‘secular’ form of Turkish Islam, the Diyanet’s role has significantly expanded since the ascent of the AKP and has transformed to accommodate the party’s political Islamist identity.
This relationship is reflected in the increased budget allocated to the religious authority. The Turkish government’s 2023 budget proposal notably elevated the Diyanet’s budget by 117 percent (Duvar, 2022). This has translated into substantial funding grants, financial incentives, and heightened prestige for religious leaders and prominent imams. In return, the Diyanet extends its loyalty and political support, including aligning with the AKP on digital policy and governance. President Erdogan strategically appoints pro-government religious figures, such as Ali Erbas, now President of the Diyanet, to influential positions. Erbas, recognized for his religious conservatism, has cultivated a close relationship with President Erdogan and endorsed his call for a new Constitution (Martin, 2021).
Erbas’ conspicuous presence in public and political affairs underscores the intimate rapport between him and Erdogan. For instance, during the inauguration of the new Court of Cassation building, attended by President Erdogan, Erbas led a prayer praising its new location (Duvar, 2021). Additionally, Erbas represented President Erdogan at the funeral of Islamic cleric Yusuf al-Qaradawi, a supporter of Erdogan and the Muslim Brotherhood, in 2022 (Nordic Monitor, 2022). The building of ties between members of the government and the religious organization strengthens Diyanet’s role not just as a religious institution but also as a significant political force.
The Erdogan/AKP government has harnessed religious institutions, in particular mosques, to disseminate its positions and policies to the broader public through sermons, religious teachings, and various activities. A content analysis spanning from 2010 to 2021 reveals that Diyanet-run Friday sermons mirror the political stance of the AKP. These sermons were found to support Turkey’s involvement in the Syrian conflict, while vilifying ‘FETOists’—referring to the Gulen movement accused of terrorism. This analysis showcases how Diyanet employs affective religious rhetoric to endorse Erdogan’s decisions, discourage opposition, vilify perceived adversaries, propagate fear and conspiracies, and divert attention from the government’s shortcomings in areas spanning foreign policy, economics, and beyond (Yilmaz & Albayrak, 2022; Rogenhofer & Panievsky, 2020).
The Diyanet has significantly expanded its media presence since 2010, operating television and radio channels with escalating expenditure on publicity. The organization and its leader Erbas also have an active presence and significant following on social media platforms such as YouTube, Twitter, and Facebook (Yilmaz & Albayrak, 2022). This heightened outreach has effectively filled the void created by the purge of groups like the Gulen movement and critical academic voices, both in the digital sphere and beyond (Yilmaz & Albayrak, 2022; Andi et al., 2020; Parkinson et al., 2014).
The close alliance between the Diyanet and the AKP has seen the past two heads of the organization employ faith-based justifications to support Erdogan’s moral campaign against perceived ‘internal’ and ‘external’ adversaries (Andi et al., 2020; Parkinson et al., 2014). The increasingly stringent control over the digital sphere is justified by the Diyanet with Islamic framing and justification. Thus, the emotionally charged narratives instrumentalized by the AKP (Yilmaz & Bashirov, 2018) have become directly intertwined with the religious directives and stances of the Diyanet (Yilmaz & Albayrak, 2022; Yilmaz et al., 2021a; Rogenhofer & Panievsky, 2020). The Diyanet extends its influence not only within Turkish territories but also among the Turkish diaspora, functioning as an advisor for the AKP in diaspora communities. Consequently, through the transnational reach of the religious organization, the AKP’s authoritarian agenda has transcended national borders.
The Diyanet’s Moral Stance Against Social Media
Under the Presidential system, the President of Diyanet, appointed by Erdogan, wields significant influence as the centralized religious authority in Turkey and globally through its network of mosques (Danforth, 2020). Former President of Diyanet, Mehmet Gormez, openly criticized social media, attributing various societal harms to it. In 2016, Diyanet organized a forum titled “Social Media and the Family in the Context of Privacy,” aligning with the government’s calls for social media control. The forum aimed to emphasize traditional family values and discuss the perceived negative impact of social media on privacy and marriage. Gormez advocated for Diyanet to create a social media catechism, reinforcing the ideological harmony between Diyanet and Erdogan’s regime, consolidating authoritarianism both online and offline (Yilmaz & Albayrak, 2022; Yilmaz et al., 2021a; Danforth, 2020).
Diyanet has also actively engaged in efforts to exert stronger control over social media by publishing a booklet titled “Social Media Ethics,” using Islam as a guiding principle for this framework (Duvar, 2021). In the preface he personally authored, top imam Ali Erbas cautioned readers about the omnipotent governance of God extending to social media activities under Islamic law. Additionally, believers were alerted to the perils of “fake news” and urged to create a “world of truth” (Duvar, 2021; Turkish Minute, 2021).
Moreover, Diyanet’s Friday sermons have increasingly addressed themes related to social media, technology, and morality. On January 17, 2020, a sermon titled ‘Technology Addiction and Social Media Ethics’ was circulated by Diyanet, cautioning people about the dangers of the Internet violating the five fundamental values of Islam. It highlighted that the indiscriminate use of technology poses threats to human health, causes financial losses, erodes human dignity through unethical behaviors, undermines human faith with radical ideologies, and impairs cognitive abilities (Diyanet, 2020).
The Role of Islamic Scholars in Legitimizing the AKP Digital Authoritarian Agenda
Within academia, several pro-AKP Islamic scholars have aligned themselves with the government’s digital authoritarian agenda. Figures like Nihat Hatipoglu and Hayrettin Karaman (Kenes, 2018), associated with the AKP, believe that social media spreads misinformation targeting Turkish national interests and could mislead youth. Since 2016, Karaman, who has advised Erdogan on creating a more Islamist – and less tolerant – society, has frequently accused social media of being used by “anti-Turkey” groups to spread lies (Yeni Safak, 2013). He highlights the dangers of false information being spread on these platforms, claiming that there is no room for rebuttal (Yeni Safak, 2021). A poem written by Karaman supports the AKP’s stance on social media, advocating for increased control to cultivate a “pious youth” and suppress critical remarks aimed at the AKP (Yeni Safak, 2020).
Nihat Hatipoglu, a prominent pro-AKP Turkish academic and theologian, has utilized his ATV show to issue fatwas, cautioning viewers about the potential sins associated with social media usage. For instance, he warns that engaging with “questionable” individuals on these platforms can lead to false rumors and sin, and accountability will come in the afterlife (Akyol, 2016). His messaging is potent in digital governance because it moves beyond conventional vices like alcohol or adultery and highlights the significance of sins associated with online behaviors and consumption, such as false testimonies and envy.
Furthermore, both Karaman and Hatipoglu are openly critical of “Western” media and social platforms, and advocate for Islamic content. Together, they represent a prevalent viewpoint supporting AKP discourse that emphasizes caution and adherence to Islamic principles while engaging with digital platforms.
Digital Authoritarian Measures Against the LGBTQ+ Community
The intersection of religion, politics, and social media in Turkey has also created a complex landscape where certain communities, particularly LGBTQ+ groups, have faced significant challenges. Religious leaders and government officials have used their platforms to vilify LGBTQ+ activists and communities, contributing to a hostile environment for these individuals (Greenhalgh, 2020).
This hostility has deepened significantly with anti-LGBTQ+ messaging from the Turkish leadership. President Erdogan’s agenda has consistently focused on promoting a “pious youth” while openly disapproving of atheists and LGBTQ+ identities as threats to societal and religious values (Gall, 2018). His party has employed rhetoric targeting Western values and certain youth groups, framing them as corruptive influences on Turkey’s future.
Although identifying as LGBTQ+ is not illegal in Turkey, the government has taken steps to restrict LGBTQ+ content and activism online (Woodward, 2019). This included censoring LGBTQ+ content on platforms like TikTok and imposing restrictions on advertising across social media channels to suppress opposition groups (Euronews, 2021).
Moreover, there have been instances of attempts to ban LGBTQ+ content, such as Netflix being prohibited from airing a movie with an LGBTQ+ storyline, and the mobilization of hashtags advocating for bans on LGBTQ+ content such as #LGBTfilmgunleriyasaklansin (#BanLGBTFilmDays); #İstiklalimizeKaraLeke (#StainOnOurIndependence) (Banka, 2020; Sari, 2018). These actions reflect the charged anti-LGBTQ+ sentiment prevalent in certain spheres of Turkish society and the state’s efforts to curtail LGBTQ+ visibility in the media and online discourse.
Government efforts to control and silence LGBTQ+ people have clear repercussions in society: for example, they fueled the demonization of LGBTQ+ youth during the Bogazici University protests in 2021 and led to subsequent limitations on LGBTQ+ content across various platforms (Kucukgocmen, 2021; Woodward, 2019; Euronews, 2021).
AKP’s Digital Network Control, Restrictions, and Bans
The Gezi Park protests in 2013 marked a turning point in the Turkish government’s efforts to control the digital landscape. During this period, civil society groups and activists turned to social media to coordinate the protests, prompting the government to denounce Twitter as a significant threat to society. Internet governance subsequently tightened, and internet blackouts were orchestrated by the newly established Information and Communication Technologies Authority (BTK) under government directives. While the government justified these internet restrictions as anti-terrorism measures, their political motives were evident.
Turkish government internet shutdowns peaked between 2015 and 2017. This was facilitated by Internet Law No. 5651, introduced in 2007, which permits website blocking on multiple grounds, including terrorism-related content. The broadened definition of “terrorism” enacted by the Erdogan regime was manipulated to silence dissenting voices and serve the interests of the ruling power. Gradually, the scope of a “terrorist” in Turkey expanded to encompass peaceful protesters from events like the Gezi Park protests, anti-government activists labelled as “FETOists,” and students involved in activism during Istanbul’s Bogazici University events in 2021 (Wilks, 2021; Yesil et al., 2017).
Internet Law 5651 thus became a tool to marginalize digital spaces for non-AKP or critical groups, using the power of the TIB (Telecommunication and Information Technology Authority) and imposing additional responsibilities on hosting services and intermediaries. The 2014 amendment to the Law on State Intelligence Services granted the National Intelligence Service (MIT) authority to gather, record, and analyze public and private data, compelling intermediaries to comply with MIT’s requests under the threat of incarceration (Human Rights Watch, 2014).
The eastern regions of Turkey, particularly areas with strong Kurdish resistance, bore the brunt of internet and cellular shutdowns during critical events like the 2015 Suruc suicide bombing and the 2016 Ataturk Airport bombing. These shutdowns were often localized and imposed during high-risk security incidents. The government’s increasingly authoritarian approach leveraged digital anti-terrorism laws to target marginalized groups, particularly the Kurds. It is noteworthy that most shutdowns occurred in the southeast, where political activities are more prevalent. For instance, the 2016 closure of internet and landlines in 11 cities following the arrests of Diyarbakir’s mayor and co-mayor sparked protests and incurred significant economic costs for Turkey (Yackley, 2016).
Although internet shutdowns decreased from six in 2016 to one in 2020, the financial toll remains substantial, reaching $51 million in 2020 (Buchholz, 2021). While the precise role of religious justification and religious organizations in legitimizing comprehensive network governance remains unclear, their collaboration remains crucial to the government. This collaboration also plays a significant role in legitimizing various forms of digital governance and government actions – such as these internet shutdowns – that undermine democratic and digital freedom principles.
Digital Oppression Through the ‘Safe Use of the Internet’ Campaign
The 2011 “Safe Use of the Internet” campaign, initiated by the Information and Communication Technologies Authority (BTK), promoted a Turkish-built filter called the ‘family filter.’ However, despite its name, the campaign primarily focused on regulating internet access in public spaces like cafes and libraries rather than imposing ‘safe’ restrictions within domestic settings. The campaign purported to protect children from accessing non-age-appropriate content by blocking adult websites, both foreign and domestic. Notably, the campaign did not mandate installation of the ‘family filter’ at home, seemingly leaving parents responsible for supervising their children’s internet use. Discussions about children’s privacy were also conspicuously absent from the campaign, despite its stated objective (Hurriyet Daily News, 2014; Brunwasser, 2011).
Over time, concerns have emerged regarding the broader implications of the ‘family filter.’ Many speculate that this initiative, while ostensibly aimed at blocking pornographic content, also serves as a tool for the state to censor critical voices within the digital space (Yesil et al., 2017). The criteria for blacklisting websites remain ambiguous, granting significant power to state authorities. By 2017, approximately 1.5 million websites had been blocked, particularly in public areas like cafes. Concerningly, the BTK has refrained from disclosing the list of websites it restricts (Yesil et al., 2017). This lack of transparency has fueled concerns about digital oppression and censorship orchestrated by the AKP under the guise of protecting children and youth online.
AKP’s Digital Authoritarianism: Sub-Network, Website and Platform Level
The Internet Law (No. 5651) described above has facilitated the monitoring and blocking of webpages and websites in Turkey. Despite amendments, the law remains problematic due to its arbitrary and vague provisions. Internet governance institutions hold broad discretion in determining acceptable versus unacceptable content. According to Freedom House’s latest report, internet freedoms in Turkey have been increasingly restricted in recent years (Freedom House, 2021).
In 2006, prior to the introduction of the Internet Law, only four websites were blocked in Turkey. However, by 2008, this number had escalated to 1,014, reaching a staggering 27,812 in 2015. Government decisions using this law lack transparency and accountability, as blocking orders, often issued by the BTK, lack clear justifications, leaving website owners with limited recourse for appeal. Suspicion and precautionary measures are sometimes the sole reasons cited for blocking a website.
Following the 2016 coup attempt in Turkey, websites related to the Gulen movement, Gezi Protests, corruption allegations, and terrorism charges were blocked or taken down (Ergun, 2018). Government actions also targeted websites advocating opposition, Kurdish rights, LGBTQ+ rights, and pornography. Several news outlets, including Zaman and Today’s Zaman, were shut down in 2016. Websites promoting atheism, such as the Atheism Association, were also blocked under Article 216 of the Turkish Penal Law, which prohibits actions inciting hatred or enmity among people (Hurriyet Daily News, 2015).
Digital Control at the Proxy or Corporation Level
The politicization and framing of the July 2016 events by Erdogan and the AKP as an assault on Turkish sovereignty triggered severe digital restrictions. The disbandment of the TIB over alleged pro-Gulenist ties led to the transfer of its powers to the Information and Communication Technologies Authority (BTK). Consequently, approximately 150 online and traditional media outlets were completely shut down, resulting in the loss of jobs for 2,700 Turkish journalists (Kocer & Bozdag, 2020). The legal framework governing digital spaces in Turkey has been wielded against opposition and civil society voices while favoring AKP and pro-AKP groups.
Social media intermediaries operating in Turkey have faced various restrictions. According to the Internet Law, they are required to comply with the Turkish government’s requests or face bans. During a period of heightened discontent against the AKP in 2014, the TIB pressured Twitter, YouTube, and Facebook to remove critical content damaging to the ruling party. While Facebook swiftly complied, Twitter and YouTube faced national blockades for several hours before eventually complying with the requests (Yesil et al., 2017). In 2016, Google also adhered to thousands of content removal requests from the Turkish state (Yesil et al., 2017).
The 2019 transparency reports from Twitter and Facebook shed light on Turkey’s extensive governmental demands for information and content removal. Twitter received 350 information requests involving 596 accounts and 6,073 removal requests affecting 8,993 accounts, with a reported compliance rate of 5 percent; Turkey ranked first globally in the number of legal demands for removals. Meanwhile, Facebook received 2,060 legal requests and 2,537 user information requests, complying with 73 percent of them (Freedom House, 2021).
Adding to this overall picture of digital surveillance and control, Turkey has imposed bans on approximately 450,000 domains, 140,000 URLs, and 42,000 tweets (Timucin, 2021). IFOD announced on August 7, 2024, that by the end of the first quarter of that year, a total of 1,043,312 websites and domain names had been blocked in Turkey, based on 892,951 decisions from 833 different institutions and courts. The organization highlighted that this number could rise as more domain names are identified (IFOD, 2024). Furthermore, in 2017, Wikipedia was banned in Turkey following a ruling from Ankara’s first Criminal Court linking certain articles to terror organizations. The court mandated edits to the articles before the website was allowed to become accessible again in the country in 2020 (Hurriyet Daily News, 2020; The Guardian, 2017).
The Turkish government’s manipulation of news and entertainment content distribution is a well-documented strategy, implemented through its control over media outlets both locally and internationally. Beyond influencing social media and restricting local websites, additional methods of control are exercised over television, streaming and various over-the-top media services (OTTs). In 2019, the government empowered the Radio and Television Supreme Council (RTUK) to issue licenses and make them mandatory to access content streaming in Turkey (Pearce, 2019; Yerlikaya, 2019).
The Turkish government has also employed various financial penalties, including fines and heavy taxes, to curb critical voices and hinder their independent operations. These tactics have forced many critical media outlets out of business, enabling pro-government entities to acquire their assets. For instance, the pro-government Demiroren Group acquired the Dogan Media Group following high taxes imposed by the government. Anadolu Ajansi (AA), enjoying government support, has significantly increased its backing for the AKP government by 545 percent since 2002, with 91.1 percent of its Twitter coverage found to favor the government. The government’s informal means of bolstering pro-government content include shutting down anti-government entities and transferring or selling their outlets or platforms to pro-government supporters, establishing a clientelist relationship between the state and media (Yilmaz & Bashirov, 2018). For example, during the state of emergency in 2016, the Gulen-linked Samanyolu Group, Koza Ipek Group, and Feza Publications were seized and redistributed to President Erdogan’s loyalists (Timucin, 2021; BBC News, 2016; Yackley, 2016).
Digital Authoritarianism at the Network-Node or Individual Level
The Turkish government has intensified its crackdown on individual social media and online activities, particularly following the 2016 coup attempt. The Ministry of Interior, for example, reported investigations on over ten thousand individuals for their online engagements, resulting in legal action against over 3,700 and the arrest of more than 1,600 people. Within a two-month span between January and March 2018, over 6,000 social media accounts were probed, leading to legal consequences for over 2,000 individuals. Freedom House’s 2021 assessment further revealed that between 2013 and 2018, the government initiated over 20,000 legal cases against citizens due to their social media activities (Ergun, 2018).
A climate of self-censorship among Turkish internet users has become entrenched, owing to multiple government actions and crackdowns in recent years. Following the coup attempt, for example, academics and civil society voices were targeted by pro-AKP media outlets that alleged their involvement in “terrorism” (GIT North America, 2016). Journalists face a diminished space to express dissenting opinions and risk being accused of or charged with terrorism under various legal articles, including Article 314/2, related to association with armed organizations, and Articles 147 and 5, concerning crimes associated with terrorist intent and groups (Sahinkaya, 2021). The restriction of anti-AKP voices has heavily tilted mainstream conversation in favor of pro-AKP narratives, which dominate both online and offline domains.
The Turkish government actively suppresses dissent on social media, resorting to threats and arrests against individuals. In a 2014 incident, a Turkish court ordered Facebook to block pages and individuals engaging with content from Charlie Hebdo, a French magazine that published a cartoon insulting Prophet Muhammad (Johnston, 2015). The Director of Communications of the Presidency warned citizens in May 2020 that even liking or sharing a post deemed unacceptable by the government could lead to trouble. Journalists, scholars, opposition figures, and civil society leaders critical of the government are increasingly vulnerable to prosecution.
The AKP’s influence in the digital public sphere is also notable in its internet trolling and online harassment campaigns, which aim to shape narratives in favor of the party and against the opposition. Critics of the AKP, including journalists, academics, and artists, face a culture of “digital lynching and censorship” perpetrated by an army of party-affiliated trolls (Bulut & Yoruk, 2017). Post-2016, this situation has worsened, subjecting critical voices to intensified cyberbullying and persecution (Shearlaw, 2016). Many of these trolls are graduates of pro-AKP Imam Hatip schools and reportedly receive payment. Successful trolls likely receive additional benefits from pro-AKP networks, including TRT and Turkcell (Bulut & Yoruk, 2017). In addition to employing trolls, the AKP uses automated bots to amplify its presence in the digital space, disproportionately projecting its narrative across platforms (Irak & Ozturk, 2018).
The manipulation of social media platforms has become a significant concern across the globe, and this is particularly the case in Turkey. In 2020, Twitter’s deletion of a substantial number of accounts originating from China, Russia, and Turkey revealed the extent of propaganda spread through them. Many of the Turkish accounts focused on supporting President Erdogan, attacking opposition parties, and advocating for undemocratic reforms (Twitter Safety, 2020). The proliferation of fake accounts and bots, and the significant share of posts originating from them, has skewed the representation of daily trends on Twitter (now X) and consequently distorted political discourse.
Disturbingly, instances of online harassment and hate speech targeting individuals based on their political stance or ethnic background have been observed without effective intervention. For instance, Garo Paylan, an HDP deputy with Turkish-Armenian heritage, faced online harassment for his political stance during the Azerbaijan-Armenia skirmish in 2020 (Briar, 2020). Meanwhile, controversial statements, such as Ibrahim Karagul’s suggestion of ‘accidentally’ bombing Armenians, did not receive the same scrutiny for hate speech (Barsoumian, 2020).
Conclusion
The merging of religion and the state’s digital authoritarian agenda serves as a potent tool for steering public opinion, validating control mechanisms, and fortifying the government’s authority. It exemplifies how the discourse of upholding Islamic values and societal morality can be strategically harnessed to garner support for stringent digital control measures, influencing public perception and behavior within the digital landscape.
This article has identified numerous ways in which the AKP and its leader, Erdogan, administer their authority over the digital realm in Turkey. Voices of dissent and opposition are silenced through a range of legislative and strategic measures, such as Internet Law No. 5651, the “Safe Use of the Internet” campaign, and online trolling and harassment practices that directly target critics of the government. Additionally, the AKP makes considerable attempts to control the online content its citizenry can access; the discussion has highlighted internet shutdowns, the blacklisting of websites, and warnings issued to Turkish citizens about the consequences of engaging with certain (oppositional) content.
The above measures are supported and legitimized by the AKP’s and Erdogan’s religious discourse, and by a network of pro-AKP religious authorities including the Diyanet, Islamic scholars, and preachers. By aligning digital control measures with Islamic values and societal morality, the government can justify its actions as essential for preserving the ethical fabric of society. This moral grounding lends an air of legitimacy and righteousness to measures that might otherwise be viewed as intrusive or oppressive.
The fusion of religious rhetoric with digital governance acts as a deterrent to dissent. The government discourages dissenting voices by associating opposition to these measures with a departure from religious principles, fostering a climate of self-censorship and compliance within the digital sphere.
Religious institutions, particularly Diyanet, are heavily influential in conversations about social media ethics and endorsing greater control over digital spaces, leading to an Islamization of digital spaces. Strict limitations on blasphemy and criticism of Islamic beliefs curtail freedom of expression online.
Ultimately, the combination of information and content control, legal measures, religious influence, and online manipulation creates a challenging scenario for digital governance in Turkey. These various elements work together to shape narratives, control dissent, create a pervasive environment of censorship and self-censorship, and restrict freedoms in the digital realm, impacting the country’s broader socio-political landscape.
— (2020). “Turkey court jails hundreds for life for 2016 coup plot against Erdogan.” BBC News. November 26, 2020. https://www.bbc.com/news/world-europe-55083955 (accessed on August 18, 2024).
Andi, S.S.; Erdem A. & Carkoglu, A. (2020). “Internet and social media use and political knowledge: Evidence from Turkey.” Mediterranean Politics (Frank Cass & Co.). 25(5), 579–599. https://doi.org/10.1080/13629395.2019.1635816
Bulut, E. & Yoruk, E. (2017). “Digital Populism: Trolls and Political Polarization of Twitter in Turkey.” International Journal of Communication. 11, 4093–4117.
Danforth, N. (2020). “The Outlook for Turkish Democracy: 2023 and Beyond.” Washington Institute for Near East Policy. https://www.washingtoninstitute.org/media/632 (accessed on August 12, 2024).
Elmas, T. (2023). “Analyzing Activity and Suspension Patterns of Twitter Bots Attacking Turkish Twitter Trends by a Longitudinal Dataset.” ArXiv. April 16, 2023. https://arxiv.org/abs/2304.07907 (accessed on August 23, 2024).
GIT North America. (2016). “Pro-AKP Media Figures Continue to Target Academics for Peace.” Jadaliyya. April 27, 2016. https://www.jadaliyya.com/Details/33208 (accessed on August 23, 2024).
Irak, D. & Ozturk, A. E. (2018). “Redefinition of State Apparatuses: AKP’s Formal-Informal Networks in the Online Realm.” Journal of Balkan and Near Eastern Studies. 20(5), 439–458. https://doi.org/10.1080/19448953.2018.1385935
Kalaycioglu, E. (2012). “Kulturkampf in Turkey: The Constitutional Referendum of 12 September 2010.” South European Society & Politics. 17(1), 1–22. https://doi.org/10.1080/13608746.2011.600555
Kocer, S. & Bozdag, C. (2020). “News-Sharing Repertoires on Social Media in the Context of Networked Authoritarianism: The Case of Turkey.” International Journal of Communication. 14: 5292-5310. https://ijoc.org/index.php/ijoc/article/view/13134
Kuru, A.T. (2012). “The Rise and Fall of Military Tutelage in Turkey: Fears of Islamism, Kurdism, and Communism.” Insight Turkey. 14(2): 37–57.
Rogenhofer, J. M. & Panievsky, A. (2020). “Antidemocratic populism in power: comparing Erdoğan’s Turkey with Modi’s India and Netanyahu’s Israel.” Democratization. 27(8), 1394–1412. https://doi.org/10.1080/13510347.2020.1795135
Timucin, F. (2021). 8-Bit Iron Fist: Digital Authoritarianism in Competitive Authoritarian Regimes: The Cases of Turkey and Hungary. (Master’s thesis, Sabanci University, Istanbul, Turkey). https://research.sabanciuniv.edu/42417/ (accessed on August 21, 2024).
Yabanci, B. (2019). “Work for the Nation, Obey the State, Praise the Ummah: Turkey’s Government-oriented Youth Organizations in Cultivating a New Nation.” Ethnopolitics. 20(4), 467–499. https://doi.org/10.1080/17449057.2019.1676536
Yesil, B.; Efe, K.Z. & Khazraee, E. (2017). “Turkey’s Internet Policy After the Coup Attempt: The Emergence of a Distributed Network of Online Suppression and Surveillance.” Internet Policy Observatory. https://repository.upenn.edu/internetpolicyobservatory/22 (accessed on August 25, 2024).
Yildiz, A. (2003). “Politico-Religious Discourse of Political Islam in Turkey: The Parties of National Outlook.” The Muslim World (Hartford). 93(2), 187–209. https://doi.org/10.1111/1478-1913.00020
Yilmaz, I. (2000). “Changing institutional Turkish-Muslim discourses on modernity, West and dialogue.” Congress of The International Association of Middle East Studies (IAMES). Freie Universitat Berlin, Germany, October 2000.
Yilmaz, I. (2008). “Influence of Pluralism and Electoral Participation on the Transformation of Turkish Islamism,” Journal of Economic and Social Research. 10(2): 43-65.
Yilmaz, I. (2009a). “Predicaments and Prospects in Uzbek Islamism: A Critical Comparison with the Turkish Case.” USAK Yearbook of International Politics and Law, 2: 321-347.
Yilmaz, I. (2009b). “An Islamist Party, Constraints, Opportunities and Transformation to Post-Islamism: The Tajik Case.” Uluslararası Hukuk ve Politika. 5(18), 133–147.
Yilmaz, I. (2009c). “Socio-Economic, Political and Theological Deprivations’ Role in the Radicalization of the British Muslim Youth: The Case of Hizb ut-Tahrir.” European Journal of Economic and Political Sciences 2(1): 89-101.
Yilmaz, I. (2009d). “Was Rumi the Chief Architect of Islamism? A Deconstruction Attempt of the Current (Mis)Use of the Term ‘Islamism’.” European Journal of Economic and Political Studies. 2(2): 71-84.
Yilmaz, I. (2010a). “A Comparative Analysis of Anti-Systemic Political Islam: Hizb ut-Tahrir’s Influence in Different Political Settings (Britain, Turkey, Egypt and Uzbekistan).” In: Michelangelo Guida and Martin Klus (eds) Turkey and The European Union: Challenges and Accession Perspectives. Bruylant, 253-268.
Yilmaz, I. (2016b). “The Experience of the AKP: From the Origins to Present Times.” In: Alessandro Ferrari and James Toronto (eds) Religions and Constitutional Transitions in the Muslim Mediterranean: The Pluralistic Moment. Abingdon, Oxon; New York, NY: Routledge, 2016, 162-175.
Yilmaz, I. (2018a). “Islamic Populism and Creating Desirable Citizens in Erdogan’s New Turkey.” Mediterranean Quarterly. 29(4): 52-76. https://doi.org/10.1215/10474552-7345451
Yilmaz, I. (2019c). “Potential Impact of the AKP’s Unofficial Political Islamic Law on the Radicalisation of the Turkish Muslim Youth in the West.” In: Mansouri F., Keskin Z. (eds) Contesting the Theological Foundations of Islamism and Violent Extremism. Middle East Today. Palgrave Macmillan, Cham, 2019, 163-184. https://doi.org/10.1007/978-3-030-02719-3_9
Yilmaz, I. (2020a). “Islamist Populism in Turkey, Islamist Fatwas and State Transnationalism.” In: Shahram Akbarzadeh (ed) The Routledge Handbook of Political Islam, 2nd Edition. London and New York: Routledge.
Yilmaz, I. (2021a). Creating the Desired Citizens: State, Islam and Ideology in Turkey. Cambridge and New York: Cambridge University Press.
Yilmaz, I. (2021b). “Islamist Populism in Turkey, Islamist Fatwas and State Transnationalism.” In: Shahram Akbarzadeh (ed) The Routledge Handbook of Political Islam, 2nd Edition, 170-187. London and New York: Routledge.
Yilmaz, I. & Albayrak, I. (2022). Populist and Pro-Violence State Religion: The Diyanet’s Construction of Erdoğanist Islam in Turkey. Singapore: Palgrave Macmillan.
Yilmaz, I.; Demir, M. & Morieson, N. (2021a). “Religion in Creating Populist Appeal: Islamist Populism and Civilizationism in the Friday Sermons of Turkey’s Diyanet.” Religions. 12: 359. https://doi.org/10.3390/rel12050359/
Yilmaz, I.; Shipoli, E. & Demir, M. (2021b). “Authoritarian resilience through securitization: an Islamist populist party’s co-optation of a secularist far-right party.” Democratization. 28(6), 1115–1132. https://doi.org/10.1080/13510347.2021.1891412
Yilmaz, I. & Erturk, O. (2022). “Authoritarianism and necropolitical creation of martyr icons by Kemalists and Erdoganists in Turkey.” Turkish Studies. 23(2), 243–260. https://doi.org/10.1080/14683849.2021.1943662
Yilmaz, I. & Erturk, O. F. (2021). “Populism, violence and authoritarian stability: necropolitics in Turkey.” Third World Quarterly. 42(7), 1524–1543. https://doi.org/10.1080/01436597.2021.1896965
Yılmaz, Z. & Turner, B. S. (2019). “Turkey’s deepening authoritarianism and the fall of electoral democracy.” British Journal of Middle Eastern Studies, 46(5), 691–698. https://doi.org/10.1080/13530194.2019.1642662
Pretorius, Christo. (2024). “EU Employment Law and the AI Act: A Policy Brief Putting the Human Back in ‘Human-Centric’ Policy.” Policy Papers. European Center for Populism Studies (ECPS). September 11, 2024. https://doi.org/10.55271/pop0002
This policy paper analyzes the European Union’s (EU) AI Act, aimed at regulating Artificial Intelligence (AI) through four risk classifications related to data protection, privacy, security, and fundamental rights. While the Act establishes regulatory frameworks, it neglects employment security, a critical factor behind public mistrust of AI. The paper warns that failure to address this issue could deepen socio-economic inequalities and lead to political unrest. Recommendations include promoting collective negotiation between workers and employers, advocating for legislation on redundancies linked to AI, and launching information campaigns to educate workers, thus ensuring fair working conditions and improving trust in AI technology.
The European Union (EU) is attempting to regulate the deployment of Artificial Intelligence (AI) through the recently passed AI Act. The Act outlines four distinct classifications for AI systems, categorizing them according to the risk they pose to an individual’s data protection, privacy, security, and fundamental rights. It further provides regulations and guidance to member states on each category and calls for the establishment of national and EU-level regulatory bodies to enforce the Act. Ultimately, however, the Act overlooks the critical issue of employment security, a main cause of public mistrust of AI. If not addressed promptly, this gap could exacerbate socio-economic inequalities and fuel political unrest.
Research indicates that AI will have a disruptive effect on employment overall as certain types of work are automated and augmented. The effects will be felt most in clerical, secretarial, and para-professional roles, posing a risk to vulnerable groups, including women and those with lower educational attainment. There is a pressing need for proactive measures to mitigate the harmful effects of the technology’s implementation on workers and their families, specifically protection against unjustified dismissal and the assurance of fair working conditions. The following recommendations are proposed to address the Act’s shortcomings:
Collective Negotiation: Encourage cooperation between workers, employers, and worker associations to assess AI’s impact on jobs. This could lead to agreements on redeployment, education opportunities, or redundancy notices, providing workers with clearer timelines and reducing workplace disruption.
Advocacy for New Legislation: Push for legislation that mandates notice periods for redundancies due to technological innovation, building on the EU Directive on Transparent and Predictable Working Conditions. The International Labor Organization’s 1982 Termination of Employment Convention offers a valuable template for such legislation.
Information Campaigns: Launch campaigns to educate workers on AI systems, their potential benefits, and available upskilling opportunities. These efforts would enhance trust in AI, aligning with the AI Act’s goal of ensuring human-centric, safe, and lawful AI deployment.
Context: The Problem with the AI Act
The discussion about AI regulation in the European Union (EU) began with Ursula von der Leyen’s 2019-2024 agenda for Europe, A Union that Strives for More (2019). It stated that she would ‘put forward legislation for a coordinated European approach on the human and ethical implications of Artificial Intelligence’ (von der Leyen, 2019: 13). What followed was a call for greater investment in, and better coordination of, the development and deployment of AI in the EU, alongside a call for a clear definition of high-risk AI systems (General Secretariat of the Council to Delegations, 2020). Subsequently, the Coordinated Plan on Artificial Intelligence was published to help foster trust in AI systems, yet it failed to address the real impact that such a disruptive technology would have on the world of work (European Commission, 2021). The purpose of this policy brief is to advocate for greater attention to employment law, so that action may be taken to ease concerns over AI-related job loss and thus foster greater trust in this new technology.
A report from the International Labor Organization estimated that the introduction of AI systems would have an overall disruptive effect worldwide, highlighting that a significant share of clerical, secretarial, and para-professional jobs would be most affected (Gmyrek et al., 2023). These findings are supported by similar ones from PricewaterhouseCoopers International Limited (PWC), the US-based National Bureau of Economic Research, and the Pew Research Center, which also found that women and individuals with lower levels of education are most vulnerable to job loss due to automation (Hawksworth, Berriman, & Goel, 2018; Acemoglu & Restrepo, 2021; Kochhar, 2023). Although this is an evolving issue – AI also has the potential to increase economic growth and improve lives – the disruptive impact of AI systems on the world of work has yet to be fully felt. It is therefore important to take the necessary steps now to mitigate the harmful effects this technology can have on workers and their families, and to provide a safety net to individuals during this period of transition.
Currently, the EU’s AI Act states that it aims to ensure ‘a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights (2012) of the European Union (hereby referred to as the ‘Charter’), including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union…’. However, among the fundamental rights not addressed are those in Articles 30 (Protection in the event of unjustified dismissal) and 31 (Fair and just working conditions) of the Charter (European Union, 2012). The focus on adopting AI throughout the EU, while highlighting the issues of data protection, privacy, and security of individuals in the long term, has left employment concerns unaddressed.
The EU has acknowledged that this area should be legislated on in the future, stating in its 2022 report on artificial intelligence in the digital age: ‘[The European Parliament] emphasises that the use of AI in this area gives rise to a number of ethical, legal and employment related challenges… [and] stresses that AI is currently already substituting or complementing humans in a subset of tasks but that it is not yet having detectable significant aggregate labour market consequences’ (European Parliament, 2022). The report also highlights that strong links between AI and rising socio-economic inequality have been found, and researchers warn that unregulated AI will further widen the wealth gap within society, a finding supported by other academic publications (Rotman, 2022; Bushwick, 2023). MEP Brando Benifei has indicated that a proposal for a directive on AI in the workplace is something to be discussed in the future, but at present there is little information from Benifei or anyone else on what policy discussions in this area will look like (Publyon, 2024). This uncertainty continues following von der Leyen’s announcement that more research investments will be made in AI, leaving commentators to speculate about what will actually be funded (Wold, 2024). Much as after the events of 2008, people who feel left behind by the increasing automation and augmentation of the workplace could be drawn toward more extreme populist politics, which is why delaying discussions on this topic is problematic (Steiner et al., 2023). While the EU continues to refine policy, the implementation of AI in the workplace is happening now, and the situation for workers could change rapidly as new technology hits the market. Just as the EU attempted to get ahead of AI by defining it, it should attempt to get ahead of employment concerns before they evolve past being concerns alone.
The AI Act in Brief
To understand the shortcomings of the AI Act (2024), it is helpful to first summarize the contents of the regulation. The European Union’s AI Act came into force on June 13, 2024, seeking to create rules governing the development, deployment, and use of artificial intelligence (AI) systems within the Union. The rules define four levels of risk for AI systems, offering a description of each category and indicating what mechanisms are in place to regulate them within the common market:
Unacceptable Risk: These AI systems are banned within the EU, except in limited circumstances.
[Article 5.1(a/b)] Manipulative AI that can deceive, subvert, or impair autonomy, decision-making, and free choice. This includes AI that may exploit disadvantaged persons, whether through socio-economic vulnerabilities or by taking advantage of a disability. Notable exceptions are AI used in the context of medical treatment, such as psychological treatment of a mental disease or physical rehabilitation, and AI used in advertising.
[Article 5.1(c)] AI systems that provide social scoring, as these may lead to discrimination and exclusion.
[Article 5.1(d)] Risk assessment or predictive AI systems in the context of law enforcement.
[Article 5.1(e)] AI systems that scrape footage to expand facial recognition databases.
[Article 5.1(g/h)] Biometric categorization systems that are based on natural persons’ biometric data, such as an individual’s face or fingerprint, to deduce or infer an individual’s political opinions, trade union membership, religious or philosophical beliefs, race, sex life, or sexual orientation. The notable exception to this rule is biometric categorization systems employed by law enforcement agencies for anti-terrorism and missing-persons purposes.
High Risk: [Preamble (paragraph 48)] These systems are defined as having a negative effect on safety or on the following fundamental rights, the latter of which encompass in this case a person’s access to education, employment, public or private services, legal representation, and administrative or democratic processes:
The protection of personal data,
The rights of persons with disabilities,
Freedom of expression and information,
Gender equality,
Freedom of assembly and of association,
Intellectual property rights,
The right to non-discrimination,
Workers’ rights,
The right to an effective remedy and a fair trial,
The right to education,
Consumer protection,
The right to good administration,
The right of defense and the presumption of innocence,
High-risk systems also include those related to critical infrastructure and biometric identification systems.
Limited Risk: [Preamble (paragraph 53)] AI systems that do not materially influence the outcome of decision-making, and/or augment tasks that are either automated or conducted by humans are considered to be limited risk. This category is highlighted as needing further guidelines in the future.
Minimal Risk: Most AI systems currently available fall under this category – they provide solutions with minimal risk and are therefore not regulated, nor will they be moving forward.
The regulation achieved most of the mandated aims, creating clear definitions for different levels of risk, whilst focusing on the impact different AI systems could have on individuals.
Recommendations
The issue of job loss and employment law concerns is real and present and must be addressed in a timely manner during these early stages of AI implementation. Although further training opportunities are an avenue that the EU is pursuing, in line with the Commission’s call for the development of an ecosystem of trust through a legal framework for trustworthy AI, this paper proposes three different avenues through which the issue of employment security could be addressed:
Collective Negotiation: Close cooperation between worker associations, the groups they represent, and employers can allow for investigation into the potentially disruptive impact of AI systems on various professions. This will allow the parties to make informed decisions and work towards collective agreements that provide for the redeployment of workers, the advertising or provision of further education opportunities, or a guaranteed period of redundancy notice when AI systems are implemented. Similarly, management could make available to workers a clear AI implementation plan so that workplace disruption is reduced and workers know in advance how much time they have should their work be made redundant.
Advocacy for the Adaptation or Adoption of New Legislation: If no provisions are currently in place, countries within the EU must take active steps to create or adapt legislation that will give workers a notice period before they are made redundant due to technological innovation. The EU Directive on Transparent and Predictable Working Conditions in the European Union (2019) set a precedent that the EU was willing to use its competency in the area of social rights to address new forms of employment and protect EU workers from unpredictable employment (Official Journal of the European Union, 2019).
Given the close relationship between the International Labour Organization (ILO) and the EU, the ILO’s 1982 Termination of Employment Convention (C158) would provide a good basis for text that could be incorporated into the EU Directive on Transparent and Predictable Working Conditions in the European Union (Pretorius, 2023). Only nine nations have ratified the convention, which contains the following article that could either be incorporated into EU law or serve as an example to be followed:
‘[Article 13.1.] When the employer contemplates terminations for reasons of an economic, technological, structural or similar nature, the employer shall:
(a) provide the workers’ representatives concerned in good time with relevant information including the reasons for the terminations contemplated, the number and categories of workers likely to be affected and the period over which the terminations are intended to be carried out;
(b) give, in accordance with national law and practice, the workers’ representatives concerned, as early as possible, an opportunity for consultation on measures to be taken to avert or to minimise the terminations and measures to mitigate the adverse effects of any terminations on the workers concerned such as finding alternative employment’ (ILO, 1982).
Information Campaigns: At the moment, the uncertainty surrounding AI systems in the workplace is fuelling distrust in the technology (Chakravorti, 2024). Campaigns that inform workers not only of the capabilities of implemented AI systems and how to utilize their potential, but also about opportunities to upskill, are essential moving forward. Regardless of how these campaigns are run, giving workers more accessible information would go a long way towards realizing the ‘human centric’ ideals of the AI Act – ‘so that people can trust that the technology is used in a way that is safe and compliant with the law, including the respect of fundamental rights’ (European Commission, 2021).
(*) Christo Pretorius graduated with a MSc in International Public Policy and Diplomacy from University College Cork and was the first student to receive a postgraduate “Student of the Year” award from the Department of Government. His dissertation was published and acquired by the Bar of Ireland’s Law Library and has gone on to support Irish policy makers. Stemming from his undergraduate in Ancient and Medieval History and Culture from Trinity College Dublin, his research interests include the mechanisms for authoritarian power and control, and democratic backsliding, particularly when viewed with a historical lens.
— (2021). Document 52021PC0206: Proposal for a Regulation of The European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (COM(2021) 206 final). European Commission. https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:52021PC0206 (accessed on August 17, 2024).
— (2024). Corrigendum to Regulation (EU) 2024/… of The European Parliament and of the Council of laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (cor01). European Parliament. https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138-FNL-COR01_EN.pdf
Acemoglu D. & Restrepo P. (2021). Tasks, Automation, and the Rise in US Wage Inequality (NBER Working Paper 28920). National Bureau of Economic Research. https://doi.org/10.3386/w28920
Al Naam, Y.A.; Elsafi, S.; Al Jahdali, M.H.; Al Shaman, R.S.; Al-Qurouni, B.H. & Al Zahrani E.M. (2022). “The Impact of Total Automation on the Clinical Laboratory Workforce: A Case Study.” Journal of Healthcare Leadership. 14, pp. 55-62. https://doi.org/10.2147/JHL.S362614
Gmyrek P., Berg J., & Bescond D. (2023). Generative AI and jobs: A global analysis of potential effects on job quantity and quality (ILO Working Paper 96). International Labour Organization. https://doi.org/10.54394/FHEM8239
Jensen, C.L.; Thomsen, L.K.; Zeuthen, M.; Johnsen, S.; Jashi, R.E.; Nielsen, M.F.B.; Hemstra, L.E. & Smith, J. (2024) “Biomedical laboratory scientists and technicians in digital pathology – Is there a need for professional development?” Digital Health. 10. https://doi.org/10.1177/20552076241237392
Pretorius, P.C. (2023) “Can Irish Industrial Relations Still be Called ‘Voluntarist’? An investigation into Irish Employment Law”. Irish Employment Law Journal, 20(1), pp. 11-21.
Please cite as: Yilmaz, Ihsan & Kenes, Bulent. (2023). “Digital Authoritarianism in Turkish Cyberspace: A Study of Deception and Disinformation by the AKP Regime’s AKtrolls and Akbots.” Populism & Politics (P&P). European Center for Populism Studies (ECPS). November 13, 2023. https://doi.org/10.55271/pp0026
Abstract
This article explores the evolving landscape of digital authoritarianism in Turkish cyberspace, focusing on the deceptive strategies employed by the AKP regime through AKtrolls, AKbots and hackers. Initially employing censorship and content filtering, the government has progressively embraced sophisticated methods, including the weaponization of legislation and regulatory bodies to curtail online freedoms. In the third generation of information controls, a sovereign national cyber-zone marked by extensive surveillance practices has emerged. Targeted persecution of critical netizens, coupled with (dis)information campaigns, shapes the digital narrative. Central to this is the extensive use of internet bots, orchestrated campaigns, and AKtrolls for political manipulation, amplifying government propaganda and suppressing dissenting voices. As Turkey navigates a complex online landscape, the study contributes insights into the multifaceted tactics of the Erdogan regime’s digital authoritarianism.
Over the last decade, authoritarian governments have co-opted social media, compromising its potential for promoting individual liberties (Yilmaz and Yang, 2023). In recent years, the Turkish government, led by President Recep Tayyip Erdogan, has staunchly endeavoured to control online platforms and manipulate digital spaces to consolidate power, stifle dissent, and shape public opinion. Given the large online user base and the declining influence of traditional media, the internet has become a crucial platform for opposition voices. In response, President Erdogan’s “authoritarian Islamist populist regime” (Yilmaz and Bashirov, 2018) has implemented various measures to regulate and monitor the digital space to suppress dissent (Bellut, 2021).
Turkey’s domestic internet policy under the Erdogan regime has shown a convergence towards information control practices observed in countries like Russia and China, despite Turkey’s nominal compliance with Euro-Atlantic norms on cyber-security (Eldem, 2020). This convergence is characterized by increasing efforts to establish “digital sovereignty” and prioritize information security, often serving as a pretext for content control and internet censorship (Eldem, 2020). The Erdogan regime takes a neo-Hobbesian view of cyberspace and seeks to exert sovereignty in this realm through various information controls (Eldem, 2020). Under the Erdogan regime, there has been an increase in the surveillance of online activities, leveraging the surveillance and repression tools provided by social media and digital technologies. Once the regime established its hegemony over the state, it expanded its surveillance tactics to govern society.
In Turkey, a combination of actors including riot police, social media monitoring agents, intelligence officers, pro-government trolls, hackers, secret witnesses, informants, and collaborators work together to identify and target individuals deemed “risky.” This surveillance apparatus follows the hierarchical structure of the Turkish authoritarian state, with President Erdogan overseeing its developments (Topak, 2019).
The article examines the Turkish government’s pervasive use of trolls, internet bots, orchestrated campaigns, and transnational manipulations that have shaped the country’s online environment. Social media platforms, especially Twitter, are central to these manipulation efforts in Turkey. While Twitter has taken action against thousands of accounts associated with the ruling party’s youth wing, the resistance from the government highlights the significance of these online campaigns.
The use of fake accounts, compromised profiles, and silent bots further deepens the complexities of digital authoritarianism in Turkey. These accounts serve as vehicles for spreading disinformation, astroturfing, and manipulating social media trends. While efforts have been made to identify and remove such accounts, the adaptability of these manipulative actors poses a significant challenge. Many of these bots remain dormant for extended periods, resurfacing strategically to create and promote fake trends while evading conventional detection methods (Elmas, 2023). These software applications play a pivotal role in amplifying government propaganda, countering opposition discourse, and creating an illusion of widespread support. From replicating messages to retweeting content across hundreds of accounts, these automated bots have become instrumental in shaping online narratives and suppressing dissenting voices (Yesil et al., 2017; Eldem, 2023).
Digital Authoritarianism and Information Controls
The Erdogan regime appointed a trustee to the Zaman daily in Istanbul, Turkey, on March 4, 2016. Photo: Shutterstock.
Digital authoritarianism is the extensive utilization of information control measures by authoritarian regimes to shape and influence the online experiences and behaviors of the public (Howells and Henry, 2021). These regimes have adeptly adapted to the mechanisms of internet governance by exploiting the vast reach of new media platforms. They employ various forms of censorship, both overt and covert, to suppress dissent and control the dissemination of information.
The literature on digital authoritarianism extensively explores how China has effectively utilized digital technology to maintain and strengthen its rule (Polyakova & Meserole, 2019; Dragu & Lupu, 2021; Sherman, 2021). While China relies on sophisticated surveillance systems and targeted persecution of individuals, the people of Russia experience the impact of digital authoritarianism through internet censorship, manipulation of information flow, the spread of disinformation, and the mobilization of trolls and automated bots (Yilmaz, 2023; Timucin, 2021).
In the realm of digital authoritarianism, disinformation has become a favored tool (Diamond, 2021; Tucker et al., 2017). Authoritarian regimes obscure information, engage in deception, and manipulate the context to shape public opinion (Bimber and de Zúñiga, 2020). It is important to note that digital authoritarianism is not a uniform strategy; different regimes adopt various approaches. Some directly restrict access to the internet, while others rely on heavy censorship and disinformation campaigns (Timucin, 2021; Polyakova & Meserole, 2019).
The Russian model of digital authoritarianism operates with subtlety. Manipulating social media networks is easier to accomplish and maintain compared to comprehensive monitoring systems (Timucin, 2021). In these cases, the open nature of social media becomes a double-edged sword, enabling the widespread distribution of both accurate information and misinformation while amplifying voices from various ends of the political spectrum (Brown et al., 2012).
Digital Authoritarianism and Information Controls in Turkey
During the third term of the AKP (Justice and Development Party) in 2011, Turkey witnessed a shift towards increasing populist authoritarianism. Since then, the dissidents and critics of the AKP government have been framed and demonised as the enemies of the Turkish people (Yilmaz and Bashirov, 2018).
Initially, the government targeted conventional media outlets, subjecting them to various tactics employed by President Erdogan (Yanardagoglu, 2018). Many critical media organizations were forced out of business, and their assets were taken over by pro-government entities. The persecutions both preceding and after the state of emergency in 2016 heightened, leading to the confiscation of media groups like the Gulen-linked Samanyolu Group, Koza Ipek Group, and Feza Publications (Timucin, 2021; BBC 2016). These actions effectively created a clientelist relationship between the government and the media, as anti-government entities were closed and transferred or sold to pro-government supporters (Yilmaz and Bashirov, 2018).
The government’s dominance over traditional media outlets served as the foundation for Erdogan’s digital authoritarianism, granting the government control over the “formal” form of digital media (Timucin, 2021). Faced with limitations in conventional media, the public turned to online sites, alternative media, and social media platforms in search of reliable news and information.
The Gezi Park protests in 2013 marked a significant moment in Turkey’s social movements and the role of social media activism. These protests initially started as a peaceful sit-in at Gezi Park to oppose the demolition of trees for a shopping mall construction but quickly escalated into one of the largest civil unrests in Turkey’s recent history. During the early days of the protests, traditional media outlets did not provide adequate coverage, leading people to seek alternative sources of information. Social media platforms played a crucial role as a source of news, organization, and political expression, particularly among urban, tech-savvy youth (Yesil et al., 2017). The number of Twitter users in Turkey skyrocketed from an estimated 2 million to 12 million during the protests (Ozturk, 2013; Varnalı and Görgülü, 2015). Social media allowed for a more decentralized and inclusive form of communication during the protests, as it facilitated the rapid dissemination of information and bypassed traditional media gatekeepers (O’Donohue et al., 2020).
The corruption scandal in December 2013 was another event where social media played a crucial role in shaping public opinion and disseminating information. Government opponents utilized social media platforms to share incriminating evidence of corruption involving President Erdogan, his party, and his cabinet. In response, the ruling AKP adopted a heavy-handed approach, detaining Twitter users and implementing bans on platforms such as Twitter and YouTube. The government positioned social media as a threat to Turkey’s national unity, state sovereignty, social cohesion, and moral values (Yesil et al., 2017; Kocer, 2015).
In recent years, Turkey has made efforts to assert control over social media platforms and internet service providers. In 2020, a “disinformation law” was introduced, pressuring these entities to remove “disinformation” from online platforms. Proposed changes to Article 19 in 2022 aim to enhance control over the cyber space, granting more powers to the Information and Communication Technologies Authority (BTK) to regulate the internet. These developments indicate Turkey’s increasing efforts to curb the flow of information, maintain a favorable narrative, and suppress dissenting voices, potentially impacting freedom of expression and the right to access information in the country.
The increasing level of digital governance in Turkey has manifested in various forms, leading to significant consequences. Content regulation has played a crucial role in the government’s efforts to control the internet. Bodies such as BTK have been granted the power to block access to online content deemed threatening. This has created a climate of increased pressure on internet service providers to comply with the state’s requests regarding content removal and access to personal user data. Failure to adhere to these obligations can result in penalties or even the revocation of licenses. There are also speculations that service providers may face bandwidth reduction and limitations on advertisements as a means of exerting further control.
Furthermore, cybercrime provisions intended to safeguard against hacking and online harassment have been instrumentalized by the state to gather user information for investigation, prosecution, and cooperation with “international entities.” Individuals found guilty of online offenses can be brought to court and punished under specific articles of the Turkish Penal Code.
In summary, the government introduced legal restrictions, content removal requests, website and social media platform shutdowns, prosecution of internet users, state surveillance, and disinformation campaigns. These measures have resulted in a significant decline in internet freedom and the rise of digital authoritarianism in Turkey between 2013 and the controversial coup attempt in July 2016.
Technical Instruments and Surveillance Methods to Monitor and Control Cyberspace
The Erdogan regime has employed various technical instruments and surveillance methods to monitor and control online activities. Reports indicate that Western companies provided spyware tools to Turkish security agencies, which have been in use since at least 2012. These tools include Deep Packet Inspection (DPI) technology, enabling surveillance of online communications, blocking of online content, and redirecting users to download spyware-infected versions of software like Skype and Avast. Additionally, the Remote-Control System and FinFisher spyware programs are used for extracting emails, files, passwords, and controlling audio and video recording systems on targeted devices (Privacy International, 2014; Yesil et al., 2017; CitizenLab, 2018; AccessNow, 2018).
The Erdogan regime also established a “Social Media Monitoring Unit,” a specialized police force responsible for monitoring citizens’ social media posts. There is also a group known as AKtrolls, who can act as informants and report social media posts of targeted users to security agencies, potentially leading to arrests. The AKP has also formed a team of “white hat” hackers, ostensibly for enhancing Turkey’s cyber-defense. Furthermore, civilian informants have been mobilized for internet surveillance, with ordinary citizens encouraged to spy on each other online, creating a culture of “online snitching” (Yesil et al., 2017). This pervasive surveillance approach, utilizing both software and social-user-based surveillance, creates a climate of self-censorship and vigilance among users (Saka, 2021; Morozov, 2012).
The National Intelligence Organization of Turkey (MİT) has been granted extended surveillance powers, both online and offline, in the aftermath of the Gezi Park protests. Law No. 6532 allowed MİT to collect private data and information about individuals without a court order from various entities. The law also granted legal immunity to MİT personnel and criminalized the publication and broadcasting of leaked intelligence information. MİT operates within the authoritarian state’s chain of command. Given MİT’s lack of autonomy, it is highly likely that the Erdogan regime exploits the agency’s expanded powers for unwarranted surveillance and political witch hunts of dissidents, journalists, and even ordinary online users, aiming to suppress any online criticism (Yeşil, 2016).
In October 2015, the AKP implemented the “Rewards Regulation,” which offered monetary rewards to informants who assisted security agencies in the arrest of alleged terror suspects. This measure encouraged journalists, NGOs, and citizens to monitor online communications and report dissenting individuals (Zagidullin et al., 2021).
The Turkish police introduced a smartphone app and a dedicated webpage that allowed citizens to report social media posts they deemed terrorist propaganda. The main opposition party claimed that the police prepared summaries of proceedings for 17,000 social media users, and that they were attempting to locate the addresses of 45,000 others (Eldem, 2023). Subsequently, the state of emergency (SoE) decrees following the controversial coup attempt in 2016 further tightened the government’s control over the internet. Decree 670 granted “all relevant authorities” access to all forms of information, digital or otherwise, about alleged coup suspects and their families. Decree 671 empowered the government to take any necessary measures regarding digital communications provided by ISPs, data centers, and other relevant private entities in the name of national security and public order. Finally, Decree 680 expanded police powers to investigate cybercrime by requiring ISPs to share personal information with the police without a court order (Topak, 2019; Yesil et al., 2017; Eldem, 2023).
Prior to Turkey’s presidential and parliamentary elections in 2023, Turkish prosecutors initiated investigations into social media users accused of spreading disinformation aiming to create fear, panic, and turmoil in society. The Ankara Chief Public Prosecutor’s Office launched an investigation into the Twitter account holders who allegedly collaborated to spread disinformation, potentially reaching around 40 million social media users (Turkish Minute, 2023).
The Erdogan regime has significantly expanded its online censorship toolkit through legislative amendments passed in October 2022 (HRW, 2023). As an example of the restrictions imposed, on May 14, 2023, Twitter announced that it was restricting access to certain account holders in Turkey to ensure the platform remains available to the people of Turkey.
AKtrolls
The Erdogan regime responded to critical voices on social media during the Gezi Protests by employing political trolls. This strategy of political trolling, whether carried out by humans or algorithms, is closely associated with Russia and has been adopted by AKP’s trolls, known as AKtrolls, who exhibit similarities to Kremlin-operated networks. The deep integration of political trolling within the political system and mainstream media in Turkey has been highlighted in a study by Karatas and Saka (2017). These trolling practices are facilitated through the collaboration of political institutions and media outlets. Trolls act as precursors, disseminating propaganda and testing public opinion before mainstream political figures introduce favored populist policies and narratives.
The AKP’s troll army was initially established by the vice-chairman of the AKP and primarily consisted of members of AKP youth organizations. Over time, it has grown into an organization of 6,000 individuals, with 30 core members responsible for setting trending hashtags that other members then promote. Many of these trolls are graduates of pro-AKP Imam Hatip schools. It is worth noting that these trolls receive financial compensation, and there are indications that pro-AKP entities, such as TRT (Turkish Radio and Television) and the mobile phone operator Turkcell, provide additional benefits to successful trolls.
The first network map of AKtrolls was provided by Hafiza Kolektifi, a research collective based in Ankara, in October 2015. This map revealed the close connections among 113 Twitter accounts, including not only ordinary trolls but also politicians, advisors to President Erdogan, and pro-government journalists. The map was created based on the analysis of a popular and aggressive troll named @esatreis, who was identified as a youth member of the AKP. By monitoring the users followed by @esatreis using the Twitter Application Programming Interface (API) and conducting in-depth network analysis, two distinct groups were identified. The first group consisted of politicians, Erdogan’s advisors, and pro-government journalists, while the second group comprised anonymous trolls using pseudonyms. The study demonstrated that @esatreis acted as a bridge between the troll group and the politicians/journalists, with Mustafa Varank, an advisor to Erdogan and currently the Minister of Industry and Technology, serving as a central connection node between these two groups (Karatas & Saka, 2017).
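The kind of follow-network analysis described above can be illustrated with a short, self-contained sketch. The account names below are invented placeholders (not the real accounts from the Hafiza Kolektifi study), and the betweenness-centrality computation is a simple brute-force version suitable only for a toy graph; it shows how an account linking two otherwise separate clusters stands out as a bridge node:

```python
from collections import defaultdict, deque
from itertools import combinations

def betweenness(adj):
    """Brute-force betweenness centrality: for every pair of nodes, count
    how often each other node lies on a shortest path between them."""
    score = defaultdict(float)
    for s, t in combinations(list(adj), 2):
        # BFS distances from s
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        if t not in dist:
            continue
        # Enumerate all shortest s-t paths: each step must increase
        # the BFS distance from s by exactly one.
        paths, stack = [], [[s]]
        while stack:
            path = stack.pop()
            u = path[-1]
            if u == t:
                paths.append(path)
                continue
            for v in adj[u]:
                if dist.get(v) == dist[u] + 1 and dist[v] <= dist[t]:
                    stack.append(path + [v])
        # Credit interior nodes, splitting weight across the paths.
        for p in paths:
            for node in p[1:-1]:
                score[node] += 1 / len(paths)
    return score

# Toy follow network: a dense cluster of anonymous trolls, a dense
# cluster of officials/journalists, and one account linking the two.
adj = defaultdict(set)
def link(u, v):
    adj[u].add(v)
    adj[v].add(u)

trolls = ["troll_a", "troll_b", "troll_c"]
officials = ["official_x", "official_y", "official_z"]
for cluster in (trolls, officials):
    for i, u in enumerate(cluster):
        for v in cluster[i + 1:]:
            link(u, v)
for node in trolls + officials:
    link("bridge_account", node)

scores = betweenness(adj)
print(max(scores, key=scores.get))  # bridge_account
```

In this toy graph every shortest path between the two clusters passes through the linking account, so its centrality dwarfs everyone else’s; this is the structural signature by which the study could identify @esatreis as the bridge between the troll group and the politicians/journalists.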
It was revealed that politicians and state officials maintained their own anonymous troll accounts, in addition to their official ones. Instances have surfaced where AKP officials were caught promoting themselves through fake accounts. For instance, Minister of the Environment and Urbanization Mehmet Ozhaseki and AKP’s Bursa Mayor Recep Altepe were exposed for sharing supportive tweets mentioning themselves mistakenly from their official accounts instead of their fake ones. Another case involved AKP deputy Ahmet Hamdi Çamlı, who inadvertently opened his front camera while live-streaming parliamentary discussions with a fake account using a female name (@YelizAdeley) and a teenager’s profile photo. Within the AKP, different trolls seem to specialize in specific subjects aligned with the party’s policies and strategies. For example, accounts such as @WakeUpAttack and @UstAkilOyunlari fabricate conspiracy theories related to international affairs, while @AKKulis shares tweets from state officials and provides updates on AKP’s latest news and activities. Another troll account, @Baskentci, shared lists of journalists to be detained and media outlets to be shut down, as well as advanced information on post-coup attempt decisions (Tartanoglu, 2016).
AKP trolls specifically target and disrupt social media users who express opposition to the ruling party, openly identifying themselves as its supporters. While they are known within party circles, they remain anonymous to outsiders. However, some trolls, driven by rewards and recognition within their social networks, choose not to conceal their identities. In fact, Sözeri (2016) describes how certain pro-government journalists themselves act as political trolls and even lead the attacks. It is important to note that political trolls are not necessarily anonymous or isolated individuals. When aligned with a ruling party led by a president with increased powers, many trolls shed their anonymity, and some even threaten legal action when called out as trolls (Saka, 2021). Realizing that such tactics were not improving the AKP’s popularity, the party changed its approach just before the 2015 general elections by establishing the New Turkey Digital Office, which focused on more conventional forms of online propaganda (Benedictus, 2016).
The proliferation of digital disinformation, coordinated networks of fake accounts, and the deployment of political trolls have had a significant impact on online discourse in Turkey, hindering the free expression of critical voices and fostering an environment of manipulation and propaganda. Much like the Russian “web brigades,” which consist of hundreds of thousands of paid users who post positive comments about the Putin administration, the Erdogan regime also recruited an “army of trolls” to reinforce the declining hegemony of the ruling party shortly after the Gezi Park protests in 2013 (Bulut & Yoruk, 2017). Their objective is to discredit, intimidate, and suppress critical voices, often resorting to labelling journalists and celebrities as “traitors,” “terrorists,” “supporters of terrorism,” and “infidels.” Consequently, Twitter has transformed into a medium of government-led populist polarization, misinformation, and online attacks since the Gezi protests (Bulut & Yoruk, 2017). The situation worsened after the events of 2016, exposing critical voices to open cyberbullying by trolls and intensifying their persecution (Saka, 2021).
One prevalent form of political trolling is the deliberate disruption of influential voices on Twitter who contribute to politically critical hashtags or share news related to potential emergencies. Trolls and hackers primarily target professional journalists, opposition politicians, activists, and members of opposition parties. AKtrolls repeatedly attack and disturb these individuals using offensive and abusive language, labelling them as terrorists or traitors, intimidating them, and even threatening arrest. However, ordinary citizens who participate on Twitter with non-anonymous profiles are also vulnerable targets for AKtrolls. Being targeted by trolls often leads to individuals quitting social media, practicing self-censorship, and ultimately participating less in public debates (Karatas & Saka, 2017).
AKtrolls specifically target critical voices that share undesirable content or use specific hashtags. They employ tactics such as posting tweets with humiliating, intimidating, and sexually abusive insults. Doxxing, the act of revealing personal and private information about individuals, including their home addresses and phone numbers, is also a common strategy employed by AKtrolls. In some cases, AKtrolls may have connections to the security forces, particularly the police. Additionally, hacking and leaking private direct messages have been popular tactics used to discredit opposing voices on Twitter. Pro-AKP hackers affiliated with the AKtrolls have targeted numerous journalists. The initial stage often involves hacking into the journalist’s Twitter account and posting tweets that apologize to Erdogan for criticism or betrayal. Furthermore, AKtrolls frequently engage in collective reporting to Twitter in an attempt to suspend or block targeted Twitter handles (Karatas & Saka, 2017).
A significant event within the ruling AKP was the forced resignation of then-Prime Minister Ahmet Davutoglu by Erdogan. Prior to his resignation, an anonymous WordPress blog titled the “Pelikan Declaration” emerged, accusing Davutoglu of attempting to bypass Erdogan’s authority and making various allegations against him. This declaration was widely circulated by a group of AKtrolls who later became known as the “Pelikan Group.” It is worth noting that this group had close ties to a media conglomerate managed by the Albayrak Family, particularly Berat Albayrak, Erdogan’s son-in-law and Turkey’s former Minister of Economy, as well as his elder brother and media mogul Serhat Albayrak (Saka, 2021).
AKbots
The Erdogan regime extensively utilizes internet bots, software applications that run automated tasks over the Internet, to support paid AKtrolls (Yesil et al., 2017). Researchers have demonstrated that in the aftermath of the Ankara bombings in October 2015, the heavy use of automated bots played a crucial role in countering anti-AKP discourse. Twitter even banned a bot-powered hashtag that praised President Erdogan, leading Turkish ministers to claim a global conspiracy against him (Hurriyet Daily News, 2016; Lapowsky, 2015).
The use of automated bots differs from having multiple accounts in terms of scale. The presence of bots becomes noticeable when a message is replicated or retweeted to more than a few hundred other accounts. It is worth noting that as of November 2016, Istanbul and Ankara ranked as the top two cities for AKbot usage, according to the major internet security company Norton (Paganini, 2016; Yesil et al., 2017; Eldem, 2020).
Furthermore, DFRLab (2018) has revealed that many tactics, including doxxing (revealing personal information), are employed through cross-platform coordination. It is important to recognize that in the Turkish context, the influence of AKtrolls extends beyond internet platforms and involves close cooperation with conventional media outlets under Erdogan’s control (Saka, 2021). In October 2019, DFRLab identified a network of inauthentic accounts that aimed to mobilize domestic support for the Turkish government’s fight against the Kurdish People’s Protection Units (YPG) in Syria (Grossman et al., 2020). This network involved fabricated personas created on the same day with similar usernames, several pro-AKP retweet rings, and centrally managed compromised accounts used for AKP propaganda. The tweets originating from these accounts criticized the pro-Kurdish HDP, accusing it of terrorism and of employing social media manipulation. The tweets also targeted the main opposition party, CHP.
Additionally, the accounts promoted the 2017 Turkish constitutional referendum, which consolidated power in Erdogan, and sought to increase domestic support for Turkish intervention in Syria. Some English-language tweets attempted to bolster the international legitimacy of Turkey’s offensive in October 2019, praising Turkey for accepting Syrian refugees and criticizing the refugee policies of several Western nations. The dataset of accounts included individuals who appeared to be leaders of local AKP branches, members of digital marketing firms, sports fans, as well as clearly fabricated personalities or members of retweet rings (Grossman et al., 2020).
In 2019, a significant proportion of the daily top ten Twitter trends in Turkey were generated by fake accounts or bots, averaging 26.7 percent. The impact was even higher for the top five Twitter trends, reaching 47.5 percent (Elmas, 2023). State-organized hate speech, trolls, and online harassment often go unchecked (Briar, 2020).
In 2020, Twitter took action to remove over 7,000 accounts associated with the youth wing of the ruling AKP. These accounts were responsible for generating more than 37 million tweets, which aimed to create a false perception of grassroots support for government policies, promote AKP perspectives, and criticize its opponents. Many of these accounts were found to be fake, while others belonged to real individuals whose accounts had been compromised and controlled by AKP supporters. Fahrettin Altun, Erdogan’s communications director, issued threats against Twitter for removing this large network of government-aligned fake and compromised accounts (Twitter Safety, 2020; HRW, 2023a).
A study presented at the ACM Web Conference 2023 identified Turkey as one of the most active countries for bot networks on Twitter. These networks were found to be pushing political slogans as part of a manipulation campaign in the lead-up to the 2023 elections. Alongside the reactivated bots, the main opposition presidential candidate, Kilicdaroglu, warned about the circulation of algorithmically fabricated audio and video clips aimed at discrediting him (Karatas & Saka, 2017).
Bots on social media engage in malicious activities such as amplifying harmful narratives, spreading disinformation, and astroturfing. Elmas (2023) detected over 212,000 such bots on Twitter targeting Turkish trends, referring to them as “astrobots.” Twitter has purged these bots en masse six times since June 2018. According to Elmas’ study, the percentage of fake trends on Twitter varied over time. Between January 2021 and November 2021, the average daily percentage of fake trends was 30 percent. After Twitter purged bots around November 2021, the share of fake trends decreased to 10 percent in March 2022. However, it started to rise again and reached 20 percent by November 2022. As of April 7, 2023, just before the 2023 Turkish election, the attacks continued, and the percentage of fake trends fluctuated between 35 percent and 9 percent (on weekends). Notably, many bots in the dataset were silent, meaning they did not actively post tweets. Instead, they were used to create fake trends by posting tweets promoting a trend and immediately deleting them. This silent behaviour makes it challenging for bot detection methods to identify them, with 87 percent of the bot accounts remaining silent for at least one month (Elmas, 2023).
In May 2023, during the election month, Turkey saw 145 million tweets shared from 12,479,000 accounts, with 23 percent of these identified as bot accounts by the Turkish General Directorate of Security. An examination of the top 10 trending hashtags revealed that 52 percent of accounts using these hashtags were bot accounts (Bulur, 2022). It was also reported that approximately 12,000 Russian- and Hungarian-speaking Twitter accounts had been reactivated, along with reactivated Turkish-speaking accounts, accompanied by numerous bot followers to amplify their posts. Although only 27 percent of the Turkish population is believed to use Twitter, the impact is significant, with 20 percent of the trending topics on Turkish Twitter in 2023 being manipulated and not reflective of public discourse. A dataset covering the period from 2013 to 2023 indicated that 20 to 50 percent of trending topics in Turkey were fake and primarily propelled by bots (Soylu, 2023; Unker, 2023).
Hackers
The Erdogan regime’s extensive investments in domestic and global information operations include the recruitment of hackers worldwide. The regime has also established a “white hat” hacker team, ostensibly to enhance Turkey’s cyber-defense (Yeşil et al., 2017). However, there are suspicions that this team has been used offensively to silence government critics (Cimpanu, 2016).
The private Cihan News Agency, known for its accurate and swift reporting of Turkish election results since the 1990s, faced a significant cyberattack for the first time during the local elections on March 30, 2014, raising concerns about election security (Haber Turk, 2014). Opposition newspapers, including Zaman, Taraf, and Cumhuriyet, which faced similar cyberattacks, pointed to Ankara as the source of these attacks, raising discussions about the state and service providers’ negligence and potential involvement (Akyildiz, 2014).
A similar situation recurred during the 2015 general elections when concerns about the Erdogan regime manipulating election results intensified. On the evening of June 7, 2015, during the ballot counting, a cyberattack targeted the Cihan News Agency, disrupting its services. Zaman newspaper reported that the attack was linked to a special team established within TÜBİTAK, with connections to foreign countries established through TÜBİTAK computers and botnet networks used to direct the attacks and obscure the source (Internet Haber, 2015).
Starting in 2009, Erdoganist hackers also targeted a number of Western countries whose politicians expressed anti-Islamic views or criticized the Erdogan regime (Souli, 2018; Hern, 2017; Space Watch, 2018; Goud, 2018). In a striking illustration of how cyber activities often align with geopolitics, the Turkish hacktivist group Ayyildiz Tim was accused of hacking and taking control of the social media accounts of prominent US journalists in 2018. Their aim was to disseminate messages in support of President Erdogan. These incidents unfolded amid notably strained US-Turkish ties. Additionally, Turkey grappled with an economic crisis, widely attributed to Erdogan’s ill-advised economic policies, although he consistently laid the blame on the US. The US-based cybersecurity firm CrowdStrike exposed the activities of Ayyildiz Tim, a group active since 2002. There is evidence indicating potential ties between Ayyildiz Tim and security forces loyal to Erdogan (Space Watch, 2018; Goud, 2018).
In January 2023, a Turkish hacker collective known as Türk Hack Team issued a call for cyberattacks targeting Swedish authorities and banks, coupled with a warning: “If you desecrate the Quran one more time, we will begin spreading sensitive personal data of Swedes” (Hull, 2023). Several prominent Swedish websites reportedly suffered temporary outages due to DDoS attacks, for which the group claimed responsibility. Identifying themselves as nationalists, its members denied any affiliation with Erdogan, who had previously stated that Sweden should not expect Turkish support for its NATO bid after the Quran incident (Skold, 2023).
Meanwhile, in the lead-up to the 2023 presidential elections, Turkey’s primary opposition leader and presidential candidate, Kilicdaroglu, made allegations that the ruling AKP had engaged foreign hackers to orchestrate an online campaign against him, employing fabricated videos and images (Turkish Minute, 2023a).
Demonstrating the Erdogan regime’s keen interest in hacking endeavors, an annual event known as “Hack Istanbul” has been hosted by Turkey since 2018. This unique competition challenges hackers worldwide with sophisticated real-world cyberattack scenarios crafted under the guidance of leading global experts (Hurriyet Daily News, 2021). The Turkish Presidency’s Digital Transformation Office has been responsible for organizing these hacking competitions, which offer substantial financial rewards. Furthermore, the regime has initiated Cyber Intelligence Contests as part of its training campaigns, effectively expanding the pool of individuals with cybersecurity skills (Cyber Intelligence Contest, 2021).
Conclusion
The evolution of information controls in Turkey began with first-generation techniques, such as censorship and content filtering, aimed at restricting access to specific websites and online platforms. However, as technology advanced, the government adopted more sophisticated methods. One prevalent tool has been the instrumentalization of legislation, through which laws have been enacted to curtail online freedoms and enable state surveillance. Additionally, regulatory bodies, originally intended to ensure fair practices, have been weaponized to enforce censorship and impose restrictions, eroding the independence of online platforms. Furthermore, the Turkish government has resorted to tactics like shutdowns, throttling, and content removal requests to suppress dissenting voices and control the flow of information.
In the third generation of information controls, Turkey has focused on establishing a sovereign national cyber-zone characterized by extensive surveillance practices. Advanced technologies have been employed to monitor online activities, creating a pervasive atmosphere of surveillance and curtailing privacy rights. Critical netizens, including activists, journalists, and dissidents, have faced targeted persecution, enduring harassment, intimidation, and legal prosecution to silence opposition and stifle open discourse. Moreover, regime-sponsored (dis)information campaigns have played a significant role in shaping the digital narrative.
Central to the concept of digital authoritarianism in Turkey is the extensive deployment of internet bots and automated tools. The use of internet bots, fake accounts, and orchestrated campaigns for political manipulation is indeed pervasive in Turkey, particularly in shaping public opinion, supporting government policies, and undermining political opponents. Numerous studies have revealed the extensive deployment of automated bots by the Erdogan regime and its supporters to amplify government propaganda, counter anti-government narratives, and create a false perception of grassroots support.
The deployment of individuals known as “AKtrolls” has been used to disseminate pro-government propaganda and attack dissenting voices. Automated bots have been utilized to amplify certain narratives while suppressing opposing viewpoints, distorting the digital discourse, and undermining the integrity of online discussions.
As the Turkish political landscape evolves, the role of social media in shaping public opinion and electoral outcomes remains a critical concern. The elections intensified the battle for online influence, with the government attempting to purchase accounts and engage with dark web groups. The landscape of online manipulation in Turkey is further complicated by the prevalence of fake accounts, compromised profiles, and silent bots that intermittently generate and promote false trends. Silent accounts, which delete their tweets almost immediately, are especially difficult to detect.
Additionally, the manipulation of social media in Turkey has a transnational dimension, with instances of foreign interference and coordinated campaigns coming to light. The use of extensive networks of fake or compromised accounts to amplify certain political views or spread false information on social media has become increasingly prevalent, particularly during politically sensitive periods like elections. Many of these coordinated networks are dedicated to promoting pro-Erdogan perspectives, and the regime occasionally presents their artificial presence as evidence of grassroots support for its policies.
Funding: This research was funded by Gerda Henkel Foundation, AZ 01/TG/21, Emerging Digital Technologies and the Future of Democracy in the Muslim World.
Bulut E. & E. Yoruk. (2017). “Digital populism: Trolls and political polarization of Twitter in Turkey,” International Journal of Communication. 11: 4093–4117.
Diamond, Larry. (2021). “Rebooting Democracy.” Journal of Democracy. 32: 179–83.
Dragu, Tiberiu & Yonatan Lupu. (2021). “Digital authoritarianism and the future of human rights.” International Organization. 75(4), 991-1017.
Ebert H. & T. Maurer. (2013). “Contested cyberspace and rising powers,” Third World Quarterly. 34(6), 1054–1074. doi:10.1080/01436597.2013.802502.
Eldem, Tuba. (2020). “The Governance of Turkey’s Cyberspace: Between Cyber Security and Information Security.” International Journal of Public Administration. Vol. 43, No. 5, 452–465 https://doi.org/10.1080/01900692.2019.1680689
Elmas, Tugrulcan. (2023). “Analyzing Activity and Suspension Patterns of Twitter Bots Attacking Turkish Twitter Trends by a Longitudinal Dataset.” ArXiv. April 16, 2023. https://arxiv.org/pdf/2304.07907.pdf
Grossman, Shelby; Fazil Alp Akis, Ayça Alemdaroğlu, Josh A. Goldstein & Katie Jonsson. (2020). “Political Retweet Rings and Compromised Accounts: A Twitter Influence Operation Linked to the Youth Wing of Turkey’s Ruling Party,” Stanford Internet Observatory, June 11, 2020.
Howells, Laura and Laura A. Henry. (2021). “Varieties of Digital Authoritarianism: Analyzing Russia’s Approach to Internet Governance.” Communist and Post-Communist Studies. 54: 1–27.
Karataş, Duygu & Erkan Saka. (2017). “Online political trolling in the context of post-Gezi social media in Turkey.” International Journal of Digital Television. (2017). Volume 8 Number 3. doi: 10.1386/jdtv.8.3.383_1
Kocer, Suncem. (2015). “From the ‘Worst Menace to Societies’ to the ‘Robot Lobby’: A Semantic View of Turkish Political Rhetoric on Social Media.” In: Lemi Baruh & Banu Baybars Hawks (Eds.). New Media Politics: Rethinking Activism and National Security in Cyberspace. Newcastle upon Tyne: Cambridge Scholars Publishing.
Lapowsky, Issie. (2015). “Why Twitter Is Finally Taking a Stand against Trolls,” Wired, April 21, 2015. https://www.wired.com/2015/04/twitter-abuse/ (accessed on May 11, 2023).
Morozov, Evgeny. (2012). The Net Delusion: The Dark Side of Internet Freedom. New York: Public Affairs.
Polyakova, Anna & Chris Meserole. (2019). “Exporting digital authoritarianism: The Russian and Chinese models,” Policy Brief, Democracy and Disorder Series. Washington, DC: Brookings. 1-22.
Saka, Erkan. (2021). “Networks of Political Trolling in Turkey after the Consolidation of Power Under the Presidency” (pp. 240-255). In: Digital Hate. The Global Conjuncture of Extreme Speech. Indiana University Press. https://iupress.org/9780253059253/digital-hate/
Sherman, Justin. (2021). “Digital Authoritarianism and Implications for US National Security,” The Cyber Defense Review. 6:1. 107- 118.
Timucin, Fatma. (2021). 8-Bit Iron Fist: Digital Authoritarianism in Competitive Authoritarian Regimes: The Cases of Turkey and Hungary. (Master’s thesis, Sabanci University, Istanbul, Turkey, 2021). https://research.sabanciuniv.edu/42417/ (accessed on May 11, 2023).
Topak, Ozgün E. (2019). “The authoritarian surveillant assemblage: Authoritarian state surveillance in Turkey.” Security Dialogue. 50: 454–72.
Tucker, Joshua A.; Yannis Theocharis, Margaret E. Roberts, and Pablo Barberá. (2017). “From Liberation to Turmoil: Social Media and Democracy.” Journal of Democracy. 28: 46–59.
Varnalı K. and V. Görgülü. (2015). “A social influence perspective on expressive political participation in Twitter: The case of #occupyGezi.” Information, Communication & Society. 18(1): 1–16.
Yanardagoglu, Eylem. (2018). “Communication as Political Action: Gezi Park and Online Content Producer.” In: Alternative Media in Contemporary Turkey: Sustainability, Activism, and Resistance. Murat Akser & Victoria McCollum (eds.). Rowman & Littlefield Publishers.
Yeşil, Bilge. (2016). Media in New Turkey: The Origins of an Authoritarian Neoliberal State. Champaign: University of Illinois Press.
Yesil, Bilge; Efe Kerem Sözeri & Emad Khazraee. (2017). “Turkey’s Internet Policy After the Coup Attempt: The Emergence of a Distributed Network of Online Suppression and Surveillance.” Internet Policy Observatory. February 28, 2017. https://repository.upenn.edu/internetpolicyobservatory/22 (accessed on May 11, 2023).
Zagidullin, Marat; Aziz, Nergis and Kozhakhmet, Sanat. (2021). “Government policies and attitudes to social media use among users in Turkey: The role of awareness of policies, political involvement, online trust, and party identification.” Technology in Society. https://doi.org/10.1016/j.techsoc.2021.101708
Yilmaz, Ihsan. (2023). “Digital Authoritarianism and Religion in Democratic Polities of the Global South.” In: Ihsan Yilmaz (ed.), Digital Authoritarianism and Its Religious Legitimization: The Cases of Turkey, Indonesia, Malaysia, Pakistan, and India. Singapore: Palgrave Macmillan.
Yilmaz, Ihsan & Fan Yang. (2023). “Digital Authoritarianism and Religious Populism in Turkey.” In: Ihsan Yilmaz (ed.), Digital Authoritarianism and Its Religious Legitimization: The Cases of Turkey, Indonesia, Malaysia, Pakistan, and India. Singapore: Palgrave Macmillan.
Yilmaz, Ihsan and Galib Bashirov. (2018). “The AKP after 15 years: Emergence of Erdoganism in Turkey.” Third World Quarterly 39: 1812–30.
Yilmaz, Ihsan; Akbarzadeh, Shahram & Bashirov, Galib. (2023). “Strategic Digital Information Operations (SDIOs).” Populism & Politics (P&P). European Center for Populism Studies (ECPS). September 10, 2023. https://doi.org/10.55271/pp0024a
Abstract
In this paper, we introduce the concept of “Strategic Digital Information Operations” (SDIOs), discuss the tactics and practices of SDIOs, explain the main political goals of state and non-state actors in engaging with SDIOs at home and abroad, and suggest avenues for new research. We argue that the concept of SDIOs presents a useful framework for discussing all forms of digital manipulation, at both domestic and international levels, organized by either state or non-state actors. While the literature has examined the military-political impacts of SDIOs, we still know little about the societal issues that SDIOs influence, such as emotive political mobilization, intergroup relations, social cohesion, trust, and emotional resonance among target audiences.
By Ihsan Yilmaz, Shahram Akbarzadeh* and Galib Bashirov**
Introduction
In recent years, the convergence of the digital realm and political sphere has created a dynamic environment where a wide range of state and non-state actors try to leverage digital platforms to pursue their political goals. This trend includes diverse cases, spanning from the continual targeting of autonomous media establishments in nations like Egypt and Turkey to the deliberate manipulation of electoral processes in democratic countries such as the United States (US) and the United Kingdom (UK), while also extending its reach to include extremist groups such as ISIS who use digital platforms for their propaganda endeavours (see Ingram, 2015; Theohary, 2011). These “Strategic Digital Information Operations (SDIOs),” as we call them here, refer to efforts by state and non-state actors to manipulate public opinion as well as individual and collective emotions by using digital technologies to change how people relate and respond to events in the world. As such, SDIOs involve deliberate alteration of the information environment by social and political actors to serve their interests.
We use this term – SDIOs – because it combines several facets of digital manipulation at both national and international levels. “Information operations” is a term social media companies like Facebook have adopted to describe organized communicative activities that attempt to circulate problematically inaccurate or deceptive information on their platforms. These activities are strategic because, rather than being purely communicative, they are driven by the political objectives of state and non-state actors (see Starbird et al., 2019; Hatch, 2019). We add ‘digital’ to emphasize the distinction between older forms of information operations and the new ones that operate almost exclusively in the digital realm and use far more sophisticated tools, such as artificial intelligence (AI), machine learning, and algorithmic models, to disseminate information. Of course, some aspects of digital information operations have been carried over from non-digital practices mastered over the past century. Nonetheless, the affordances of the digital environment have provided not only radically new and sophisticated tools but also an opportunity for much wider dissemination and reach for strategic information operations.
SDIOs involve various tactics used by political groups that try to shape the online environment in their favour. Their goal is to control the flow of information at the intersection of politics and social action. These tactics can also cross national borders: such operations target not only people within a country but also audiences in other nations. In this article, we briefly discuss the tactics and practices of SDIOs, explain the main political goals of state and non-state actors in engaging with SDIOs at home and abroad, and suggest avenues for new research.
Tactics and Practices of SDIOs
As researchers began to examine the many ways in which state actors have tried to manipulate domestic and foreign public opinion in their favour, disinformation became the main focus of analysis, with an emphasis on the spread of fake news, conspiracy theories, and outright lies. Various forms of disinformation have been used to create doubt and confusion among the consumers of malign content. Spreading conspiracy theories makes people doubt the truth, which weakens trust in social and political institutions. Moreover, sharing fake news and other fabricated stories weaves a web of lies that shapes what people think. While such disinformation has certainly been effective in manipulating public opinion, observers have recently noted a shift in emphasis from disinformation to more sophisticated and less discernible means of manipulation.
This shift has taken place due to growing awareness of fake news and lies in digital environments on the part of both users and platforms. As platforms such as Twitter and Facebook have increased their clampdown on such content, and as users have become more adept at spotting it, state and non-state actors have moved to more sophisticated means of digital manipulation in which content is carefully designed to change how people see things. For example, instead of outright lies or fake news, strategic actors have started to spread half-truths that create a specific version of events by conveying only part of the truth (Iwuoha, 2021). Moreover, these actors have invested massively in polished public relations messages and clever advertisements to prop up their narratives. An important tactical goal has become not simply to deceive the audience but to ‘flood’ the information space with not just false but also distracting, irrelevant, and even worthless pieces of information, with the help of trolls and bots, hired social media consultants and influencers, as well as genuine followers and believers (Mir et al., 2022).
For example, observers have noted how a prominent strategy of Chinese domestic propaganda is to ‘drown out’ dissident voices through incessant propagation of government messaging, a campaign called ‘positive energy’ (Chen et al., 2021). This Orwellian campaign involved not only the use of a massive influencer and troll army to promote government messaging but also coerced testimony from Uyghur people. In one instance, seven people of Uyghur descent were brought to a press conference to share their stories of “positive energy” and to dismiss allegations of mistreatment by the Chinese government as made-up hype against China (Mason, 2022). As such, SDIOs encompass all these tactics and practices rather than merely the means of disinformation that have so far dominated research into digital manipulation. They also show the ability of SDIOs to adapt and change over time based on the operational context. While disinformation through direct messaging remains a consistent approach, actors increasingly use subtler tactics to create distraction and confusion among their audiences, which weakens the basis of well-informed political discussion. For example, the Egyptian government has flooded the information space with news of an ‘electricity surplus’ and of Egypt’s future as ‘an electricity carrier for Europe’ amidst an ongoing economic crisis that has left millions of Egyptians without access to reliable electricity (Dawoud, 2023).
At the heart of discussions about strategic digital information operations lies the creation of narratives carefully designed to connect with their intended audiences. These narratives aren’t random; instead, they’re tailored to match how the recipients think. The interaction between these narratives and their audiences involves psychology, culture, and emotions. How the audience reacts depends not only on how convincing the content is, but also on their existing beliefs, biases, and cultural contexts (Bakir and McStay, 2018). While some people might approach these narratives with doubt, others could be drawn into self-reinforcing cycles, giving in to confirmation bias and manipulation. This back-and-forth underlines the close link between creators and consumers of strategic narratives in the digital era.
Among the many narrative tropes that SDIOs use, we want to note the increasing role ascribed to historical and religious notions to influence public opinion and political discussions. SDIOs mix past grievances and religious beliefs to make their stories more impactful and believable. Bringing up old injustices can stir up strong patriotic feelings or strengthen shared memories. At the same time, using religious stories can tap into deeply held beliefs, making people think there is divine approval or a connection to common values. This blend of history and religion makes their stories powerful and emotional, making them more effective. In Turkey, for example, the state authorities have disseminated victimhood narratives that largely rested on conspiracy theories and half-truths in order to legitimize their rule and quash dissent (Yilmaz and Shipoli, 2022). Research has noted that Islamic religious ideas and the reconstructed history of the Ottoman collapse have been strategically inserted into such narratives to elevate their influence among the Turkish masses (Yilmaz and Albayrak, 2021; Yilmaz and Demir, 2023).
Finally, it is important to stress that these information operations are not always coordinated by automated bots or pre-planned campaigns. Sometimes they emerge organically through implicit coordination among various participants, which makes the situation even more complex. Starbird et al.’s (2020) research demonstrates that online information operations involve active participation by human actors. The messages these operations spread are disseminated through online communities and various sources of information. As such, SDIOs can be ‘cooperative’ endeavours: they do not always rely on mere “bots” and “trolls” but also encompass the contribution of online crowds, both knowing and unknowing, to the propagation of false information and political propaganda. For example, during the Russian information operations surrounding the 2016 US presidential elections, agents of the St. Petersburg-based Internet Research Agency (RU-IRA) operated more than 3,000 accounts that presented themselves as people and organizations across the American political spectrum (such as Black Lives Matter and the Patriotic Journalist Network). While undertaking such ‘orchestrated’ activity, the RU-IRA also managed to infiltrate organic communities by impersonating activists within them, building networks inside those communities, and even directly contacting ‘real’ activists. In some cases, RU-IRA agents directly collaborated with activists to organize physical protests in the US (see Walker, 2017).
Goals of SDIOs
Illustration: Shutterstock.
SDIOs span both national and international contexts, targeting domestic and foreign audiences through an array of tactics designed to achieve the political goals of their organizers. In the domestic realm, SDIOs have influenced the functioning of governments and social and political institutions. In many instances, authoritarian governments use digital platforms to shape individuals’ opinions through stories, emotions, and viewpoints carefully designed to resonate with specific segments of the population. Their toolkit includes conspiracy theories that legitimize a government policy, deflect attention from a government failure, or cast doubt on the arguments of opposition parties and social actors. Governments may also present narratives in which they portray themselves as victims, manipulate facts, and spread distorted statements. For example, in Egypt, the government’s digital narratives have portrayed independent media outlets as agents of Western conspiracies designed to infiltrate and destroy the Egyptian social and political fabric. Similarly, civilian presidential candidates running against President Sisi have been labelled Western puppets bent on destabilizing Egypt (Michaelson, 2018). In China, the CCP government has used media management platforms such as iiMedia to control public opinion, including providing early warnings of ‘negative’ public opinion and helping guide the promotion of ‘positive energy’ online (Laskai, 2019).
It must also be noted that these narratives, particularly those employing victimhood tropes, are strategically deployed to trigger various emotions among the masses. In Turkey, for example, the Erdogan regime has consistently exploited a victimhood claim resting mainly on the masses’ pre-existing emotions, such as envy, disgust, humiliation, hatred, anxiety, and anger (Yilmaz, 2021). These emotions are triggered and aroused by government elites as well as government-controlled media in order to legitimize the Erdogan regime’s authoritarian rule and deflect attention from its failures (see Yilmaz, 2021; Tokdoğan, 2020).
While both sets of actors pursue political goals through digital manipulation, state and non-state actors differ in how they utilize SDIOs. On the one hand, state actors tend to be well-resourced, possessing substantial human and technological capital. They typically have access to a range of digital tools for use in domestic and foreign contexts, whether to silence critics and legitimize their rule at home or to destabilize adversaries and extend their geopolitical influence abroad. They tend to carefully plan campaigns to infiltrate foreign information systems, reshape narratives, and generate social conflict, all of which require long-term thinking and strategic foresight. On the other hand, non-state actors, including hacktivist groups and extremist organizations, may lack such resources but tend to be more adaptable to new environments. They use digital platforms to promote their causes, attract supporters, and amplify their voices, manoeuvring through the digital world with an agility that reflects the changing nature of the medium.
Research has noted the implications of information operations for democratization as authoritarian and populist governments have leveraged digital media’s features to advance their political objectives. The calculated manipulation of digital platforms by these actors serves as a conduit for amplifying narratives that bolster their policies, worldviews, and perspectives. Authoritarian governments utilize digital censorship and surveillance to suppress dissenting voices and exert control over digital narratives. Populist leaders, in turn, harness the immediacy and interactive nature of social media to establish direct, emotional connections with their constituents, bypassing traditional gatekeepers (Perloff, 2021). By capitalizing on the resonance of online platforms, these actors perpetuate narratives that exploit societal grievances, positioning themselves as advocates for the marginalized while vilifying opposing viewpoints (Postill, 2018).
A Specific International SDIO: Sharp Power
In the international arena, SDIOs are transformed into tools of geopolitical orchestration and influence projection. Here, digital strategies manifest as instruments designed to strike a chord with international audiences, sowing seeds of social and political division in the target countries that perpetrators seek to destabilize. These efforts generate support for both the domestic and foreign policy objectives of the perpetrators, often exceeding the boundaries of the conventional notion of soft power and giving rise to what has been termed “sharp power” (Walker, 2018). This variant of influence extends beyond the benign strategies commonly associated with “soft power,” taking on a more coercive character whereby “it seeks to pierce, penetrate, or perforate the political and information environment” (Walker, 2018: 12; Fisher, 2020; Elswah and Alimardani, 2021).
The emergence of “sharp power” marks a significant shift in the dynamics of external influence, as digital platforms are used to coercively reshape geopolitical interactions among major powers such as the US, China, and Russia, as well as middle powers such as Australia, Turkey, and Egypt. For example, over the last decade, Australian public authorities, media entities, and civil society organizations have been systematically targeted by Chinese sharp power operations, including lavish donations to the campaigns of useful political candidates, harassment of journalists, and spying on Chinese students on university campuses (The Economist, 2017).
Social Impacts of SDIOs
The study of strategic information operations is not new; scholars have documented US and Soviet attempts at influencing each other’s information environments since the start of the Cold War (see Martin, 1982). Nonetheless, we note that strategic information operations have been studied mostly within two fields: military influence and social media analysis, with the political science literature mostly discussing elements of the concept without fully operationalizing it.
On the one hand, scholars working within military studies have rightly pointed out the strategic reasoning of information operations for international politics (see Rattray, 2001; Kania and Costello, 2018). For example, Kania and Costello (2018: 105) showed how the creation of the Strategic Support Force within the Chinese army structure was aimed at “dominance in space, cyberspace, and the electromagnetic domain,” thus generating synergy among these three domains, and building capacity for strategic information operations. States have also been manipulating the information environment to influence the internal affairs of their adversaries for decades. This has led to discussion of information operations as a potential threat to national security and stability (Hatch, 2019).
On the other hand, those working on social media analysis have sought to explain how these information operations are carried out in social media environments. Researchers have identified the technical means through which sophisticated tools of manipulation have been deployed on platforms such as Twitter and Facebook, leading to the spread of dis/misinformation (see Starbird et al., 2019). Among other things, this literature has helped us understand why certain pieces of information resonate with users and generate a response: content that is surreal, exaggerated, emotional, or persuasive, or that features shocking, clickbait-style images, tends to generate stronger engagement.
The political science literature has noted various ways in which specific forms of mis/disinformation have affected political discussions, mostly in democratic countries, without utilizing SDIOs as an umbrella term. In democratic contexts, the rapid dissemination of misinformation and divisive narratives poses a substantial threat, corroding informed decision-making and hindering the robust exchange of ideas. Trust, a cornerstone of functional democracies, becomes fragile as manipulation proliferates, eroding institutional credibility and undermining the fundamental tenets of democratic governance. For example, in the US, the Russian information operations around the 2016 Presidential Elections targeted key political institutions such as the political parties, Congress, and the courts through hacking, manipulative messaging, and social media campaigns, eroding American citizens’ trust in these institutions (see Benkler et al., 2018).
While the literature has covered such issues, the social aspects have not received as much attention. We have seen that SDIOs create significant social impacts on social cohesion, polarization, intergroup relations, and radicalization, to name a few. However, the literature’s discussion of these concepts has been limited to technical or political aspects. For example, studies of polarization either demonstrate how these operations polarize discourse on the internet or focus on political polarization (e.g., between the left and the right, or the majority and minorities) (e.g., Howard et al., 2018; Neyazi, 2020), while overlooking wider societal polarization. Moreover, we need further investigation into how social media platforms amplify the impact of information operations on group dynamics; specifically, whether social media content exacerbates polarization and reinforces group identities. This is premised on the fact that the impact of SDIOs extends beyond individual psychology, permeating the collective fabric of societies and democratic institutions. By exploiting digital platforms, these operations can foster polarization, exacerbate existing divisions, and undermine the foundations of social cohesion.
Impacts of SDIOs on Individual and Collective Emotions
Illustration: Shutterstock / Vchal.
In the context of social issues, an important underexplored aspect is the emotional dimension. SDIOs seek to provoke a wide range of emotions among their targets, including negative, positive, and ambivalent feelings. They generate these emotional responses to achieve various political goals: gaining support for their political causes, undermining opposing groups, eroding trust in society, marginalizing minority groups, and making people question the credibility of independent media outlets. These operations are usually planned to trigger specific emotional reactions that align with the intentions of the perpetrators. For example, Ghanem et al. (2020) found that the propagation of fake news on social media aims to manipulate the feelings of readers “by using extreme positive and negative emotions, triggering a sense of ‘calmness’ to confuse the readers and enforce a feeling of confidence.” However, further research is needed to understand how such emotional responses generate social impacts such as intergroup resentment, xenophobic fear, and anger, potentially leading to societal dissent and upheaval. Conversely, positive emotions like empathy and camaraderie can foster social unity and rally support around social causes. The strategic coordination of emotional experiences therefore stands as an important dimension of SDIOs that requires further research.
The final underexplored area we want to emphasize pertains to the content of strategic narratives, including the social and political reasons behind their resonance within target societies. For example, in addition to the content of conspiracy narratives, new research needs to identify why and how certain narratives work in specific social contexts and not in others. Research needs to investigate how historical events, cultural norms, and collective memories shape the reception and resonance of strategic narratives. For instance, narratives that invoke historical grievances might gain traction in societies with unresolved historical conflicts. Further research can explore how strategic narratives tap into individuals’ sense of identity and belonging. Narratives that align with or reinforce a group’s identity can gain more resonance, as they validate existing beliefs and foster a sense of unity.
Conclusion
In this paper, we introduced the concept of Strategic Digital Information Operations (SDIOs), discussed their tactics and practices, explained the main political goals of state and non-state actors engaging in SDIOs at home and abroad, and presented avenues for new research. We highlighted that the concept of SDIOs presents a useful framework for discussing all forms of digital manipulation, at both domestic and international levels, organized by either state or non-state actors. We noted that while the literature has examined the military-political impacts of SDIOs, little is known about the societal issues that SDIOs influence, such as intergroup relations, social cohesion, trust, and emotional resonance among target audiences.
Understanding how audiences perceive and react to these operations forms the foundation for generating effective countermeasures against the harmful impacts of SDIOs. Initiatives aimed at promoting digital literacy, critical thinking, and the ability to discern media authenticity will empower individuals to navigate the potentially deceptive terrain of manipulated information. Additionally, creating transparency and accountability in the algorithms that digital platforms use and rely on, along with dedicated fact-checking initiatives, will enhance the tools necessary to distinguish between truth and deceit. Furthermore, collaborative efforts involving governments, technology companies, and civil society entities can serve as a strong defense against the corrosive effects of manipulation, safeguarding the integrity of democratic discourse and the informed participation of citizens.
Finally, we note that the examination of SDIOs demands a comprehensive range of methodologies drawn from various disciplines, including quantitative and qualitative analyses that reveal patterns of engagement and shifts in emotion, trace the pathways of information dissemination, and map networks of influence. Ethnographic investigations that delve into the personal experiences of participants can provide a human-centred perspective, illuminating the psychological, emotional, and cognitive dimensions of manipulation. Effective collaboration among technology experts, academic scholars, and policymakers can foster a deeper understanding of how digital operations work and generate influence.
Funding: This research was funded by Gerda Henkel Foundation, AZ 01/TG/21, Emerging Digital Technologies and the Future of Democracy in the Muslim World.
(*) Dr. Shahram Akbarzadeh is Convenor of Middle East Studies Forum (MESF) and Deputy Director (International) of the Alfred Deakin Institute for Citizenship and Globalisation, Deakin University (Australia). He held a prestigious ARC Future Fellowship (2013-2016) on the Role of Islam in Iran’s Foreign Policy-making and recently completed a Qatar Foundation project on Sectarianism in the Middle East. Professor Akbarzadeh has an extensive publication record and has contributed to the public debate on the political processes in the Middle East, regional rivalry and Islamic militancy. In 2022 he joined Middle East Council on Global Affairs as a Non-resident Senior Fellow. Google Scholar profile: https://scholar.google.com.au/citations?hl=en&user=8p1PrpUAAAAJ&view_op=list_works Twitter: @S_Akbarzadeh Email: shahram.akbarzadeh@deakin.edu.au
(**) Dr. Galib Bashirov is an associate research fellow at the Alfred Deakin Institute for Citizenship and Globalisation, Deakin University, Australia. His research examines state-society relations in the Muslim world and US foreign policy in the Middle East and Central Asia. His previous works have been published in Review of International Political Economy, Democratization, and Third World Quarterly. Google Scholar profile: https://scholar.google.com/citations?user=qOt3Zm4AAAAJ&hl=en&oi=ao Email: galib.bashirov@deakin.edu.au
Bakir, V. & McStay, A. (2018). “Fake news and the economy of emotions: Problems, causes, solutions.” Digital Journalism, 6(2), 154-175.
Benkler, Y., Faris, R. & Roberts, H. (2018). Network propaganda: Manipulation, disinformation, and radicalization in American politics. Oxford University Press.
Chen, X., Valdovinos Kaye, D. B. & Zeng, J. (2021). “#PositiveEnergy Douyin: Constructing “playful patriotism” in a Chinese short-video application.” Chinese Journal of Communication, 14(1), 97-117.
Elswah, M. & Alimardani, M. (2021). “Propaganda Chimera: Unpacking the Iranian Perception Information Operations in the Arab World.” Open Information Science. 5(1), 163-174.
Fisher, A. (2020). Manufacturing Dissent: The Subtle Ways International Propaganda Shapes Our Politics (Doctoral dissertation, The George Washington University).
Ghanem, B., Rosso, P., & Rangel, F. (2020). “An emotional analysis of false information in social media and news articles.” ACM Transactions on Internet Technology (TOIT), 20(2), 1-18.
Hatch, B. (2019). “The future of strategic information and cyber-enabled information operations.” Journal of Strategic Security, 12(4), 69-89.
Ingram, H. J. (2015). “The strategic logic of Islamic State information operations.” Australian Journal of International Affairs, 69(6), 729-752.
Iwuoha, V. C. (2020). “‘Fake News’ and ‘Half-truths’ in the Framings of Boko Haram Narrative: Implications on International Counterterrorism Responses.” The International Journal of Intelligence, Security, and Public Affairs, 22(2), 82-103.
Kania, E. B., & Costello, J. K. (2018). “The strategic support force and the future of Chinese information operations.” The Cyber Defense Review, 3(1), 105-122.
Martin, L. J. (1982). “Disinformation: An instrumentality in the propaganda arsenal.” Political Communication, 2(1), 47-64.
Mir, A., Mitts, T. & Staniland, P. (2022). “Political Coalitions and Social Media: Evidence from Pakistan.” Perspectives on Politics, 1-20.
Neyazi, T. A. (2020). “Digital propaganda, political bots and polarized politics in India.” Asian Journal of Communication, 30(1), 39-57.
Perloff, R. M. (2021). The dynamics of political communication: Media and politics in a digital age. Routledge.
Postill, J. (2018). “Populism and social media: a global perspective.” Media, Culture & Society, 40(5), 754-765.
Rattray, G. J. (2001). Strategic warfare in cyberspace. MIT Press.
Starbird, K., Arif, A. & Wilson, T. (2019). “Disinformation as collaborative work: Surfacing the participatory nature of strategic information operations.” Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1-26.
Theohary, C. A. (2011). Terrorist use of the internet: Information operations in cyberspace. Diane Publishing.
Tokdoğan, N. (2020). “Reading politics through emotions: Ontological ressentiment as the emotional basis of current politics in Turkey.” Nations and Nationalism, 26(2), 388-406.
Walker, C. (2018). “What Is ‘Sharp Power’?” Journal of Democracy, 29(3), 9-23.
Yilmaz, I. (2021). Creating the Desired Citizen: Ideology, State and Islam in Turkey. Cambridge University Press.
Yilmaz, I. & Albayrak, I. (2021). “Instrumentalization of Religious Conspiracy Theories in Politics of Victimhood: Narrative of Turkey’s Directorate of Religious Affairs.” Religions, 12(10), 841.
Yilmaz, I. & Demir, M. (2023). “Manufacturing the Ummah: Turkey’s transnational populism and construction of the people globally.” Third World Quarterly, 44(2), 320-336.
Yilmaz, I. & Shipoli, E. (2022). “Use of past collective traumas, fear and conspiracy theories for securitization of the opposition and authoritarianisation: the Turkish case.” Democratization, 29(2), 320-336.
Ahmed, Zahid Shahab; Yilmaz, Ihsan; Akbarzadeh, Shahram & Bashirov, Galib. (2023). “Digital Authoritarianism and Activism for Digital Rights in Pakistan.” European Center for Populism Studies (ECPS). July 20, 2023. https://doi.org/10.55271/rp0042
In recent years, Pakistan has witnessed the emergence of digital authoritarianism as a governing strategy. This involves using digital technologies and surveillance mechanisms to control and monitor online activities. The government has implemented legislation like the Prevention of Electronic Crimes Act (PECA) to regulate cyberspace. However, the vague definitions of cybercrime within PECA and the broad surveillance powers granted to agencies such as the FIA and ISI raise apprehensions about potential abuses of power.
By Zahid Shahab Ahmed*, Ihsan Yilmaz, Shahram Akbarzadeh** and Galib Bashirov***
Executive Summary
With the Pakistani government implementing rules and regulations to control the online sphere, particularly through the Prevention of Electronic Crimes Act (PECA), digital authoritarianism has emerged as a significant governance tool in the country. Concerns have been raised regarding potential abuses stemming from the vague definitions of cybercrime within PECA and the extensive monitoring authority granted to intelligence services. However, despite the rise of digital authoritarianism, a countervailing force exists. Pakistan’s judiciary has displayed resistance, and the nation boasts a robust civil society that includes human rights organizations focusing on digital rights. These groups express concerns regarding data security, privacy regulations, and the internet access of marginalized communities. This study aims to examine the dynamics of digital authoritarianism in Pakistan and evaluate the role of civil society organizations in promoting and protecting digital rights.
Initially, communications in Pakistan were governed by the colonial-era Telegraph Act of 1885, later supplemented by the Pakistan Telecommunication (Re-organization) Act of 1996. The Fair Trial Act of 2013 enabled the extensive collection of evidence through monitoring. These regulations, coupled with the absence of a comprehensive digital governance bill, have facilitated continuous online surveillance. Pakistan has witnessed remarkable growth in internet penetration, with approximately one-third of the population now having internet access.
In 2016, Pakistan introduced the Prevention of Electronic Crimes Act (PECA) to address internet governance. The act imposes severe penalties for various offences, including hacking, cyberstalking, and cyberterrorism. However, concerns have been raised regarding issues such as misuse, limitations on expressive rights, and privacy violations. PECA grants increased authority to institutions like the Pakistan Telecommunication Authority (PTA) and the Federal Investigation Agency (FIA) for digital surveillance and prosecution. The PTA possesses extensive powers to block and remove content, often justifying these actions based on the grounds of promoting vulgarity or corrupting the youth. Social media companies are also required to comply with specific regulations.
Pakistan benefits from a strong network of civil society organizations that actively collaborate with international counterparts to raise awareness about digital rights. Within Pakistan, several prominent organizations are dedicated to advocating for digital rights, internet freedom, privacy, and digital literacy.
The Digital Rights Foundation is a notable non-profit organization that focuses on promoting digital rights and addressing issues such as online harassment, data security, freedom of speech, and women’s digital rights. They conduct research, provide legal support, and deliver training and awareness programs on digital security.
Bolo Bhi is another civil society organization committed to internet freedom, digital security, and open access to information. Alongside policy advocacy, research, and digital literacy initiatives, they raise public awareness about internet censorship, surveillance, and privacy concerns.
Media Matters for Democracy is a group that works on freedom of expression, digital rights, and media development in Pakistan. Through research, policy advocacy, and capacity-building initiatives, they strive to enhance online civic spaces, promote digital literacy, and safeguard digital rights.
The Internet Policy Observatory Pakistan, a research project, offers policy recommendations on issues such as data privacy, monitoring, and censorship. They track and analyse internet governance challenges in Pakistan.
Privacy International, a global organization, advocates for privacy rights and opposes intrusive monitoring practices, including in Pakistan.
These civil society organizations play crucial roles in promoting and safeguarding digital rights in Pakistan, both through local advocacy efforts and international collaborations. These organizations actively engage in research, lobbying, and capacity-building initiatives to interact with politicians, raise public awareness, and protect digital rights in Pakistan. They also address the issue of inadequate internet access, particularly in rural and underserved areas. Their initiatives serve as a reminder of the significance of inclusive policies, digital literacy programs, bridging the digital divide, and ensuring that technological advancements are guided by human rights principles.
By conducting research, these organizations generate valuable insights into the challenges and opportunities related to digital rights in Pakistan. They utilize this research to advocate for policies that protect individuals’ online freedoms and privacy. Through lobbying efforts, they aim to influence policymakers and lawmakers, urging them to enact laws and regulations that promote digital rights and address concerns regarding internet access, privacy, and surveillance. Capacity-building initiatives undertaken by these organizations involve educating individuals and communities about digital rights, empowering them to understand their rights and navigate the online world safely. These efforts are particularly vital in rural and underserved areas, where access to information and digital literacy may be limited. The organizations’ commitment to addressing the digital divide highlights the importance of ensuring equal and affordable internet access for all citizens, regardless of their geographical location or socioeconomic status. Furthermore, these organizations emphasize the need for human rights principles to underpin technological advancements. They advocate for a responsible and ethical approach to digital development, wherein individual privacy, freedom of expression, and other fundamental rights are respected and protected.
Policy Implications
– The ambiguous definitions of cybercrime within the Prevention of Electronic Crimes Act (PECA) give rise to concerns regarding potential abuses and violations of privacy. To address these issues and ensure the protection of individual rights, it is imperative for the government to undertake a thorough review and modification of the Act. This review should involve establishing precise definitions for cybercrimes and implementing stringent regulations governing the collection, storage, and utilization of personal data. Additionally, robust data protection laws need to be put in place to safeguard the privacy of individuals.
– Given the wide-ranging monitoring authority granted to intelligence services under PECA, there is a pressing need for stronger supervision and accountability mechanisms. To prevent the potential abuse of surveillance powers and protect individual rights, it is crucial to establish independent authorities tasked with overseeing and regulating the operations of intelligence services. Transparency and accountability should be prioritized through regular audits and reporting procedures, ensuring that the actions of these services align with legal and ethical standards. By implementing robust oversight measures, we can safeguard against potential abuses and maintain the balance between security concerns and individual privacy rights.
– The resilience displayed by the judiciary in Pakistan against digital authoritarianism is commendable. However, there is still room for improvement in terms of enhancing judicial independence and equipping courts with the necessary tools to effectively address matters related to digital rights. To enhance the judiciary’s understanding of the complexities involved, it is crucial to implement judicial training programs focused on technology and digital issues. These training initiatives can provide judges with the knowledge and skills needed to navigate the intricacies of digital matters and make informed decisions. By bolstering judicial comprehension in this field, the judiciary’s ability to uphold and protect digital rights in Pakistan can be strengthened.
– The government should prioritize initiatives aimed at closing the digital divide and improving internet access, especially in rural and underserved areas. This requires making substantial investments in infrastructure development, expanding broadband availability, and reducing internet service costs. Additionally, implementing digital literacy programs is crucial to equip individuals with the necessary skills to navigate the digital realm securely and effectively. By addressing these issues, the government can empower marginalized communities, bridge the digital gap, and create equal opportunities for all citizens to participate in the digital age.
– Civil society groups in Pakistan are at the forefront of promoting digital rights. Recognizing their expertise and advocacy efforts, the government should actively engage with these organizations and seek their advice and insights in formulating rules and regulations. Collaborating with civil society groups allows for a comprehensive and inclusive approach to addressing the diverse issues and viewpoints related to digital rights. By fostering meaningful dialogue and incorporating the perspectives of various stakeholders, the government can develop more effective policies that uphold and protect digital rights in Pakistan.
– Extensive public awareness campaigns are essential to educate the public about their digital rights, emphasizing the importance of online privacy and security. These awareness efforts should be inclusive, targeting various social groups, with a particular focus on marginalized communities. The aim is to equip individuals with the knowledge and skills to protect their personal information online, recognize potential risks, and take appropriate legal action if their rights are violated. By empowering people with this information, we can foster a safer and more informed digital environment, ensuring that individuals are aware of their rights and can actively safeguard their online privacy and security.
– Pakistan should actively engage in international forums and collaborate with other nations to establish best practices and standards in addressing digital rights issues, recognizing the global nature of these challenges. By participating in these forums, Pakistan can benefit from shared knowledge and experiences, leading to more effective approaches in protecting digital rights. Collaborating with organizations like Privacy International can be instrumental in leveraging their expertise and assistance to strengthen privacy rights and oppose intrusive surveillance practices. By working together on an international scale, Pakistan can contribute to the development of robust frameworks for digital rights protection and ensure that privacy and individual freedoms are upheld in the digital realm.
Introduction
Policemen stand guard to avoid any untoward incident at Kati Pahari road as security has been tightened in city due to violence on July 06, 2011 in Karachi. Photo: Asianet-Pakistan.
Pakistan’s political landscape has been profoundly shaped by its historical trajectory, which has been marred by violence, religious divisions, and an intricate struggle for identity. The country has faced challenges in establishing a stable democracy, with periods of military dictatorship undermining democratic processes. Governance issues, such as limited freedom of the press, restricted right to protest, and interference from the military establishment, have cast a shadow on Pakistan’s democratic credentials. Furthermore, the rise of digital authoritarianism has added a new dimension to the country’s political landscape.
To govern the digital sphere, the government has implemented laws and regulations, with the Prevention of Electronic Crimes Act (PECA) serving as foundational legislation. However, concerns arise from the ambiguous definitions of cybercrime in PECA, and the extensive surveillance powers granted to civil and military intelligence agencies, raising the potential for abuse of power. The state has invested in technological capabilities for online monitoring, including web monitoring systems and social media monitoring cells. This digital surveillance infrastructure, combined with the expanded role of state institutions, reinforces the government’s control over cyberspace and its citizens’ privacy.
While digital authoritarianism is on the rise, characterized by increased surveillance, internet shutdowns, and restrictions on dissent, there exists a counterbalancing force. Pakistan’s judiciary has demonstrated resistance to encroachments on digital rights, and a robust civil society, including human rights organizations focusing on digital rights, actively advocates for their protection in the country. These organizations voice concerns regarding data protection and privacy laws and advocate for equitable access to the internet, especially for marginalized populations in regions like ex-FATA and Balochistan.
This report aims to delve into the various dynamics of digital authoritarianism in Pakistan and examine the role of civil society organizations in promoting and safeguarding digital rights within the country.
Pakistan is a country that has seen violence and brutality since its formation in 1947. Following World War II, the British Raj withdrew from the Indian Subcontinent, creating independent states of India and Pakistan. Pakistan was created as a homeland for the Muslims of the Indian Subcontinent, leading to millions of people migrating across the newly created borders between India and Pakistan. The Great Partition became the largest mass migration event of the twentieth century, but it was also marred by violent hate crimes based on faith, resulting in mass murder, mob lynching, looting, and rape of citizens on both sides of the borders (Talbot, 2009; Menon, 2012; Khan, 2017).
In addition to its traumatic inception, Pakistan has constantly struggled with its identity as a young nation-state. Despite being a ‘Muslim’ state, Pakistan at the time of its creation hosted a 23 percent non-Muslim population, which has since dwindled to 4 percent, while newly independent India housed, and still houses, millions of Muslims (Mehfooz, 2021). Adding to this, the 1971 civil war led to the separation of East Pakistan from the union and the creation of Bangladesh (Hossain, 2021, 2018). The founding idea of ‘a land for Muslims’ has thus been in jeopardy almost since the country’s formation. Notably, while Pakistan was championed as a homeland for Muslims, it legally remains a highly colonial-inspired state in terms of its laws and constitution (Yilmaz, 2016). While it does use Sharia’s guiding principles to form laws, it remains democratic and not purely ‘Islamic’ in its legal and governance aspects (Yilmaz, 2016). For many hard-line clerics and right-wing groups, this has deepened the identity crisis. The exclusive emphasis at the country’s creation on the idea of a ‘land for Muslims,’ and the paradoxes that followed, have taken the shape of an ontological crisis, jolting the state’s foundations and fueling various forms of political and social turmoil over the last seven decades.
While Pakistan remains a democracy, its track record is tarnished by several military authoritarian regimes. The country has spent decades under four different military dictatorships, one of which, under General Yahya Khan during 1969-1971, coincided with the civil war in East Pakistan (Sheikh and Ahmed, 2020). The latest military rule was that of General Pervez Musharraf from 1999 to 2008. While the 2008 general elections paved the way for a successive period of democracy, the country’s ranking on democratic measures and indexes has remained murky (see Table 1). Beyond poor governance, the country suffers from a lack of freedom of the press, restrictions on the right to protest, arbitrary arrests, enforced disappearances, and gag orders on the media. The military, referred to as “the establishment,” regularly interferes with democratic processes in the country (Shafqat, 2019). Due to its closeness to the military establishment, Imran Khan’s government of 2018-2022 was called a hybrid regime, and the same is said of the current government led by the Pakistan Democratic Movement.
Table 1 Overview of Democracy in Pakistan
Freedom House (2023)
Overall score 37/100
Political Rights 15/40
Civil Liberties 22/60
Democracy Index (2022)
Overall score 4.13/10
Electoral process and pluralism 5.67/10
Functioning of government 5/10
Political participation 2.78/10
Political culture 2.5/10
Civil liberties 4.71/10
Human Freedom Index (2022)
Overall rank 146 out of 165 countries
Personal freedom 5.2/10
Human freedom 5.44/10
Economic freedom 6.03/10
Reporters Without Borders – World Press Freedom Index (2023)
Rank 150 out of 180 countries; score 39.95
Political indicator rank 139/180
Economic indicator 136/180
Legislative indicator 130/180
Social indicator 140/180
Security indicator 176/180
Data sources: (FH 2023; RWB 2023; FI 2022; Economist 2022)
In addition to these troubles, the country has faced waves of home-grown terrorism and a mushrooming of far-right vigilantism by right-wing Islamist groups since the early 2000s. Despite successive military operations and some ‘peace building’ efforts, the year 2023 marked the return of radical Islamists (Tehrik-e-Taliban Pakistan and other armed groups) to various areas of the country, resulting in numerous violent incidents such as clashes with security forces and suicide bombings targeting civilians (Jadoon, 2021). Similarly, radical Islamic groups in both urban and rural areas have spread a culture of vigilantism or ‘mob justice,’ in which vandalism, physical attacks, and at times mob lynching have become common practice to show discontent over blasphemous comments by international leaders, local politicians, and, many times, average citizens accused of blasphemy (Yilmaz and Shakil, 2022). These violent mobs, targeting non-Muslims and sectarian minorities in the name of the ‘protection of Islam,’ have caused deaths, vandalism of places of worship, and the loss of victims’ property (Yilmaz and Shakil, 2022).
The overview of the country’s current political situation is quite grim. Amid this chaos of poor governance, a tradition of authoritarianism, military interference, radicalization, and disregard for human rights, the country has become fertile ground for digital authoritarianism as well. Since the late 2000s and through the 2010s, the state has replicated its oppressive tactics in the online realm. The rest of this report presents the history and current situation of digital authoritarianism.
The way modern humans interact with information has been fundamentally transformed by the Internet. Nowadays, anyone with a secure connection to the World Wide Web has access to a wealth of information that is freely and readily available. However, this easy access to information has led to an increasing demand for internet governance (Kurbalija, 2016), which refers to the creation and management of rules, policies, and practices in the digital realm. How internet governance is carried out varies from country to country. For example, in India, internet blackouts are commonly employed to suppress protests against the government, thereby violating citizens’ right to protest (Momen and Das, 2021). Yang and Muller’s research on China’s internet censorship demonstrates how authoritarian governments can shape public opinion and quell potential resistance through cyberspace governance. Even in Western democracies, internet governance has sparked significant debates, particularly concerning the state’s surveillance of citizens (Zajko, 2016). Despite concerns about overreaching internet governance, its implementation is justifiable as it helps combat hate speech online, restricts access to child pornography, and flags other potential criminal activities (Kurbalija, 2016). There are also various institutions involved in shaping the internet governance framework, including state institutions, telecommunication companies, international organizations, digital businesses such as social media giants, and civil society.
Pakistan is governed under the 1973 Constitution. Article 14 of the Constitution guarantees citizens the right to privacy (GOP, 2012), and the article’s concept of “privacy of the home” has been extended and interpreted to cover digital communications. However, the right to privacy is subject to law under various circumstances, meaning this freedom is not absolute. In addition, before the advent of the internet, communications were governed by the colonial-era Telegraph Act of 1885 and the colonial-inspired Pakistan Telecommunication (Re-organization) Act of 1996 (PTA, 2023). Clauses in both Acts allowed for mass surveillance of the telephone and other forms of communication. Even before 2016, when the first law to govern digital space came into being, the Fair Trial Act of 2013 allowed the mass gathering of surveillance evidence against the accused, enabling a culture of mass surveillance in the country. The country’s roots in colonial laws, which were themselves authoritarian, and its continued use of surveillance through successive laws ensured that, even without a digital governance bill, there was plenty of room for constant monitoring of online activities.
It is also important to understand who uses the internet in Pakistan, so that it is clear who is most impacted by the host of new laws and programs designed for internet governance. In 2005, the internet penetration rate was 6.3 percent; it nearly tripled to 15.51 percent by 2017 and reached 36.7 percent at the start of 2023 (Kemp, 2023). While this rate may be lower than in much of the global South, it represents a significant number, as over 87.35 million Pakistanis use the internet, and nearly 4.4 million people started using it just between 2022 and 2023 (Kemp, 2023). This exponential growth can be explained not only by the expansion of access but also by demographics: the last census, conducted in 2017, found that nearly 40 percent of Pakistani citizens were under the age of 14 (UNDP, 2019). The census also indicates a youth dividend, noting that “64 percent of the nation is younger than 30 and 29 percent of Pakistanis are between 15 and 29” (UNDP, 2019). This youth bulge may be responsible for an increased appetite for internet consumption. Despite the rapid increase in internet usage, it is important to remember that two-thirds of the population still does not have access to the internet (Kemp, 2023). Nonetheless, over the last decade, the government has focused its energy on extending its governance to the digital realm.
It is also important to note that the Islamist elements espoused by political parties in power, along with the “establishment” (the military involved in the politics of the country), are also reflected in digital governance. While it is common to use cyber tools to curb the free speech of civilian protesters and the political opposition, it has also become common practice to justify closing websites such as Wikipedia and platforms such as Facebook and YouTube out of respect for “Islamic values and sentiments” (Yilmaz and Saleem, 2022; Yilmaz, 2023). For instance, former Prime Minister Imran Khan openly advocated banning content he deemed “dangerous” for Muslim youth’s consumption: “Character building is very crucial in the modern tech-savvy era. The proliferation of tech gadgets and 3G/4G internet technology has made all sorts of content available to everyone […] We need to protect our youth, especially kids, from being exposed to immoral and unethical content available online” (Jamal, 2021). Khan is not alone, as various other political parties have a history of banning social media platforms over accusations that they published “blasphemous” content. This practice of banning websites, or issuing them ultimatums to remove blasphemous content, has been in motion since the first ban of Wikipedia in 2010 (Zaccaro, 2023). At the same time, the establishment has used its public relations agency, Inter-Services Public Relations (ISPR), to warn citizens of the dangers of “foreign” content in the online space. It terms this “fifth-generation warfare,” allegedly propagated by the “Jewish lobby,” “India,” and other “foreign powers” to hurt and misguide Pakistani citizens (Yilmaz and Saleem, 2022). To curb this “fifth-generation warfare,” the ISPR has mixed jingoism with Islamist jihadist ideals to ensure that the public remains “safe” from these influences on online platforms.
In such an ecosystem, the state actively targets opposition politicians, journalists, and human rights defenders through its vast web of cyber governance, rendering its activities digitally authoritarian.
Digital Governance
In 2014, the government of Pakistan addressed internet governance by developing a legal framework. This resulted in the creation of the Prevention of Electronic Crimes Act (PECA), which aimed to combat the misuse of electronic media and technology. The Act was passed by the Pakistani parliament in August 2016 and became effective in November of the same year. Pakistan, like many other countries, experienced a significant increase in the use of electronic media and technology. While these developments brought numerous benefits, they also posed challenges such as cybercrime, extremist propaganda, and hate speech on the internet. The PECA was formulated to tackle these challenges and establish a legal structure for addressing cybercrime while safeguarding the rights of citizens in the digital realm.
The Act encompasses a wide range of offences, including hacking, identity theft, cyberstalking, and cyberterrorism. It imposes strict penalties for those found guilty of committing such crimes. It is important to note that at the time, Pakistan was dealing with severe terrorism issues, and the PECA was presented as a vital measure for counterterrorism efforts. This context played a significant role in its swift approval within approximately a year and a half of the draft bill being presented in the National Parliament. However, critics have expressed concerns about the potential for abuse, the impact on freedom of expression, and the privacy implications of the Act. Some argue that it could be used to suppress dissenting voices and restrict access to information (Aziz, 2022). Criticisms also focus on the Act’s vague definitions of offences, lack of oversight, and accountability in its implementation.
PECA includes several key components of internet governance. It grants increased authority to public institutions such as the Pakistan Telecommunication Authority (PTA) and the Federal Investigation Agency (FIA) for digital surveillance, data collection, and prosecution. The PTA has broad powers under Section 37 of PECA to block and remove content based on ambiguous criteria, often justifying these actions by claiming certain platforms promote “vulgarity” or “corruption of youth.” Additionally, the Act requires social media companies operating in Pakistan to comply with the law and remove any unlawful content within 24 hours of being notified by authorities. Failure to do so can result in significant fines. The government has also mandated these companies to establish local offices in Pakistan and appoint designated representatives to collaborate with law enforcement agencies.
Pakistan has invested resources to strengthen its control over the use of digital technologies in the country. PECA established a comprehensive legal framework for identifying and addressing electronic crimes, including methods for investigation, prosecution, and adjudication. Some articles of the Act specifically focus on terrorism-related online material, including hate speech. While the implementation of PECA is viewed by the state as a crucial step in counterterrorism efforts, its controversial aspects and potential impact on freedom of expression have raised concerns. Nonetheless, the Act received unanimous approval in both the Senate and the National Assembly, as all political stakeholders recognized the significance of counterterrorism measures.
Since 2016, Pakistan has created a host of laws, and amendments to existing laws, specifically to govern cyberspace. The foundational law is the Prevention of Electronic Crimes Act (PECA). According to Section 21(d) of this legislation, “Whoever intentionally and publicly exhibits or displays or transmits any information which cultivates, entices or induces a natural person to engage in a sexually explicit act, through an information system to harm a natural person or his reputation, or to take revenge, or to create hatred or to blackmail, shall be punished with imprisonment for a term which may extend to five years or with fine which may extend to five million rupees or with both” (GOP, 2016, 11). While on the surface the law seems a needed measure to curb cybercrimes such as cyberbullying and hacking, as well as a tool against child pornography rings and terrorism, it is quite ambiguous in its definition of a “cybercrime,” which leaves it ripe for abuse in the hands of the oppressive state apparatus (Shad, 2022).
In addition to being vague, the laws grant the Federal Investigation Agency (FIA) unrestricted powers of surveillance over social media, as well as permission to retain data and seize digital tools (GOP, 2016). This law has paved the way for the state to invest heavily in technology to govern cyberspace. For instance, in 2018 the Pakistan Telecommunication Authority (PTA) purchased a “web monitoring system” from Sandvine that uses deep packet inspection (DPI) technology (Ali & Jahangir, 2019). The FIA and the military-operated Inter-Services Intelligence (ISI) have likewise carried out mass surveillance of anyone deemed a threat via well-established social media monitoring cells, as a means to counter “threats” and “terrorists” (Pasha, 2017).
In addition to legal measures, the state has redefined the role of the National Database and Registration Authority (NADRA). The agency is a national database, but its role has been expanded. In a shocking revelation in a WikiLeaks document, biometric data of Pakistani citizens held by NADRA was provided to the UK’s Government Communications Headquarters (GCHQ) and the US National Security Agency (NSA) to investigate “terrorists” (Digital Rights Foundation, 2022). In 2016 and 2018, ‘safe city projects’ were launched in Islamabad and Lahore, respectively. These projects were part of the China-Pakistan Economic Corridor (CPEC), which ushered in a new wave of collaboration between the two countries. The safe city projects were built on a loan from the Export–Import Bank of China and featured a collaboration between Huawei, National Engineering Services Pakistan (NESPAK), and Arup, which installed mass surveillance devices to track criminal activity but also to record citizens’ movements via cameras, vehicle number plate tracking, the interception of telecommunications, drone footage, facial recognition software, and more (Ahmed, 2021).
Again, while these efforts are showcased as means to curb crime, there has been little proof of this. For instance, the crime rate in Islamabad rose by 33 percent in 2016, a year after the system was implemented, and the national crime rate rose by 11 percent by 2018 (Hillman and McCalpin, 2019). The surveillance system instead aids the state in mass monitoring of citizens’ activities, often targeting political and social opposition.
In addition to laws and technologies to aid cyber governance, the state has a history, since the early 2010s, of blocking internet access to maintain “law and order.” The PTA manages this domain and often restricts internet access at certain times and in specific regions. One of the most frequent justifications is curbing terrorism. For instance, during religious gatherings (e.g., Ashura for the Shi’as) and political demonstrations, internet shutdowns have become the norm in the name of law and order (Kamran, 2017). These shutdowns are quite often targeted to curb the spread of information about the political opposition. While in power, the Pakistan Tehreek-i-Insaf (PTI) used this mechanism to curb online coverage of rallies by its political opposition, the Pakistan Muslim League-Nawaz (PML-N); now that PTI is out of power, its own gatherings in Lahore fall victim to internet blackouts in the same manner (Raza, 2023).
Examples of Digital Authoritarianism
Photo: Aleksandar Malivuk.
One of the most prominent examples of digital authoritarianism in Pakistan is its banning and blocking of content on the internet. As discussed, the most prominent justification for this gagging is the need to protect people from blasphemous or false information. YouTube was banned in the country from 2012 to 2016 after a video surfaced mocking the Prophet Muhammad (Wilkers, 2016). Similarly, TikTok was banned on two separate occasions, in 2020 and 2021, for “immorality and obscenity in the country,” for a few days each time (Masood, 2020). The PTA has also banned Twitter several times over the last decade, at various points between 2012 and 2021, each time because of the spread of sacrilegious content (Verma, 2021; Reuters, 2012).
In addition to gagging websites, internet blackouts are a routine procedure. Historically, internet shutdowns were usually put in place to stop terrorist activities on days of religious significance when people gathered en masse, such as the processions at Ashura, the rallies of Eid Milad-un-Nabi (the Prophet Muhammad’s birthday), or events of mass prayer such as Eid-ul-Fitr and Eid-ul-Adha. However, the government has expanded these bans to target the opposition. For instance, in 2021, former Prime Minister Nawaz Sharif was invited to speak at the Asma Jahangir Conference. Being in self-imposed exile, Sharif took part via an online address, which was blacked out by a targeted internet shutdown because the politician voiced his discontent with the establishment and the then-ruling PTI-led government (The News, 2021a). Conversely, in 2023, with PTI out of power, the former opposition formed an alliance government, and in May 2023 Imran Khan was arrested for not appearing in several court cases. After this arrest, mass protests by PTI supporters sprang up across major cities in the provinces of Punjab and Khyber Pakhtunkhwa (KP) (Mao, 2023). This led to a blanket internet shutdown, lasting over four days, to curb the protests (Mao, 2023). In addition, internet blockage is quite routine in western Pakistan, in the regions of Swat, FATA, and adjoining areas, and in parts of Balochistan, where military security forces regularly clash with terrorist groups ranging from separatist outfits to jihadist factions (Yilmaz and Saleem, 2022).
Internet surveillance has also peaked in Pakistan, and the Pakistani military has been the major stakeholder in this process. In 2021, a bill was passed ensuring that anyone who abused the military could face jail time and hefty fines (Abbasi, 2021). This bill has been instrumental in expanding surveillance of “anti-state” activities and punishing the accused. In May 2023, PTI protests led to the rioting and destruction of public property, which resulted in the Prime Minister promising that “all technology available” would be used to punish the vandals, whom some ministers called “terrorists” (Sharif, 2023). Similarly, after the unrest calmed down, various videos surfaced showing security forces and agencies using surveillance data to target peaceful protesters as well (Haq, 2022).
Furthermore, the use of technology for national security purposes has also been employed to suppress dissent, creating another dimension to the issue. The state’s overwhelming focus on national security, particularly in countering terrorism, has resulted in neglecting its responsibilities under domestic laws, as well as international agreements like the International Covenant on Civil and Political Rights and the Convention against Torture. Despite frequent incidents of data breaches and scandals involving the unauthorized release of audio and video recordings of influential political figures, judges, and journalists, there are no laws in place to safeguard against the collection of personal data and protect privacy. Civil society organizations in Pakistan have expressed concerns regarding the increasing surveillance of both the public and specific individuals such as journalists, politicians, and human rights activists (PI, 2015). They view these measures as infringements on the right to privacy. Intelligence agencies like the FIA (Federal Investigation Agency) and Inter-Services Intelligence (ISI), along with other authorities overseeing safe city projects, have enhanced their surveillance capabilities by establishing social media monitoring cells (Ahmed, 2021; Azeem, 2019; Yousafzai, 2023). While legal provisions permit digital surveillance for counterterrorism purposes such as blocking hate speech content, it appears that the state is utilizing its expanded surveillance capacity to suppress dissent (Aziz, 2022; Rehman, 2020).
Safe cities employ video cameras and other digital technologies to monitor and identify suspicious activities. Although safe cities encompass various ICT capabilities used in urban areas, the concept of ‘Smart Cities’ goes beyond that of ‘Safe Cities.’ The notion of Smart Cities involves providing internet connectivity and may progress to include electronic payment options for essential services and AI-controlled monitoring devices. Smart cities utilize technologies like high-speed communication networks, sensors, and mobile apps to enhance service delivery, improve mobility and connectivity, stimulate the digital economy, and overall enhance the well-being of citizens (Muggah, 2021; Goulding, 2019). To achieve this, vast amounts of data are leveraged to optimize various city functions, such as utilities, services, traffic management, and pollution control. The rapid expansion of smart city infrastructures globally has sparked controversy due to concerns over the widespread collection, retention, and manipulation of personal data by entities ranging from law enforcement agencies to private enterprises.
In Pakistan, successive administrations have collaborated closely with China to develop safe city infrastructure across urban areas. The Punjab Safe Cities Authority (PSCA), headquartered in Lahore, is a well-known initiative in this regard. With over 6,000 cameras and sensors installed at more than 1,500 locations in Lahore, the Punjab Police, with assistance from the PSCA, can manage traffic, combat crime, and respond to emergencies (Malik, 2022). Notably, China’s Huawei has been responsible for constructing all safe city systems in Pakistan. The first, in Islamabad, was completed in 2016 through collaboration between the National Database and Registration Authority (NADRA) of Pakistan and Huawei, with funding from China’s EX-IM Bank (Hong, 2022). Another safe city system was established in Lahore in 2018, with Huawei leading the construction and National Engineering Services Pakistan (NESPAK) and the UK-based multinational firm Arup providing consultancy and technical support (Ahmed, 2021).
The safe city infrastructure gathers information across several categories, including personal data, vehicle and traffic data, criminal profiles, crime statistics, and parking information. Given the past instances of data breaches within the NADRA database, experts have raised concerns about data security risks. In 2019, several CCTV camera images from Lahore were posted online, featuring inappropriate sexual content (Azeem, 2019). Pakistan’s safe city surveillance systems incorporate facial recognition, artificial intelligence, vehicle number plate tracking, dedicated telecommunication networks, data centers, drones, mobile applications, and intelligent transportation systems.
The effectiveness of Huawei’s safe city infrastructure in reducing urban crime has been subject to debate. Huawei has claimed, in a questionable presentation, that its safe city solutions significantly reduce crime, increase case clearance rates, shorten emergency response times, and enhance citizen satisfaction. However, investigations by the Center for Strategic and International Studies (CSIS) have indicated that these claims are greatly exaggerated, if not entirely fabricated (Hillman and McCalpin, 2019). In Islamabad, the crime rate has continued to grow, with an increase of 141.2 percent recorded from 2021 to 2022 (Azeem, 2022). Participants in research studies have expressed skepticism, stating that they have not witnessed any positive outcomes or reduction in crime rates as a result of the safe city projects. A local journalist shared the following views: “For example, in Islamabad, we see that more than 2,200 cameras are installed in only one city. But if we talk about Lahore city there are more than 6,000 cameras installed. They enable the government to monitor the movement of people. They claim that they have installed them to control the law and security situation in cities and to control the crime rate in Pakistan, but we have not seen any positive outcome in that regard through a reduction in the crime rate” (Baloch, 2022).
Despite the state’s justification that safe city projects primarily serve counterterrorism efforts, it is evident that surveillance technology is being selectively employed. While it is used to counter terrorism and publicly release videos of terrorists involved in major attacks, such as the one in Peshawar in 2023, it is also increasingly utilized to target individuals critical of the government, its officials, and state institutions like the army (Gul, 2022). Examples have emerged of facial recognition technology being used to track down and apprehend individuals who verbally attacked government figures (Nadeem, 2022). Numerous cases have been documented where people have been detained by authorities for posting critical comments on social media. In these instances, individuals are subjected to torture and coerced into making public apologies, with videos of their apologies subsequently released on social media platforms (Dawn, 2022a).
The level of surveillance implemented in Pakistan is linked to an authoritarian approach. Surveillance capabilities are being employed for political purposes rather than solely for the defense of the country or public good. Recorded videos obtained through surveillance serve as leverage for those working behind the scenes, allowing them to exert control by capturing and disseminating compromising material (Khan, 2023; Dawn, 2022b). The timing of the video releases is crucial. Detailed records are maintained on important politicians, indicating a potentially illegal and unconstitutional practice that is incompatible with a democratic society. The impact of these authoritarian measures is evident, as journalists increasingly practice self-censorship and exercise caution in their smartphone usage. Awareness of traceability and concerns over the hacking of email and social media accounts have led to heightened vigilance among social media activists, journalists, and political leaders. However, despite the challenges, Pakistanis continue to find ways to express their opinions, often resorting to satire as a means of circumventing restrictions. Notable media personalities, such as Anwar Maqsood, have managed to avoid trouble by indirectly criticizing state institutions.
The judiciary in Pakistan has been a significant source of resistance against the growing digital authoritarianism and digital control measures implemented by the state. This ongoing process involves various legal cases under the PECA, the authority of institutions like the Federal Investigation Agency (FIA), and concerns related to data protection and privacy. The PECA Amendment of 2022, which primarily aims to criminalize defamation and make it a non-bailable offence, has faced critical scrutiny from local courts. Human Rights Watch has pointed out that expanding PECA’s already extensive provisions on criminal defamation to online statements about government institutions violates Pakistan’s international obligations. Media organizations in Pakistan challenged the PECA Amendment in the Islamabad High Court, where Justice Athar Minallah declared the new legal provisions a violation of freedom of speech as guaranteed by Article 19 of the Constitution of Pakistan (Naseer, 2022). The court also instructed the interior ministry to investigate the conduct of the FIA’s Cyber Crime Wing due to concerns of power abuse and infringement of individuals’ fundamental rights. Justice Minallah emphasized that no one should fear criticism, particularly in relation to defamation and concerns raised by public officeholders regarding social media attacks. As a result, the FIA closed nearly 7,000 cases, primarily related to defamation.
Civil Society Activism for Digital Rights in Pakistan
In many ways, Pakistan still lacks adequate laws to deal with digital rights, but pressure is growing on policymakers to pay attention to privacy and data protection. This is mainly because Pakistan is home to a strong network of civil society organizations that work closely with relevant international organizations to raise awareness of digital rights issues. Several organizations in Pakistan work for digital rights, striving to protect internet freedom and privacy and to promote digital literacy. Let us look at some of the prominent organizations in this space. The Digital Rights Foundation (DRF) is a non-profit organization that focuses on the advocacy of digital rights in Pakistan. They work on various issues, including online harassment, data protection, freedom of expression, and women’s digital rights. DRF conducts research, provides legal assistance, and offers digital security training and awareness programs.
Bolo Bhi is a civil society organization that advocates for open access to information, digital security, and internet freedom in Pakistan. They engage in policy advocacy, conduct research, and provide digital literacy training. Bolo Bhi also works to raise awareness about online censorship, surveillance, and privacy issues.
Media Matters for Democracy (MMfD) is a non-profit organization that focuses on media development, digital rights, and freedom of expression in Pakistan. They work towards promoting online civic spaces, digital literacy, and defending digital rights through research, policy advocacy, and capacity-building programs.
Internet Policy Observatory Pakistan (iPOP) is a research-based initiative that aims to monitor and analyze internet governance issues in Pakistan. They conduct policy research, produce reports, and provide recommendations on topics such as data protection, surveillance, and censorship. iPOP also engages in advocacy efforts to promote a free and open internet.
Although not based in Pakistan, Privacy International is a global organization that advocates for privacy rights and challenges surveillance practices worldwide. They work with local partners and provide support in the context of Pakistan to raise awareness, carry out research, and advocate for stronger privacy protections. These organizations actively engage with policymakers, raise public awareness, and work towards protecting digital rights in Pakistan through research, advocacy, and capacity-building activities.
Internet Access
Internet connection in Pakistan. Illustration Contributor: AlexLMX.
With the proliferation of the internet worldwide, several civil society organizations have dedicated their efforts to shedding light on the significant problem of inadequate internet access within Pakistan. These organizations aim to amplify the voice of society, urging the government to invest in improving internet access. In this vein, Bytes for All, Pakistan (B4A) is a well-known digital rights organization that seeks to secure digital rights, freedom of expression, and civil liberties. To this end, they organize seminars, workshops, and training sessions, and produce various publications. For example, B4A has published annual reports on internet access in Pakistan (Haque, 2023). The 2022 report shows that there has been some progress in terms of internet access in Pakistan, but the country still lags behind many Asian countries. One key finding reveals that despite increased internet penetration, around 15 percent of the population remains without any access, while others face challenges such as slow speeds and inconsistent service, hindering meaningful internet access (Haque, 2023: 5). Pakistan ranks 118th in mobile broadband and 150th in fixed broadband, as per the B4A report (Haque, 2023: 9). The organization also raises concerns about the government’s attempts to restrict the internet and control cyberspace, including filing cases against journalists, activists, and political opponents for expressing unfavorable views on social media and proposing stricter defamation laws to counter dissent. To enhance internet access in Pakistan, B4A provides several important recommendations. These include recognizing fixed broadband as critical infrastructure and developing a national broadband strategy with a fiber plan. Additionally, improving the investment climate and financing options within the digital ecosystem and streamlining government administration are identified as essential actions for expediting implementation.
Media Matters for Democracy (MMfD) is another Pakistani organization focused on media literacy, digital democracy, progressive media, and internet regulation. They also work on integrating digital media and journalism technologies and creating sustainable initiatives in the media-tech sector. They provide several free online courses in different subjects. For example, their course “Understanding Citizen Journalism” includes 54 lessons, and “Digital Disinformation and Journalistic Responsibilities” encompasses 82 lessons (Arsalan, 2023; Khan, Mindeel and Shaukat, 2023). The organization also publishes research investigations and policy papers. In one of their comprehensive reports, titled “Connecting the Disconnected: Mapping Digital Access in Pakistan,” MMfD highlights that approximately 52.79 percent of Pakistan’s population, equivalent to 116 million people, has access to some form of internet (Kamra et al., 2022: 7). However, the report suggests that despite high teledensity indicating cellular service connectivity for nearly 88 percent of the population, there remains a significant gap in internet access, particularly in mobile and broadband services, across most parts of the country (Kamra et al., 2022: 16). Accordingly, the number of broadband subscribers stands at 116 million, 3G/4G mobile internet subscriptions at 113 million, and basic telephone subscribers at 2 million, the latter representing only 1.14 percent of the total population (Kamra et al., 2022: 17). This reveals that over 47 percent of the population remains disconnected from the internet (Kamra et al., 2022: 25).
The report stresses that various factors contribute to this gap, with disparities evident between urban and rural areas. The available data does not offer a breakdown by rural/urban or gender demographics, even though these dimensions mark significant barriers to internet connectivity. The authors also argue that the COVID-19 lockdown further exacerbated these disparities, with individuals in peripheral and rural areas facing challenges due to limited infrastructure, while low-income communities struggled to afford smartphones and internet connections. The organization advocates for ensuring that human and social justice values drive technological development and use in Pakistan, and offers some key recommendations. They emphasize the need for policies and regulations related to internet access to follow a rights-respecting model. They also underline that a core focus should be bridging the digital divide across class, gender, age, and geography, as well as increasing digital literacy. In addition, they urge the government to make the internet economy inclusive, address the need for online social norms, and empower individuals to shape their futures. Finally, the report emphasizes that building robust, secure, and resilient networks is crucial (Kamra et al., 2022).
Moreover, the efforts of civil society organizations to advocate for internet access are evident in various initiatives. One significant area of concern raised by these organizations is the Citizens Protection (Against Online Harm) Rules, issued in 2020. Regarding the obstacles these rules pose to internet access, a report published by DRF argued that they violate fundamental and constitutional rights, particularly Articles 14 and 19. The analysis emphasizes that these regulations impede the free movement of data, creating artificial barriers to information sharing and hindering global communication. Additionally, they exacerbate the lack of accessibility and affordability of internet connectivity for individuals and businesses. This issue is particularly detrimental because reducing connectivity costs is vital for expanding economic opportunities, promoting the digital economy, and generating wealth in Pakistan (DRF, 2020b).
Bolo Bhi, another digital rights organization, has also expressed concerns about the Citizen Protection laws, highlighting their attempt to gain jurisdiction over social networking platforms and access data. Their objective extends beyond content restriction to encompass accessing communication content and filtering technology. Bolo Bhi points out aspirations to establish local offices and data servers for unrestricted data access, which has been a recurring theme in previous attempts (Bolo Bhi, 2020).
The Human Rights Commission of Pakistan (HRCP), the country’s leading independent human rights body, advocates for internet access and freedom of expression as fundamental human rights in their reports. In a collaborative study titled ‘Freedom of Peaceful Assembly in Pakistan: A Legislative Review,’ released in partnership with the International Federation for Human Rights (FIDH) in March 2022, the HRCP called for a reassessment of the existing legislative framework, which still reflects policing strategies from the colonial era. Regarding internet access, the report proposes granting unrestricted media and digital access during assemblies, promoting freedom of speech and movement, rather than imposing content-based restrictions or blocking routes (HRCP, 2020).
It should also be noted that addressing the significant digital divide in Pakistani society is one of the key challenges in internet access. While limited access to technology is commonly associated with the digital divide, factors such as poverty, illiteracy, lack of computer literacy, and language barriers contribute to this issue in Pakistan. In response, the Internet Policy Observatory Pakistan (iPOP) takes concrete actions beyond workshops and reports. According to their website, they provide computers, communication equipment, software, and training to tackle the digital divide. The organization reports that most low-income households in the country find themselves on the disadvantaged side of the digital and knowledge divide. Consequently, their ability to participate effectively in the knowledge society remains significantly underdeveloped and underutilized. This situation puts these households at risk of further marginalization in a knowledge-driven society, where access to and utilization of information technology are just a fraction of the broader challenges they face (IPOP, 2023).
By and large, civil society organizations play a crucial role in advocating for improved internet access and reducing the digital divide in Pakistan. These organizations act as catalysts for change by advocating for policies and initiatives that promote equitable access to technology and bridge the gap between different segments of society. As discussed above, civil society organizations raise awareness about the importance of internet access as a fundamental right and a driver of socio-economic development. They highlight the disparities in access and the barriers faced by marginalized communities, such as low-income households, women, and rural populations. By bringing these issues to the forefront, civil society organizations can create a sense of urgency among policymakers and stakeholders to address the digital divide and make internet access more inclusive.
Moreover, civil society organizations actively engage in research, advocacy, and capacity-building activities to promote digital literacy and skills development. They organize workshops, training programs, and awareness campaigns to empower individuals with the necessary knowledge and tools to navigate the digital landscape. By enhancing digital literacy, these organizations enable individuals to fully participate in the digital age, access online opportunities, and leverage technology for personal and professional growth.
Eventually, civil society organizations play a critical role in monitoring and influencing policy development and implementation. They provide expert analysis, recommendations, and feedback on laws, regulations, and initiatives related to internet access and digital rights. Through their engagement with government agencies, regulatory bodies, and other stakeholders, these organizations attempt to ensure that policies are inclusive, rights-based, and responsive to the needs of diverse communities.
Privacy
Privacy is an essential aspect of individuals’ rights, encompassing their ability to maintain control over personal information, safeguard it from unauthorized access, and prevent unwanted intrusions. In Pakistan, the right to privacy is constitutionally protected under Article 14, which upholds individuals’ dignity and personal autonomy. However, despite this recognition, several challenges hinder people in Pakistan from effectively protecting their privacy, particularly in cyberspace.
One key challenge is the limited digital literacy among most of the population. In response, civil society organizations play a crucial role in educating the public through campaigns, seminars, research publications, policy reports, workshops, and awareness programs. For example, DRF has published a report titled “Young People and Privacy in Online Space,” which aims to raise concerns about the privacy of youth in cyberspace (DRF, 2021b). The report acknowledges that despite the ongoing increase in the number of young internet users, particularly on social media, they face insufficient protection and have limited awareness of their privacy rights. The organization suggests that young generations recognize the gendered nature of online harm, which particularly impacts women. Therefore, the report emphasizes that it is crucial to foster collaboration to enhance legal frameworks and establish effective mechanisms to safeguard young people’s rights. DRF has also published privacy-related reports that provide up-to-date information on digital privacy, including ‘How to keep your social media secure and anonymous,’ ‘Understand cyber-harassment,’ ‘What to do when there is a privacy breach?’, ‘Protect against viruses and malware,’ and ‘Two-factor authentication’ (DRF, 2020a).
Another privacy concern in Pakistan stems from the government surveillance system, which has advanced in recent years. In this vein, civil society organizations and activists in Pakistan have been advocating for stronger digital privacy protections. They have called for greater transparency in government surveillance activities, improvements in data protection practices, and comprehensive privacy legislation aligned with international standards. In 2019, Bolo Bhi raised concerns about the Web Monitoring System (WMS) deployed by the Pakistan Telecommunications Authority (PTA). The WMS aims to monitor and control internet traffic for commercial and security purposes. However, the organization underlined that the lack of safeguards and judicial oversight raises concerns about the potential misuse of surveillance capabilities (Bolo Bhi, 2019). Bolo Bhi urged the government to take concrete steps to demonstrate the veracity and reliability of its claims that the WMS would not restrict internet freedom. Moreover, the director of this civil society organization suggested that transparency regarding the technology provider, Sandvine Inc, and its security audit is crucial. Public accountability and corporate responsibility should be upheld to align with international principles of human rights, freedom of expression, and privacy (Bolo Bhi, 2019).
Digital Rights Monitor, a project under MMfD, has attempted to contribute to improving digital privacy in Pakistan. They have produced a series of videos, titled ‘Privacy-in-Law: Legal Framework of Digital Privacy Laws in Pakistan’ (Kamran, 2019). These videos provide information about the enacted laws that protect citizens’ privacy and assess their implementation in Pakistan. The videos cover important legislation such as the ‘NADRA Ordinance, 2000,’ ‘The Investigation for Fair Trial Act, 2013,’ ‘The Pakistan Telecommunication (Re-Organization) Act, 1996,’ and the ‘Prevention of Electronic Crimes Act 2016 (PECA).’ They seek to uncover the details of the laws that are aimed at framing data security regulations, regulating law enforcement and intelligence agencies’ power to investigate criminal cases, and countering increasing crime originating from cyberspace.
Bytes for All (B4A) has also been active in highlighting the importance of privacy in the virtual world. In 2020, the organization published a report titled ‘The Scope of Privacy Commission in Pakistan,’ which strongly advocated for the establishment of an independent and autonomous Privacy Commission free from political or executive influence (Raza and Baloch, 2020). This commission is deemed essential for protecting citizens’ digital data and providing redressal for privacy-related violations. B4A has also conducted personal training sessions on digital privacy and raised public awareness by addressing topics such as the ‘Dangers of Digital Surveillance’ (Raza and Baloch, 2020). To enhance online privacy in Pakistan, digital rights advocates in this organization have put forth several recommendations for the government to consider. These recommendations can be summarized as follows (Baloch and Qammar, 2020):
– Revise laws to limit intelligence agencies’ powers in intercepting digital communications and private data of journalists and human rights defenders.
– Define clear criteria for digital surveillance in the context of national security and counterterrorism.
– Cease mass digital surveillance on citizens.
– Promote encrypted communications for the safety of vulnerable groups.
– Include secure communications training in public sector education, especially in journalism and law.
– Respect citizens’ right to privacy, especially journalists and human rights defenders, to strengthen democracy, freedom of speech, and information access.
Civil society organizations actively participate in policy discussions and provide valuable input during the development of privacy-related laws and regulations. They bring the perspectives and concerns of the public to the attention of policymakers, advocating for privacy-focused policies that strike a balance between security and individual rights. Their involvement aims to assess to what extent the government measures align with the principles of transparency, accountability, and respect for privacy. In Pakistan, with the new wave of internet penetration, particularly among young generations, the effort of civil society organizations is essential for fostering a privacy-conscious society and holding governments accountable for protecting individuals’ digital privacy rights. Through their persistent advocacy, these organizations can contribute to a more informed and balanced policy-making process. They provide expertise and recommendations based on research and analysis, offering practical solutions that protect privacy rights while addressing security challenges. Their efforts underscore the importance of privacy as a fundamental right, even in the face of increasing surveillance measures.
Data Protection
Illustration Contributor: PX Media.
Data protection entails safeguarding personal information against unauthorized access, use, or disclosure. It encompasses obtaining consent, employing data for specific purposes, minimizing data collection, ensuring accuracy, implementing security measures, respecting individual rights, and safeguarding data during transfers. Upholding privacy and cultivating trust with individuals is both a legal and ethical obligation. While data protection and privacy are closely related, they carry distinct meanings. Data protection focuses on safeguarding personal information, whereas privacy centers on maintaining control over one’s personal life and information. Data protection ensures the secure handling of data, while privacy encompasses broader aspects of personal autonomy and limiting unwarranted intrusion.
Currently, Pakistan lacks comprehensive legislation specifically governing the processing of personal data. However, as in the privacy domain, the Prevention of Electronic Crimes Act, 2016 (PECA) serves as the legal framework to address electronic crimes and unauthorized access to personal data. Under PECA, the Ministry of Information Technology and Telecommunications (MOITT) has established the Removal and Blocking of Unlawful Online Content Rules 2021, granting the Pakistan Telecommunication Authority (PTA) the authority to remove or block access to information systems (Rehman, 2022). The Personal Data Protection Bill 2021, which is awaiting enactment, will become the primary legislation regulating the processing of personal data in Pakistan. It will apply to individuals and entities that control, process, or authorize the processing of personal data within the country.
Digital rights organizations have actively campaigned for data protection in Pakistan. The Digital Rights Foundation (DRF), for instance, has been proactive in providing feedback on the Personal Data Protection Bill (PDPB), submitting various reports to the government to bring the bill into line with international standards. Since 2018, DRF has identified several persistent issues in the bill that must be addressed to align it with global data protection standards and privacy rights. According to DRF, the broad powers granted to the Federal Government could lead to self-interested interpretation and evasion of regulation. They have also expressed concerns about the lack of independence of the National Commission for Personal Data Protection (NCPDP), as it remains under the administrative control of the Federal Government, compromising its autonomy and failing to meet international standards (DRF, 2021a).
DRF has stressed that the requirement for ‘critical personal data’ to be processed within Pakistani servers is impractical and akin to data localization, which could hinder business operations and investment. Ambiguities exist in terms like ‘national interest’ and ‘national security’ without clear definitions, granting the government wide discretion in implementing the law. DRF highlights that the bill also lacks provisions addressing emerging technologies such as automated decision-making and artificial intelligence, necessitating further elaboration and the inclusion of non-discrimination safeguards. DRF emphasizes the need for specific language, defined terms, and adequate safeguards to ensure that the law aligns with legislative intent and effectively protects digital rights.
In addition, B4A Pakistan has published at least 13 comprehensive reports on data protection in Pakistan. These reports encompass various aspects, including submissions to the government for consultation and the creation of training materials. One of their reports, titled ‘Electronic Data Protection in Pakistan,’ provides a thorough analysis of the country’s data protection status and offers key recommendations (Gilani et al., 2017). B4A highlights the concerning absence of data protection legislation in Pakistan, particularly given the increasing volume of citizens’ data being processed daily. Urgent action is required to establish clear and effective data protection laws that meet the demands of the digital era. Failure to do so may lead foreign companies to perceive Pakistan as an unsafe business environment, deterring them from outsourcing their services to the country. B4A provides several recommendations to address these concerns, including (Gilani et al., 2017):
– Amendment to PTA is necessary. The Protection of Privacy Act (PTA) of Pakistan is incompatible with Article 17 of the ICCPR.
– There is an urgent need for an independent authority to oversee data protection compliance.
– A system of accountability for data breaches should be established.
– The Electronic Data Protection Bill of 2005 is not fit for purpose.
– Pakistan should investigate adopting data protection legislation similar to the GDPR.
– Education of citizens about personal data and its value is urgently needed.
– The principle of individual consent for processing data should be included in any new legislation.
– The use of data anonymization mechanisms should be strongly encouraged.
Furthermore, Bolo Bhi has allocated a dedicated section on its website to address issues concerning data protection. The organization actively publishes research-based reports to advocate for the implementation of enhanced legislation in the field of data protection. In one of their reports, they conducted a comparative analysis between the draft Personal Data Protection Bill 2020 in Pakistan and similar laws such as the GDPR, the Malaysian Personal Data Protection Act 2010, the UK’s Data Protection Act (DPA) 2018, and India’s Personal Data Protection Bill 2019 (Shahani, 2020). The comparison revealed several shortcomings in the draft Bill proposed in Pakistan, including:
– The Authority set up under the draft Bill lacks independence and autonomy.
– The exemptions to the prohibition of processing of ‘personal data’ including ‘sensitive personal data’ are too broad.
– The Bill does not cover intelligence agencies’ collection, storage, and use of data.
Overall, given the country’s rapidly expanding online population, the government must move quickly to prioritize data protection. As Bolo Bhi has urged, Pakistan should learn from the experience of other nations. By taking note of successful practices already in place abroad, the government can take the required actions to strengthen data protection safeguards and ensure the privacy and security of its citizens’ personal information. By enacting effective policies and regulations that adhere to international standards, Pakistan can give priority to the rights and well-being of its citizens in the digital sphere.
As a final point regarding the role of civil society organizations in Pakistan in promoting digital rights, internet access, privacy, and data protection, it should be emphasized that they tirelessly raise awareness about these issues, attempt to facilitate fruitful dialogue between citizens and policymakers, and actively work towards holding those responsible accountable. Through this diligent work, they hope to contribute to the creation of effective laws and procedures that uphold people’s rights and promote a safe and welcoming online environment for everyone. To persuade decision-makers to meet the needs of the populace, these organizations offer insightful research-based studies, policy recommendations, workshops, seminars, online and offline training sessions, and periodic audits of internet legislation and privacy rules.
Conclusion
Pakistan’s historical trajectory has been marked by a series of challenges, including violence, religious divisions, and an ongoing struggle to define its national identity. These factors have significantly shaped the current political landscape of the country. Despite its aspirations to establish a stable democracy, Pakistan has faced recurring periods of military rule, which have undermined democratic processes and institutions.
The governance challenges in Pakistan include limitations on press freedom, restrictions on the right to protest, and interference from the military establishment. These issues have raised concerns about the strength and integrity of Pakistan’s democratic system. Furthermore, the military’s influence has often overshadowed civilian governance, leading to complex power dynamics within the country.
In recent years, Pakistan has witnessed the emergence of digital authoritarianism as a governing strategy. This involves using digital technologies and surveillance mechanisms to control and monitor online activities. The government has implemented legislation like the Prevention of Electronic Crimes Act (PECA) to regulate cyberspace. However, the vague definitions of cybercrime within PECA and the broad surveillance powers granted to agencies such as the FIA and ISI raise apprehensions about potential abuses of power.
To enforce digital authoritarianism, the state has invested in advanced technological capabilities for monitoring online communications. This includes the acquisition of web monitoring systems and the establishment of social media monitoring cells. These measures aim to consolidate the state’s control over cyberspace and curtail citizens’ digital privacy.
Nevertheless, Pakistan’s democratic fabric is not entirely eroded. In addition to pushback from the judiciary, Pakistan has a strong civil society that includes various human rights organizations, some of which focus exclusively on digital rights. These organizations play a crucial role in advocating for the protection of digital freedoms in Pakistan. They voice concerns about the need for stronger legislation on data protection and privacy and advocate for equitable access to the internet, especially for marginalized communities in remote regions like ex-FATA and Balochistan.
By highlighting these concerns and advocating for digital rights, civil society organizations and the judiciary serve as important checks and balances against the encroachment of digital authoritarianism. Their efforts contribute to promoting transparency, accountability, and respect for individual rights in the digital sphere, despite the challenges posed by the current political landscape in Pakistan.
Funding: This research was funded by Gerda Henkel Foundation, AZ 01/TG/21, Emerging Digital Technologies and the Future of Democracy in the Muslim World.
(*) Dr Zahid Shahab Ahmed is a Senior Research Fellow at Alfred Deakin Institute for Citizenship and Globalization, Deakin University, Australia. He is also a Non-Resident Research Fellow at the Institute of South Asian Studies, National University of Singapore. During 2017-19, Dr Ahmed was a Non-Resident Research Fellow with the University of Southern California’s Center on Public Diplomacy. During 2013-16, he was an Assistant Professor at the Centre for International Peace and Stability, National University of Sciences and Technology in Pakistan. His work focuses on political developments (e.g., democratization, authoritarianism and political Islam), foreign affairs, peace and security in South Asia and the Middle East. He has published extensively in leading journals, such as Politics and Religion, Democratization, Asian Studies Review, and Territory, Politics, Governance. He is the author of Regionalism and Regional Security in South Asia: The Role of SAARC (Routledge, 2013). He is a co-author of Iran’s Soft Power in Afghanistan and Pakistan (Edinburgh University Press, 2023). Email: zahid.ahmed@deakin.edu.au
(**) Shahram Akbarzadeh is Convenor of the Middle East Studies Forum (MESF) and Deputy Director (International) of the Alfred Deakin Institute for Citizenship and Globalisation, Deakin University, Australia. He held a prestigious ARC Future Fellowship (2013-2016) on the role of Islam in Iran’s foreign policy-making and recently completed a Qatar Foundation project on sectarianism in the Middle East. Prof Akbarzadeh has an extensive publication record and has contributed to the public debate on political processes in the Middle East, regional rivalry, and Islamic militancy. In 2022 he joined the Middle East Council on Global Affairs as a Non-Resident Senior Fellow. Google Scholar profile: https://scholar.google.com.au/citations?hl=en&user=8p1PrpUAAAAJ&view_op=list_works Twitter: @S_Akbarzadeh Email: shahram.akbarzadeh@deakin.edu.au
(***) Dr Galib Bashirov is an Associate Research Fellow at the Alfred Deakin Institute for Citizenship and Globalisation, Deakin University, Australia. His research examines state-society relations in the Muslim world and US foreign policy in the Middle East and Central Asia. His previous work has been published in Review of International Political Economy, Democratization, and Third World Quarterly. Google Scholar profile: https://scholar.google.com/citations?user=qOt3Zm4AAAAJ&hl=en&oi=ao Email: galib.bashirov@deakin.edu.au
DRF. (2020b). Removal and Blocking of Unlawful Online Content (Procedure, Oversight and Safeguards) Rules, 2020: Legal Analysis. Lahore: Digital Rights Foundation.
DRF. (2021a). “Personal data protection bill 2021: Civil society submission to the ministry of information technology and telecommunication.” Digital Rights Foundation. https://digitalrightsfoundation.pk/wp-content/uploads/2021/09/PDPB-2021-Submission-by-DRF.pdf
DRF. (2021b). Young people and privacy in online space. Lahore: Digital Rights Foundation.
Hong, Caylee. (2022). “‘Safe Cities’ in Pakistan: Knowledge Infrastructures, Urban Planning, and the Security State.” Antipode, 54 (5):1476-1496. doi: 10.1111/anti.12799
Hossain, Naomi. (2018). “The 1970 Bhola cyclone, nationalist politics, and the subsistence crisis contract in Bangladesh.” Disasters, 42 (1):187-203. doi: 10.1111/disa.12235
Hossain, Naomi. (2021). “The geopolitics of bare life in 1970s Bangladesh.” Third World Quarterly, 42 (11):2706-2723. doi: 10.1080/01436597.2021.1954902
HRCP. (2020). “Freedom of peaceful assembly in Pakistan: A legislation review.” Human Rights Commission of Pakistan. https://hrcp-web.org/hrcpweb/wp-content/uploads/2020/09/2022-Freedom-of-peaceful-assembly-in-Pakistan.pdf
Jadoon, Amira. (2021). The Evolution and Potential Resurgence of the Tehrik-i-Taliban Pakistan. Washington: United States Institute of Peace.
Kamran, Hija, Sadaf Khan, Salwa Rana, Zoya Rehman, and Maria Malik. (2022). Connecting the Disconnected: Mapping Digital Access in Pakistan. Islamabad: Media Matters for Democracy.
Mehfooz, Musferah. (2021). “Religious Freedom in Pakistan: A Case Study of Religious Minorities.” Religions, 12 (1):51.
Menon, Jisha. (2012). The performance of nationalism: India, Pakistan, and the memory of partition. New York: Cambridge University Press.
Momen, Md Nurul, Harsha S. and Debobrata Das. (2021). “Mediated democracy and internet shutdown in India.” Journal of Information, Communication and Ethics in Society, 19 (2):222-235. doi: 10.1108/JICES-07-2020-0075
Shafqat, Saeed. (2019). “Pakistan Military: Sustaining Hegemony and Constructing Democracy?” Journal of South Asian and Middle Eastern Studies, 42 (2):20-51.
Sheikh, Md Ziaul Haque and Zahid Shahab Ahmed. (2020). “Military, Authoritarianism and Islam: A Comparative Analysis of Bangladesh and Pakistan.” Politics and Religion, 13 (2):333-360. doi: 10.1017/S1755048319000440
Talbot, Ian. (2009). “Partition of India: The Human Dimension.” Cultural and Social History, 6 (4):403-410. doi: 10.2752/147800409X466254.
Yilmaz, Ihsan. (2016). Muslim laws, politics and society in modern nation states: Dynamic legal pluralisms in England, Turkey and Pakistan. Abingdon: Routledge.
Yilmaz, Ihsan and Raja Ali M. Saleem. (2022). “The nexus of religious populism and digital authoritarianism in Pakistan.” Populism & Politics. European Center for Populism Studies (ECPS). December 2, 2022. https://doi.org/10.55271/pp0016
Yilmaz, Ihsan and Kainat Shakil. (2022). “Religious Populism and Vigilantism: The Case of the Tehreek-e-Labbaik Pakistan.” Populism & Politics. European Center for Populism Studies (ECPS). January 23, 2022. https://doi.org/10.55271/pp0001
Yilmaz, Ihsan. (2023). Digital Authoritarianism and its Religious Legitimization – The Cases of Turkey, Indonesia, Malaysia, Pakistan, and India. Singapore: Palgrave Macmillan.
Yousafzai, Fawad. (2023). “Govt to restart Gwadar Safe City Project with escalated cost.” The Nation, January 31, 2023. https://www.nation.com.pk/31-Jan-2023/govt-to-restart-gwadar-safe-city-project-with-escalated-cost
Zajko, Mike. (2016). “Telecom Responsibilization: Internet Governance, Surveillance, and New Roles for Intermediaries.” Canadian Journal of Communication, 41 (1):75-93. doi: 10.22230/cjc.2016v41n1a2894