
New AI-generated information weapons pose a growing threat

In 2025, the world faces a wide array of conflicts and threats. Geopolitical instability—from wars in Ukraine and the Middle East to tensions in the Sahel and the Indo-Pacific—unfolds against a backdrop of intensifying great-power competition, including in cyberspace. China, the U.S., Europe, and Russia are vying for influence across regions and technologies, with artificial intelligence and emerging tech now central to this contest. Synthetic video and AI-generated content are now embedded in the digital landscape, shaping opinion and influencing political and social outcomes.

Cognitive warfare and persuasive technologies

Foreign Information Manipulation and Interference (FIMI) is a strategic form of cognitive warfare targeting entire societies’ perceptions, trust, and decision-making without crossing the threshold of armed conflict. Enabled by social media, encrypted platforms, and AI, these operations have become central to modern conflicts, eroding trust and deepening divisions. Autocratic states in particular weaponize the information space and exploit the openness of democracies, blending state-controlled media with covert networks designed to appear organic. Their operations spread across major platforms—X, Facebook, Telegram, YouTube, TikTok—using articles, videos, memes, AI-generated content, and “information laundering”.

Generative AI, in particular, is a “disinformation supercharger.” By producing realistic synthetic content, including deepfakes, it enables influence operations at scale. Recent examples include a Chinese campaign that used a deepfake of Philippine President Ferdinand Marcos Jr. to undermine the president and his policies, and Russian efforts deploying spoof websites impersonating legitimate Western media to influence a critical parliamentary election in Moldova and weaken the pro-EU ruling party.

The rise of “persuasive technologies”—including neurotechnology and ambient systems—further accelerates these threats as such tools interact with the human mind and body in increasingly intimate ways, enabling the large-scale manipulation of cognition.

China and Russia’s global influence playbook

Authoritarian regimes are the most active sources of FIMI, engaging both state and non-state actors in manipulation campaigns, which makes these operations difficult to counter.

Russia uses FIMI to destabilize democracies, erode trust, and amplify divisions at home and abroad. Its campaigns stretch from Europe to Africa and Latin America, where Kremlin-backed media such as RT and local amplifiers recycle anti-Western and neo-colonial tropes to justify Russia’s invasion of Ukraine and present Moscow as a defender of tradition and stability.

China, by contrast, seeks to project its worldview, defend its global image, and expand influence, particularly around Taiwan and the Indo-Pacific. Under Xi Jinping, its approach is more subtle than Russia’s: promoting nationalism, countering criticism of human rights abuses, and challenging Western dominance. Chinese officials openly describe generative AI as a tool of “cognitive warfare,” capable of predictive modeling and hyper-targeted persuasion. As AI, neurotechnology, and ambient systems come to interact ever more directly with human cognition, the security risks they create grow harder to predict.

China has built one of the world’s most controlled digital ecosystems, using its concept of “cyber sovereignty” to tightly police the domestic internet. Beijing is now exporting this model. As the world’s largest exporter of digital technologies, it provides 5G networks, surveillance tools, and AI systems, particularly to countries in Africa, the Middle East, and Southeast Asia. These tools not only expand Beijing’s global reach but also strengthen authoritarian regimes by enabling censorship and repression, normalizing an illiberal model of internet governance. Beijing also promotes its data-centric authoritarian model abroad, subsidizing surveillance systems through state-backed loans and training local security forces in their use. At the same time, Chinese firms collect vast troves of data through surveillance networks and digital platforms, which can fuel espionage, foreign influence, and social control.

For both Russia and China, non-state actors also play a role, sometimes independently but often in blurred alignment with state agendas, especially in authoritarian systems where civil society and the private sector are tightly controlled. What unites these efforts is the growing use of AI, which enables actors to cheaply adapt, translate, and localize their narratives across multiple languages and regions, giving them unprecedented reach in shaping global opinion.

Authoritarian alignment in AI and cyberspace

A broader authoritarian network—involving China, Russia, Iran, and North Korea—collaborates across FIMI ecosystems. Over the past few years in particular, China and Russia have deepened their partnership, with presidents Xi Jinping and Vladimir Putin openly framing their cooperation as a driver of historic change. This alignment plays out in the global race over AI, where an “authoritarian advantage” scenario could see China and its allies exporting surveillance systems and expanding influence in Latin America, Africa, and the Middle East.

This cooperation has been visible around major events such as NATO’s 2025 summit in The Hague and Russia’s war in Ukraine, where Chinese and Russian narratives reinforce each other to weaken Western cohesion. Their disinformation strategies are intertwined, often amplifying one another’s narratives through state-controlled media like RT, Sputnik, CGTN, and Global Times. Shared messaging seeks to undermine U.S. leadership, discredit NATO, and portray the West as hypocritical and neocolonial.

High-level pledges like the 2022 “no limits” partnership and the 2024 “new era” agreement signal long-term military and strategic coordination. At the same time, China is advancing global initiatives and leveraging platforms like BRICS to position itself as the leader of an alternative order, rallying authoritarian and developing states against the liberal international system. Through platforms like the World Internet Conference, the Global AI Governance Initiative, and the Digital Silk Road, China challenges democratic norms while embedding its influence abroad.

Responding to the threat, with or without the U.S.

FIMI must be treated as a security threat on par with military coercion. Authoritarian regimes have laid out their playbook; democracies must now craft a clear and decisive response. For NATO, the EU, and Western democracies, FIMI operations represent a strategic challenge as cognitive warfare shifts the contest from control of territory to control of belief and perception.

International forums increasingly address AI’s destabilizing role in the information space. The European Commission introduced the European Democracy Action Plan to safeguard elections, strengthen media, and counter disinformation. Security researchers at MITRE developed the ATT&CK framework to map cyber adversaries’ tactics and techniques, while civil society groups created open-source frameworks such as DISARM to analyze malign influence operations.

For many years, the United States was at the forefront of countering information warfare and promoting democratic and digital resilience worldwide. Through consistent funding, institutional leadership, and diplomatic coordination, it played a pivotal role in shaping a collective response to authoritarian influence operations. Yet U.S. foreign policy is undergoing a tectonic shift under the Trump administration, and U.S. leadership has waned. Several key U.S. programs and institutions once dedicated to this effort have been dismantled or sidelined, creating a dangerous gap.

In this environment, Western democracies and their partners must act with urgency—strengthening cooperation and investing in shared strategic capacity to confront the mounting threat of cognitive warfare and disinformation campaigns.

As countries such as Canada raise defense spending, these resources must also be directed toward safeguarding our information ecosystems, countering digital authoritarianism, and strengthening democratic digital resilience. NATO remains the most powerful collective tool liberal democracies possess. Information warfare is warfare—our strategy must reflect that reality.

The very assets that distinguish open societies—independent media, civic space, technological innovation, and the rule of law—must be leveraged to disrupt and deter hostile information operations. This requires reinvestment in international media landscapes where authoritarian actors have gained ground, long-term support for civil society groups countering cognitive warfare, and deeper operational partnerships with the private sector and technology platforms. Democracies must also harness advanced AI not only to defend against foreign information manipulation but to proactively detect and neutralize adversarial influence campaigns across the digital battlefield. At stake are the integrity of our institutions and the resilience of our alliances.

Article written by:

Global Affairs Officer, Institut de sécurité globale de Montréal
The opinions and views expressed are those of the authors alone.
