How do norms guide stakeholders in protecting critical infrastructure?
Below is a comic book that portrays a fictional story inspired by real events. It highlights the dilemmas that arise when roles and responsibilities among different actors – both domestic and cross-border – are not always clear. The story explores the challenges of addressing cyber risks to critical infrastructure and critical information infrastructure (CI/CII) and minimising harm while navigating complex multi-stakeholder dynamics.
In this scenario, what actions should different actors have taken to address cyber risks to CI/CII and minimise harm? What guidance do the UN GGE norms offer to stakeholders? These are the questions we discuss in the Geneva Dialogue with the non-state stakeholder experts.
Critical infrastructure (CI) remains a consistently attractive target for threat actors and cyber espionage operations. 1Some of the cases reported in 2024 alone: https://dig.watch/updates/france-faces-unprecedented-cyberattacks-on-government-services; https://dig.watch/updates/new-zealand-accuses-china-linked-threat-actors-of-malicious-cyber-activity-targeting-parliament-in-2021 The vulnerability of critical infrastructure to cyberattacks remains a major concern for cyber defenders both within government and among non-state stakeholders. Although UN Member States have agreed on cyber norms, at least three of which focus directly on critical infrastructure protection (CIP), many questions remain about how to strengthen the security of CI and enhance predictability in this field for all involved actors.
Furthermore, the intensified competitive geopolitical context and the rise of interstate military conflicts, coupled with the rapid exploration and adoption of emerging technologies such as Artificial Intelligence (AI), have brought to light new cyber risks to CIP. These developments have also underscored the complex relationships between public and private actors in managing these risks.
How are approaches to defining and protecting CI changing in light of the transformative effects of the pandemic and the intensified geopolitical conflicts of the past two years? What guidance do the agreed UN GGE cyber norms and CBMs 2Confidence-building measures (CBMs) are measures designed to prevent misunderstandings and de-escalate tensions when relations among states concerning cyber/ICT security deteriorate. They act as a critical pressure valve to manage and reduce potential conflicts. CBMs emerged during the Cold War with the main goal of addressing military tensions between adversaries and later evolved into an important instrument for managing crisis situations in international relations. For further details, see ICT4Peace Foundation, ‘Confidence Building Measures and International Cybersecurity’, 2013, available at https://ict4peace.org/wp-content/uploads/2019/08/ICT4Peace-2013-Confidence-Building-Measure-And_Intern-Cybersecurity.pdf provide to actors in CIP?
From the very beginning, cyberspace has lacked clear delineations between legal concepts and technical systems; the resulting high degree of interconnectivity and collaboration among actors and communities has created uncertainty in their relationships. This ambiguity has become even more evident as cyberspace has effectively become a battleground for conflicts, posing increasing risks to critical facilities and their users. The significance of these questions has been underscored during recent substantive sessions of the UN Open-ended Working Group, where several states continue to emphasise threats to CI and highlight the need for practical guidelines for CIP. Civil society has echoed these concerns, emphasising the importance of understanding how cyberattacks on CI harm people. This perspective urges a shift in the discussion from focusing solely on technical aspects to considering the broader societal impacts of such attacks and what should be considered harmful in relation to a cyber incident. Meanwhile, academia has called for a global cyber CIP treaty ‘to make critical infrastructure a cyber attack–free zone and to develop a global accountability mechanism in cyberspace’.3Carnegie Endowment for International Peace, ‘Why the World Needs a New Cyber Treaty for Critical Infrastructure’, March 2024, available at https://carnegieendowment.org/research/2024/03/why-the-world-needs-a-new-cyber-treaty-for-critical-infrastructure?lang=en&center=europe
The experts from the Geneva Dialogue (further referred to as ’experts’) emphasised the importance of discussing relevant CIP norms and highlighted specific challenges arising from the rapid development of information and communication technologies (ICTs). They noted that many critical infrastructure facilities are integrating digital components with legacy systems – some of which were not initially designed for digital use. While legacy systems that are not fully digital may enhance resiliency, the increasing interconnectivity of these systems introduces systemic vulnerabilities and significant cybersecurity risks. This interconnectivity, while potentially improving production efficiency, also comes with added demands for maintenance – not only of hardware, but also of the software used to manage these systems. As a result, some of the efficiency gains are offset by the increased need for software updates and maintenance, further complicating the cybersecurity landscape.
While states play a central role in shaping cyberspace security, they are far from the only actors responsible for its stability. The interconnected nature of modern digital infrastructure means that businesses, software developers, security researchers, and multinational corporations also have significant influence over cybersecurity outcomes. Their decisions – whether to disclose vulnerabilities, patch systems, or comply with regulations – directly impact global security.
Yet, just as states define ‘responsible behavior’ in ways that align with their strategic interests, non-state stakeholders operate under a mix of legal requirements, economic incentives, and corporate policies that shape their approach to responsibility. The discovery of a software vulnerability, for example, presents a company with competing pressures: should it follow national regulations, adhere to internal security policies, protect its shareholders, or act in the best interest of the global community? The answer is rarely straightforward.
Therefore, in 2024, the Geneva Dialogue initiated discussions on the implementation of agreed cyber norms and CBMs related to CIP. Building on the approach outlined in the 2023 Geneva Manual, the Geneva Dialogue continued to emphasise the role of non-state stakeholders in supporting states’ efforts to implement the agreed norms. Discussions centered on how these stakeholders understand and interpret the norms, the ways in which they can and are already implementing them, the incentives and barriers they encounter, and their expectations from governments. By highlighting the crucial role non-state stakeholders play in translating diplomatic agreements into practical actions, the Dialogue emphasises the importance of their involvement. While diplomatic agreements aim to build consensus on a set of actions or best practices, often from a non-CI operator perspective, they can sometimes overlook the context-specific nature of CIP. As such, non-state stakeholders – who possess valuable operational expertise – are often sidelined as mere observers in state-led normative processes, despite their critical role in addressing the nuances of cybersecurity risks.
Although this chapter focuses on non-state stakeholders, it is impossible to separate their actions from the role of governments. State policies on cybersecurity, vulnerability disclosure, and corporate governance shape how companies behave. Even when businesses act independently, they do so within the frameworks set by national regulations and geopolitical pressures. In this chapter, we explore how non-state stakeholders navigate these challenges, examining the factors that shape their decisions and how their ‘responsible behavior’ intersects with, and at times conflicts with, state interests.
Thus our goal is to offer valuable insights and to spotlight critical, often unanswered, questions that resonate across the cyber diplomacy and cybersecurity communities in both the public and private sectors, contributing to their efforts to reduce cyber risks. Practically, we seek to convey the key perspectives of non-state stakeholders on the implementation of agreed norms related to CIP (i.e. UN GGE norms F, G, and H) and to collect examples of effective practices in implementation.
Figure: UN GGE norms (DiploFoundation)
This chapter should be read in conjunction with the first chapter, as the topics of supply chain security (UN GGE norm I), responsible vulnerability reporting (UN GGE norm J), and CIP (UN GGE norms F, G, and H) are interconnected and involve repeating roles and responsibilities for non-state stakeholders. Just as the UN GGE norms are intended to be read collectively rather than separately, the chapters of the Geneva Manual should be considered as a cohesive outcome of the 2023 and 2024 editions of the Geneva Dialogue.
Key messages: How do non-state stakeholders understand and interpret the implementation of the agreed CIP-related cyber norms and CBMs?
During regular consultations with experts, we explored the cyber norms and CBMs related to CIP. These discussions covered the implementation of these norms, varying expectations regarding the roles and responsibilities of different actors, and the incentives and barriers they face in reducing cyber risks in accordance with the framework for responsible behaviour in cyberspace. From these conversations, we identified seven key messages. These should not be perceived as a comprehensive list, and as the Geneva Dialogue continues, the list of key findings and messages may expand.
It is important to note that the Geneva Manual does not seek to define critical infrastructure, recognising that each country may prefer its own approach to distinguishing between critical infrastructure (CI), critical information infrastructure (CII), and critical national infrastructure (CNI). The Manual acknowledges these varying definitions and, for the sake of clarity and consistency, uses the term ‘critical infrastructure’ throughout this document as an umbrella term.
Message #1: More international efforts are required to understand and protect cross-jurisdictional interdependencies in some CI sectors with regional and international impact.
There is no universal approach to defining CI or critical information infrastructure (CII), as each country defines its own CI. Research by the German Council on Foreign Relations (DGAP)4Weber, Valentin, Maria Pericàs Riera, and Emma Laumann. ‘Mapping the World’s Critical Infrastructure Sectors.’ DGAP Policy Brief 35 (2023). German Council on Foreign Relations, November 2023, available at https://doi.org/10.60823/DGAP-23-39548-en reveals a broader lack of standardised terminology and categorisation for CI and its associated sectors. While the term ‘critical infrastructure’ is widely used, variations exist, such as ‘activities of vital importance’ in France or ‘crucially important facilities’ in Belarus. Indeed, UN Member States have agreed that each ‘state determines which infrastructure or sectors it deems critical within its jurisdiction, in accordance with national priorities and methods of categorisation of critical infrastructure’.5UN GGE 2021 report (A/76/135). Despite this recognition, many countries still do not maintain comprehensive lists of their CI or CII sectors. According to the DGAP, 94 countries have yet to define their critical infrastructure, highlighting the ongoing challenge of achieving global consistency in CI identification and protection.
Typically, CI is understood to include sectors such as energy, water, transportation, telecommunications and media, healthcare, and finance – sectors directly tied to national security and public safety. However, many assets, systems, and networks within national CI depend on international connectivity (e.g. submarine cables), protocols (e.g. Border Gateway Protocol), and essential services (e.g. cloud services), which are supra-national in nature and not necessarily classified as critical at the national level and, therefore, may lack the same expected level of protection. Recognising this challenge, some governments have begun refining their definitions to account for infrastructure that underpins national security yet operates across borders. For instance, Singapore has introduced6More details are provided in the comparative analysis of certain national legal frameworks for CI/CII protection in the Annex. definitions for foundational digital (virtual) infrastructure adjacent to critical infrastructure, acknowledging the role of virtual services and networks that, while not always classified as CI, are crucial for national resilience.
At the same time, some states, such as China and Russia, have adopted the concept of CII rather than CI in their national legal frameworks. This distinction reflects a broader emphasis on protecting digital and information systems, including data storage, cloud services, and internet platforms, which these states view as essential to national security and state control. Such approaches contrast with traditional CI definitions by prioritising digital resilience and state oversight rather than physical infrastructure alone. For simplicity, this document uses CI and CII interchangeably, as both terms broadly refer to infrastructure essential for national security, economic stability, and public safety. Despite these national efforts, however, the lack of international coordination to define and secure cross-border interdependencies makes it difficult for CI/CII operators and owners to set clear boundaries for implementing and enforcing security measures and compliance. The result is overlaps and gaps in protection, leaving interconnected systems vulnerable to disruption.
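To make the kind of cross-jurisdictional dependency mapping discussed above more concrete, the minimal sketch below (in Python, with entirely hypothetical asset and dependency names) shows how a CI operator might record which of its assets rely on supra-national services – submarine cables, BGP transit, cloud regions – and flag dependencies that fall outside any national CI designation. It is an illustrative data model under stated assumptions, not a prescribed methodology.

```python
from dataclasses import dataclass, field

@dataclass
class Dependency:
    """An external service or facility a CI asset relies on (hypothetical examples)."""
    name: str                 # e.g. 'submarine cable segment', 'cloud region eu-west'
    kind: str                 # 'connectivity' | 'protocol' | 'cloud' | ...
    jurisdictions: list[str]  # countries/regions the dependency spans
    designated_as_ci: bool    # covered by any national CI designation?

@dataclass
class CriticalAsset:
    name: str
    sector: str
    dependencies: list[Dependency] = field(default_factory=list)

def unprotected_cross_border_deps(assets: list[CriticalAsset]) -> list[tuple[str, str]]:
    """Return (asset, dependency) pairs that span several jurisdictions
    but are not designated as CI anywhere -- the 'gap' described above."""
    gaps = []
    for asset in assets:
        for dep in asset.dependencies:
            if len(dep.jurisdictions) > 1 and not dep.designated_as_ci:
                gaps.append((asset.name, dep.name))
    return gaps

# Hypothetical usage
grid = CriticalAsset("national-grid-scada", "energy", [
    Dependency("transit ISP BGP peering", "protocol", ["SG", "MY"], designated_as_ci=False),
    Dependency("cloud telemetry backend", "cloud", ["EU", "US"], designated_as_ci=False),
])
print(unprotected_cross_border_deps([grid]))
```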
Message #2: Secrecy in defining CI for national security reasons limits the awareness of relevant stakeholders to support states’ efforts in CIP.
As it is each country’s sovereign right to define its own CI, some countries9For instance, Singapore: https://www.csa.gov.sg/faqs/cybersecurity-act and the US: https://www.whitehouse.gov/briefing-room/presidential-actions/2024/04/30/national-security-memorandum-on-critical-infrastructure-security-and-resilience/ prefer to keep such lists secret for national security reasons. Publicly disclosing which organisations are classified as CI could expose them to an increased risk of targeted attacks. At the same time, experts highlighted that prioritising secrecy over transparency in the identification and protection of CI may significantly limit relevant stakeholders’ ability to implement the agreed norms and transparency-related CBMs. Notably, DGAP finds that countries that publicly list their CI sectors have not experienced more frequent attacks than those that have yet to define CI. In fact, countries that have codified their CI sectors tend to be more effective in implementing measures to protect critical infrastructure. The European Union’s NIS and NIS2 directives serve as examples of regulations that strengthen CI protection. ‘Without a definition of CI, protection is not possible.’10Weber, Valentin, Maria Pericàs Riera, and Emma Laumann. ‘Mapping the World’s Critical Infrastructure Sectors.’ DGAP Policy Brief 35 (2023). German Council on Foreign Relations, November 2023, available at https://doi.org/10.60823/DGAP-23-39548-en
A possible compromise is to adopt a layered approach in which the list of CI entities remains confidential, but general information about the definition of CI and about CI sectors and categories is made publicly available.11For instance, this is how it is done in several countries: the US CISA https://www.cisa.gov/topics/critical-infrastructure-security-and-resilience/critical-infrastructure-sectors and Switzerland https://www.babs.admin.ch/de/die-kritischen-infrastrukturen and https://backend.babs.admin.ch/fileservice/sdweb-docs-prod-babsch-files/files/2023/12/12/c81e27b3-030c-47ed-81cf-ae9409c2572b.pdf This layered approach, combined with mechanisms for secure information sharing among trusted stakeholders, can help strike a balance between transparency and confidentiality.
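As a rough illustration of such a layered approach, the sketch below (Python, with hypothetical field names) separates a publishable sector-level taxonomy from a confidential register of designated entities, which would only be exchanged through trusted channels. It is a simplified sketch of the idea, not a description of any particular national practice.

```python
from dataclasses import dataclass

@dataclass
class PublicSectorInfo:
    """Layer 1: publishable without identifying individual operators."""
    sector: str                   # e.g. 'energy', 'water'
    designation_criteria: str     # how CI is defined in this sector
    baseline_controls: list[str]  # non-sensitive minimum security expectations

@dataclass
class DesignatedEntity:
    """Layer 2: confidential, shared only via trusted channels (e.g. a national CERT)."""
    sector: str
    operator: str
    contact: str

def public_view(catalogue: list[PublicSectorInfo]) -> list[dict]:
    # Only the sector-level layer is ever exported for publication.
    return [vars(item) for item in catalogue]
```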
Additionally, public, non-sensitive research and analysis on sector-specific threats and vulnerabilities, as well as information on national priorities and approaches to secure CI sectors, would provide guidance and help relevant stakeholders improve their security posture and support states’ efforts in CIP.
Message #3: The lack of commonly accepted minimum cybersecurity requirements to protect CI results in the limitation of efforts to achieve cyber resilience.
CI systems across different jurisdictions are highly interconnected and depend on global service providers, such as cloud computing, data centres, and international communication networks (e.g. many CI sectors use shared infrastructure such as fibre-optic cables or communication protocols that span multiple countries; an incident affecting these services can disrupt CI systems across a country or even a region). Commonly accepted and clearly defined minimum security requirements would increase the resilience and protection of CI by contributing to the standardisation of vulnerability handling and threat information sharing, as well as enhancing the effectiveness of incident response. Such requirements should be risk-based, with risk assessments that consider different threats and risks, their likelihood, and existing and potential vulnerabilities. Additionally, such minimum cybersecurity requirements can directly address the UN GGE norms and include standardised templates for requesting assistance in protecting CI, including from relevant stakeholders (e.g. CI operators/owners, the incident response community, cybersecurity experts, product vendors and service providers), and for responding to such requests.
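One way to make the standardised assistance templates mentioned above concrete is a minimal machine-readable request format. The sketch below (Python; all field names are hypothetical and not drawn from any agreed standard) shows the kind of information such a template could carry so that CI operators, CERTs, and vendors in different jurisdictions can process requests consistently.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class AssistanceRequest:
    """Hypothetical minimum fields for a cross-border CI assistance request."""
    requesting_entity: str        # CI operator/owner or national CERT
    jurisdiction: str
    affected_sector: str          # e.g. 'energy', 'finance'
    incident_summary: str         # non-sensitive description
    severity: str                 # 'low' | 'medium' | 'high' | 'critical'
    assistance_needed: list[str]  # e.g. ['malware analysis', 'patch guidance']
    tlp_marking: str              # information-sharing restriction, e.g. 'TLP:AMBER'
    contact_point: str
    created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Hypothetical usage
request = AssistanceRequest(
    requesting_entity="ExampleGrid CERT",
    jurisdiction="Exampleland",
    affected_sector="energy",
    incident_summary="Suspicious access to a SCADA engineering workstation",
    severity="high",
    assistance_needed=["forensic support", "threat intelligence"],
    tlp_marking="TLP:AMBER",
    contact_point="soc@example-grid.example",
)
print(json.dumps(asdict(request), indent=2))
```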
In addition to these foundational requirements, there is a growing need to raise awareness of software supply chain risks for CI and to establish baseline good practices that specifically address these risks. The first chapter of the Geneva Manual highlights the lack of standardised approaches for implementing norm I on ICT supply chain security. Standardising security practices for the software supply chain, including open source components, can therefore help prevent these widely used tools from becoming weak points in the cybersecurity posture of critical infrastructure. Relevant stakeholders from industry (CI operators/owners, product vendors and service providers, cybersecurity experts) and the technical community (including the OSS community) could collaborate within Standards Development Organizations (SDOs). Governments can further support this collaboration by encouraging and funding the participation of NGOs,12‘Government’s Role in Increasing Software Supply Chain Security: Toolbox for Policymakers’. Authored by Christina Rupp and Dr. Alexandra Paulus. Interface, 2 March 2023, available at https://www.interface-eu.org/publications/governments-role-increasing-software-supply-chain-security-toolbox-policy-makers, accessed January 5, 2025 particularly civil society organisations, which are often underrepresented in SDOs but can offer essential societal perspectives on software supply chain security in the context of CIP.
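To ground the supply chain point, the sketch below (Python, standard library only) shows a minimal check a CI operator might run against a CycloneDX-style software bill of materials (SBOM), flagging open source components whose versions appear in an advisory list. The SBOM structure is assumed (a top-level 'components' array with 'name' and 'version' fields) and the advisory feed is hypothetical; real deployments would rely on established SBOM tooling and vulnerability databases.

```python
import json

# Hypothetical advisory feed: component name -> set of known-vulnerable versions.
ADVISORIES = {
    "openssl": {"3.0.0", "3.0.1"},
    "log4j-core": {"2.14.1"},
}

def flag_vulnerable_components(sbom_path: str) -> list[str]:
    """Return 'name version' strings for SBOM components matching an advisory.

    Assumes a CycloneDX-style JSON SBOM with a top-level 'components' list.
    """
    with open(sbom_path, encoding="utf-8") as f:
        sbom = json.load(f)
    findings = []
    for component in sbom.get("components", []):
        name = component.get("name", "")
        version = component.get("version", "")
        if version in ADVISORIES.get(name, set()):
            findings.append(f"{name} {version}")
    return findings

# Hypothetical usage (file name is illustrative):
# print(flag_vulnerable_components("plc-firmware-sbom.json"))
```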
Message #4: Obstacles to vulnerability management13Vulnerability management (VM) involves practices and controls to ensure products are updated with the latest security patches. It covers managing security flaws in both in-house and third-party components, including receiving and analysing vulnerability reports, assessing risks, coordinating mitigation efforts, and issuing security advisories when necessary. For more discussion, see Geneva Dialogue Output Report ‘Security of digital products and services: Reducing vulnerabilities and secure design: Good practices’, December 2020, available at https://genevadialogue.ch/wp-content/uploads/Geneva-Dialogue-Industry-Good-Practices-Dec2020.pdf in industrial control systems (ICS) in the context of CI leave such systems with inherent and unnoticed vulnerabilities, creating cybersecurity risks.
ICS in CI sectors, such as energy, chemical, and manufacturing, face unique and complex vulnerability management challenges. These systems are often built on proprietary, closed-source technologies developed by various manufacturers, which can complicate efforts to detect, analyse, and address software vulnerabilities. Unlike traditional IT systems, which may rely on widely used and regularly updated components, ICS systems tend to be specialised, with components that often have long operational lifespans and limited flexibility for modification or updates. As a result, vulnerabilities in ICS may go unnoticed or unaddressed for extended periods, creating cybersecurity risks.
The main challenge in ICS security is patching. While individual components can be tested for vulnerabilities, ICS environments face unique constraints. Unlike in IT networks, addressing ICS vulnerabilities is complicated by the need to follow strict safety regulations while keeping operations running smoothly. The integration of ICS with physical processes makes it hard to apply security updates without disrupting operations. Furthermore, safety regulations often focus on maintaining continuity and physical safety, which creates a conflict when trying to apply timely security measures.
Updating or patching ICS components typically requires scheduled maintenance windows, which can sometimes involve temporarily taking the facility offline. While these updates may not always be cybersecurity-related, they are necessary for maintaining system performance and security, and batching them into planned windows often makes financial sense by minimising operational disruptions. However, until the next scheduled maintenance period, critical infrastructure systems may continue to operate with known vulnerabilities, a rhythm that may not align with urgent cybersecurity needs. This creates a window of risk in which threat actors could exploit unpatched vulnerabilities, particularly when the timing of maintenance windows does not coincide with immediate security priorities.
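The trade-off between maintenance windows and urgent patching can be made explicit with a simple prioritisation rule. The sketch below (Python; the thresholds and fields are illustrative assumptions, not an operational policy) flags vulnerabilities that probably should not wait for the next scheduled window – for example, those already known to be exploited and reachable from outside the control network – while deferring the rest to planned maintenance.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class IcsVulnerability:
    component: str              # e.g. a PLC or RTU model (hypothetical)
    cvss: float                 # base severity score
    known_exploited: bool       # appears in an exploited-vulnerabilities catalogue
    externally_reachable: bool  # reachable from outside the OT network segment

def patch_plan(vulns: list[IcsVulnerability], next_window: date) -> dict[str, list[str]]:
    """Split findings into 'emergency' (out-of-band mitigation) and 'scheduled' work.

    Illustrative rule: known-exploited AND externally reachable, or very high
    severity, should not wait for the next maintenance window.
    """
    plan: dict[str, list[str]] = {"emergency": [], "scheduled": []}
    for v in vulns:
        urgent = (v.known_exploited and v.externally_reachable) or v.cvss >= 9.0
        if urgent:
            plan["emergency"].append(v.component)
        else:
            plan["scheduled"].append(f"{v.component} (next window: {next_window.isoformat()})")
    return plan

# Hypothetical usage
findings = [
    IcsVulnerability("RTU-4000 firmware", 7.5, known_exploited=True, externally_reachable=True),
    IcsVulnerability("PLC-12 engineering tool", 6.1, known_exploited=False, externally_reachable=False),
]
print(patch_plan(findings, date(2025, 3, 15)))
```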
ICS components are often proprietary and developed by various vendors, making the process of acquiring and analysing them for vulnerabilities difficult and expensive. Unlike IT systems, where software can be easily downloaded for analysis, accessing ICS components such as Programmable Logic Controllers (PLCs), Remote Terminal Units (RTUs) or other specialised equipment is more challenging due to their high costs, proprietary nature, and limited access to the underlying software. This creates a barrier for independent researchers, slowing down the identification and fixing of vulnerabilities.
Despite these challenges, vulnerabilities in ICS are still identified and disclosed, as demonstrated by advisories from organisations such as CISA. These efforts highlight that vulnerability research and responsible disclosure are critical components of strengthening ICS security. However, it is essential for governments, industry stakeholders, and cybersecurity experts to foster a more collaborative environment that encourages research, improves communication between researchers and vendors, and promotes timely updates to reduce risks in ICS networks.
Message #5: The technical community – such as cybersecurity researchers, incident response experts, and others – finds it increasingly difficult to remain politically neutral, which creates security risks for CIP and securing ICT across different jurisdictions.
The technical community – including cybersecurity researchers, incident response experts, and other specialists – faces increasing challenges in maintaining independence and effectiveness amidst growing geopolitical tensions and restrictions on cross-border cooperation. These challenges create significant security risks for the protection of CI and the securing of ICT systems across jurisdictions.
Rising restrictions on technical cooperation and information exchange, such as those imposed by national security laws,14For instance, see the US Export Administration Regulations (EAR) under 15 C.F.R. §§ 730–774 available at https://www.ecfr.gov/current/title-15/subtitle-B/chapter-VII/subchapter-C/part-730 and ‘Regulations on the Management of Security Vulnerabilities in Network Products’, Cyberspace Administration of China (CAC), 2021, available at https://www.cac.gov.cn/2021-07/13/c_1627761607640342.htm export controls, and geopolitical sanctions, disrupt the ability of experts to collaborate effectively. For example, sanctions on certain countries or restrictions on exporting cybersecurity tools and knowledge can prevent researchers and incident responders from addressing shared threats like ransomware campaigns or state-sponsored attacks. These limitations create operational silos, hinder trust-building, and result in fragmented threat intelligence ecosystems, making it harder to mount a unified defense against sophisticated adversaries.
Increasing restrictions on technical cooperation and information exchange cause discord and affect the work of cybersecurity researchers, incident responders, and other experts who protect cross-border ICT networks and systems. These restrictions limit the exchange of threat, vulnerability, and incident information among these communities at a time of complex geopolitical tensions.15For more discussion, see FIRST Press release on Teams suspension from FIRST, 25 March 2022, available at https://www.first.org/newsroom/releases/20220325 and ZDNet’s article on removing Russian maintainers of Linux kernel, 24 October 2024, available at https://www.zdnet.com/article/why-remove-russian-maintainers-of-linux-kernel-heres-what-torvalds-says
The UN GGE 2015 report, in its section on CBMs, provides16UN General Assembly, Report of the Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security, A/70/174, 2015, available at https://dig.watch/resource/un-gge-report-2015-a70174 that ‘States should seek to facilitate cross-border cooperation to address critical infrastructure vulnerabilities that transcend national borders’, and the third Annual Progress Report, in its Annex B on CBMs, states that states ‘exchange information and best practice on, inter alia, the protection of critical infrastructure (CI) and critical information infrastructure (CII), including through related capacity building’.17A/79/214, July 2024, available at https://docs.un.org/en/A/79/214 Geneva Dialogue experts agreed on the significant challenges associated with cross-border incident and threat information sharing for CIP. The lack of information sharing between states or between relevant stakeholders (e.g. cybersecurity researchers or incident response teams) from different jurisdictions makes it easier for threat actors to exploit vulnerabilities and increases security risks for all.
Message #6: The UN GGE norm F may not fully address the protection of CI, as it primarily focuses on intentional damage, potentially overlooking other scenarios that could affect CI security.
The UN GGE norm F focuses on intentional damage that impairs the use and operation of CI in providing services to the public. However, intentional damage is not limited to physical destruction; it can also include non-physical harms, such as disruptions to systems, economic losses, or psychological and social impacts. Cyber activities targeting or affecting CI frequently result in such non-physical harms,18‘Chinese Cyberattacks on Taiwan Government Averaged 2.4 Million a Day in 2024, Report Says’, Reuters, January 6, 2025, available at https://www.reuters.com/technology/cybersecurity/chinese-cyberattacks-taiwan-government-averaged-24-mln-day-2024-report-says-2025-01-06 which are not defined as attacks under international law. Moreover, intentional damage may have significant follow-on effects or collateral impacts, such as cascading failures across interconnected systems or long-term societal disruptions. To strengthen protections for CI operators and owners, it is crucial to clarify this norm to address both direct physical harms (e.g. physical damage due to kinetic effects) and indirect harms (e.g. economic disruption, data compromise). Experts further emphasise the importance of defining harm in a comprehensive and measurable manner, supported by evidence-based metrics and data-driven tools.19Statement on Advancing the framework of responsible State behaviour in cyberspace through the Harms Methodology, CyberPeace Institute, 21 March 2024, available at https://cyberpeaceinstitute.org/news/advancing-responsible-state-behaviour-in-cyberspace-harms-methodology/
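As a very rough illustration of what measurable harm could look like in practice, the sketch below (Python) aggregates physical and non-physical harm dimensions of an incident into a single weighted score. The categories and weights are hypothetical and are not the CyberPeace Institute's Harms Methodology; the point is simply that both direct and indirect harms can be captured in comparable, evidence-based terms.

```python
# Hypothetical harm dimensions, each scored 0 (none) to 5 (severe) by analysts.
HARM_WEIGHTS = {
    "physical_damage": 0.30,      # kinetic effects, equipment destruction
    "service_disruption": 0.25,   # outage duration and share of users affected
    "economic_loss": 0.20,        # direct and follow-on financial impact
    "data_compromise": 0.15,      # confidentiality/integrity of critical data
    "societal_impact": 0.10,      # psychological and social effects
}

def harm_score(scores: dict[str, int]) -> float:
    """Weighted average of analyst-assigned harm scores, normalised to 0-1."""
    total = sum(HARM_WEIGHTS[dim] * scores.get(dim, 0) for dim in HARM_WEIGHTS)
    return round(total / 5, 2)  # 5 is the maximum per-dimension score

# Hypothetical incident: a ransomware outage at a regional hospital network.
print(harm_score({
    "physical_damage": 0,
    "service_disruption": 4,
    "economic_loss": 3,
    "data_compromise": 4,
    "societal_impact": 3,
}))
```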
Message #7: The increase in inter-state conflicts underscores the need for states to provide clear legal guidance to private entities, helping to protect them and support their efforts in CIP.
Experts had different views on whether a legal distinction between wartime and peacetime is necessary in cyberspace. Some argued that cyberspace has never truly been at peace but has inherently remained a domain of continuous competition, conflict, and contestation. They suggested that such a distinction becomes superficial, as many cyber operations affecting CI fall below the threshold of armed conflict but still cause significant damage.21For instance, ‘Israel-Hezbollah War: Iran Hit by Massive Cyberattacks, Nuclear Facilities and Government Agencies Targeted’, Times of India, 13 October 2024, available at https://timesofindia.indiatimes.com/technology/tech-news/israel-hezbollah-war-iran-hit-by-massive-cyberattacks-nuclear-facilities-and-government-agencies-targeted/articleshow/114195005.cms and ‘Cyber Attacks Continue to Hit Critical Infrastructure, Exposing Vulnerabilities in Oil, Water, Healthcare Sectors’, Industrial Cyber, 14 February 2024, available at https://industrialcyber.co/critical-infrastructure/cyber-attacks-continue-to-hit-critical-infrastructure-exposing-vulnerabilities-in-oil-water-healthcare-sectors/
At the same time, other experts underlined that the legal distinction between wartime and peacetime is critical, as these are two different legal regimes. Even though the UN GGE norms are peacetime norms, international law distinguishes between war and non-war, and this distinction determines the applicability of the law of war.
Experts also discussed that the distinction between a threat or use of force and an armed attack is critical in determining the appropriate legal response in both the physical and cyber domains, and the line between them can be complex, particularly in the context of cyber activities. A ‘use of force’ refers to actions that may not necessarily trigger a right to self-defence, while an ‘armed attack’ would justify such a response under international law. Whether an act qualifies as a threat, use of force, or armed attack is determined by considering all relevant circumstances, including the scale, effects, and intent behind the action. It is important to note that these decisions are rarely confined to the cyber domain alone, as cyber operations often intersect with physical or geopolitical considerations. As cyberattacks increasingly affect CI, the threshold for what constitutes an armed attack in cyberspace should be analysed in the broader context of international law. For instance, experts cited the Tallinn Manual as an example of a comprehensive framework for assessing cyber activities against established legal norms.
One of the other concerns is the extent to which governments rely on private-sector entities, often from a foreign state, for IT services and operational support during peacetime, but also in times of crisis and conflict. This engagement can take multiple forms, ranging from direct contractual relationships with defense and intelligence agencies, to more informal cooperation, such as providing cybersecurity assistance, intelligence sharing, or even restricting access to digital services in contested areas.
As these private entities – especially large technology firms – assume critical roles in cybersecurity, their decisions and actions can have geopolitical consequences, sometimes even surpassing the influence of certain states. This raises questions about accountability and governance. The role of industry in modern conflicts has increased, and major tech companies ‘have evolved into de facto political players, not only by dedicating cybersecurity resources to conflict participants, but also through the ways they – intentionally or otherwise – shape public perceptions of the crisis’.22‘Public-private collaboration in Ukraine and beyond’, by Taylor Grossman, Monica Kello, James Shires, and Max Smeets. Binding Hook, April 2024: https://bindinghook.com/articles-binding-edge/public-private-collaboration-in-ukraine-and-beyond/ They do so by leveraging powerful cyber intelligence capabilities, such as advanced threat monitoring and analysis, which produce threat intelligence reports that influence the opinions and risk assessments of various actors, including CI owners outside the conflict zone. To what extent should these private actors be bound by international norms governing responsible state behaviour in cyberspace? How does the UN framework of responsible behaviour, including the cyber norms, translate to private actors’ actions in cyberspace during an (armed) conflict?
Compounding this issue is the fact that private actors and civilians today are more likely to become, intentionally or not, involved in armed conflicts23Several reviewers have noted the Montreux Document, which is primarily addressed to States but also outlines good practices that may be valuable for other entities, such as international organisations, NGOs, companies contracting private military and security companies (PMSCs), and the PMSCs themselves. The Montreux Document On pertinent international legal obligations and good practices for States related to operations of private military and security companies during armed conflict, 2008, available at https://www.montreuxdocument.org/pdf/document/en.pdf – whether through cyber operations, digital platforms, or critical infrastructure – yet their roles and legal status are not always clear. Experts in the Geneva Dialogue have discussed that the agreed UN GGE norms do not create obligations for non-state stakeholders and that it is the responsibility of governments to determine how the agreed cyber norms and international law should be implemented by stakeholders. However, as private-sector involvement in cybersecurity and conflict-related activities grows, there is an urgent need to clarify their responsibilities, particularly for large, influential companies.
At the same time, legal experts note24Jonathan Horowitz, ‘The Business of Battle: The Role of Private Tech in Conflict’, Lawfare, August 14, 2020, available at https://www.lawfaremedia.org/article/the-business-of-battle–the-role-of-private-tech-in-conflict that the current geopolitical environment heightens security risks for tech companies, their employees, and users. The International Committee of the Red Cross (ICRC) has outlined eight rules for ‘civilian hackers’ during war, and four obligations for states to restrain them. Notably, the ICRC emphasises that states have a due diligence obligation to prevent violations of international humanitarian law (IHL) by civilian hackers within their territory. However, the reality of cyber operations often disregards territorial boundaries, creating a legal gap in accountability.
Key roles and responsibilities: How can non-state stakeholders protect CI?
One of the goals of the Geneva Dialogue and Geneva Manual is to break down agreed cyber norms into practical actions that different stakeholders can take to reduce cyber risks. Below is a summary of key roles in implementing CIP-related norms, along with suggested actions – though this is not a comprehensive list.
Annex: Comparative analysis of how states approach CIP
Executive summary
As cyberthreats grow more sophisticated and geopolitical tensions reshape global security, governments worldwide are expanding their approach to critical infrastructure protection (CIP). From ransomware attacks on energy grids to state-sponsored cyber espionage, threats to essential services – such as finance, healthcare, telecommunications, and supply chains – are more pervasive than ever. In response, countries have broadened their regulatory powers, tightened controls over foreign technology, and imposed mandatory cybersecurity requirements. While the specific governance models and enforcement mechanisms differ, the overall trend is clear: governments are taking a more assertive role in securing critical infrastructure.
We have analysed how Australia, China, the European Union, Russia, Singapore, and the United States approach CIP, examining their strategies over the past 3–4 years. Our aim is to uncover key lessons to better understand how these states operationalise the UN GGE norms and enhance the protection of critical infrastructure. These countries were selected due to their representation of key actors in the field and their recent efforts to amend legal frameworks in response to external challenges. Our analysis seeks to inform ongoing discussions within the Geneva Dialogue on Responsible Behaviour in Cyberspace, particularly concerning the implementation of norms with non-state stakeholders.
Several key observations emerge from this analysis. First, whether through expanded intervention powers (Australia), extraterritorial regulatory reach (Singapore), strict data security controls (Russia and China), mandatory cybersecurity obligations (the European Union), or mandatory incident reporting and disclosure obligations (the USA), nations are reshaping their legal frameworks to address an increasingly complex threat landscape. As cyberthreats evolve, future CIP strategies will require a balance between regulatory control, technological sovereignty, and cross-border cooperation to ensure national resilience in an interconnected world.
There are other common trends in how the selected jurisdictions – Australia, China, the European Union, Russia, Singapore, and the United States – approach CIP. Nearly all governments prioritise supply chain security and impose stricter regulations on foreign technology providers. While the United States, Australia, and the EU take a targeted approach, banning individual companies from adversarial nations, China and Russia go further, pursuing comprehensive technological self-reliance in CI/CII sectors to achieve supply chain resilience.
Nations have also expanded their definitions of critical infrastructure or critical information infrastructure, particularly in response to the COVID-19 pandemic and rising geopolitical instability. Virtual systems of a transnational nature (cloud computing, data centres, globally distributed IT operations) have been widely recognised as integral to national security, leading to expanded regulatory oversight over such services beyond national borders. This trend is also seen in the expansion of regulatory powers beyond traditional CI operators to include service providers (Singapore), cloud vendors (all countries), and manufacturers of digital products and software regardless of whether they are based in the jurisdiction (the EU). In this context, Russia and China have been pioneers in imposing strict security obligations, including on overseas companies that provide essential services to their CII, as these countries have long focused on regulating data and information security to minimise foreign influence over their data ecosystems and critical infrastructure networks.
A decade ago, many nations relied on voluntary cybersecurity frameworks. Today, they have moved toward mandatory compliance models, requiring risk assessments, cyber incident reporting, and even mandatory vulnerability reporting (e.g. the EU and China). Vulnerability management and disclosure processes, as well as supply chain security, have been integrated into the security obligations for CI operators/owners in all of the examples analysed below. This reflects a shift toward proactive risk mitigation rather than reactive crisis management. Several states also treat cybersecurity as a core business process, introducing corporate accountability for executive management in cybersecurity efforts to achieve cyber resilience.
The way states define CI or CII reveals their security priorities, governance philosophy, and economic strategy. Some countries, like Australia, define CI broadly to include both physical and digital assets, as well as communication networks and supply chains across 11 sectors, reflecting the interconnected nature of modern infrastructure. In contrast, Singapore focuses primarily on CII, defined as computer systems crucial for delivering essential services. China and Russia also focus on defining CII.
At the same time, despite these advances in states’ efforts to operationalise the agreed cyber norms and improve their CIP legal frameworks, several areas are, we believe, missing from the analysed approaches to CIP. In particular, there is a lack of measures for cross-border coordination and intelligence sharing with other jurisdictions. The EU, due to its intergovernmental nature, is an exception, though even within its framework, information sharing on CI vulnerabilities or incidents with non-EU nations remains limited. Stronger cross-border intelligence-sharing agreements and joint response mechanisms for CI that is cross-border in nature, or whose disruption would significantly affect several jurisdictions, would help address transnational threats and contribute to the operationalisation of the agreed confidence-building measures (CBMs).
Additionally, while many countries focus on vendor bans and stricter foreign technology regulations, there is insufficient emphasis on strengthening supply chain resilience in the context of CIP in a more coordinated regional and global manner. In particular, securing open-source software and addressing vulnerabilities beyond individual vendors remain overlooked. Current frameworks also place primary responsibility for CIP on private-sector operators, with governments acting as regulatory enforcers rather than active defenders. While Australia has expanded government intervention powers, most national legal frameworks lack sufficient measures or guidance to help CI operators/owners systematically address cyberattacks on their infrastructure and to support them in developing robust defence mechanisms.
Overall, it is important to acknowledge the difficulty in finding evidence of how states integrate these non-binding norms into their policymaking narratives. This keeps the agreed framework largely confined to UN OEWG discussions and, in turn, limits its practical impact. Additionally, there is little transparency on how states operationalise norms and confidence-building measures to secure CI within their domestic policies and in cooperation with other states. This lack of visibility challenges the credibility of the agreed framework, making it more abstract and less effective as a practical tool for international cyber stability.
For non-state stakeholders – including industry players, technology providers, the technical community, civil society, and academia – the key takeaway is the need for deeper collaboration in translating agreed cyber norms into actionable practices. Governments alone cannot guarantee the security of critical infrastructure; a multi-stakeholder approach is essential. As national sovereignty concerns increasingly shape cybersecurity policies, striking a balance between these priorities and fostering cross-border cooperation will be critical to developing a resilient and sustainable CIP framework in an interconnected world.