Open source software (OSS) is effectively part of the critical digital infrastructure we all rely on. That was perhaps the clearest message to emerge from the first masterclass of the Geneva Dialogue’s 2026 programme, which brought together three experts to examine how security responsibilities for OSS are allocated in different contexts; how the most comprehensive regulatory framework for software development, the EU’s Cyber Resilience Act (CRA), lays down those responsibilities; and how, despite all of this, threat actors continue to find ways to exploit trust in OSS supply chains, and what that implies for practitioners.

The session kicked off the 2026 programme of the Geneva Dialogue on Responsible Behaviour in Cyberspace, which aims to stress-test existing cyber norms and cybersecurity practices under geopolitical pressure, technological development, and real-world constraints. The programme is organised around four key topics, with the security and governance of OSS chosen as the first, and examines how non-state actors navigate this complex environment when making decisions about cybersecurity. The Geneva Dialogue’s goal is to analyse not whether cyber norms exist on paper, but whether they hold when geopolitical pressures mount, technology accelerates, and accountability becomes genuinely contested.

Three angles on one problem

Governance and policy: Accountability follows benefit

Mika Lauhde (Luxembourg House of Cybersecurity) opened the discussion by reframing the accountability question in deliberately stark terms. Why, he asked, should security responsibilities differ depending on whether an organisation uses proprietary or open source software? If you operate the system, you own the risk, regardless of licensing model.

The argument is deceptively simple but carries significant weight for policy. With 96% of all software globally containing open source components, the distinction between “open” and “proprietary” is effectively meaningless as a basis for assigning security obligations. What matters is who receives commercial benefit and who exercises operational control. His formulation – “shared accountability often becomes nobody’s accountability” – captures a governance failure that is both technical and political.

“If somebody else is controlling and having all these control points for your software, you can’t really say you are digitally independent.”

Mika also elevated the stakes beyond compliance to digital sovereignty. For humanitarian organisations, international bodies, and governments claiming neutrality and independence, dependence on proprietary platforms controlled by a handful of dominant providers is not merely a procurement issue – it is a structural vulnerability. The case for treating open source as strategic digital infrastructure is, in this reading, also a case for genuine institutional autonomy.

Regulation: CRA as catalyst for collaboration

Roman Zhukov (Red Hat) brought a regulatory lens to a community that has often viewed the EU’s Cyber Resilience Act with anxiety. His central clarification was important: the CRA imposes no obligations on open source maintainers who do not monetise their work. Requirements fall on manufacturers, i.e. companies that incorporate open source into commercial products placed on the EU market.

This distinction matters enormously for how the open source community engages with the regulation. Rather than viewing the CRA as a threat, Roman argued that it creates an unprecedented mechanism for collaboration: manufacturers must now conduct due diligence on their open source dependencies, stand ready to report actively exploited vulnerabilities to CSIRTs within 24 hours, and share security fixes upstream. In effect, the regulation creates structured incentives for the private sector to invest in the open source commons it depends upon.

The risk-based approach embedded in the CRA gives manufacturers flexibility. In particular, there are no prescribed windows for fixing all vulnerabilities, only reporting obligations for actively exploited ones, while the concept of ‘open source software stewards’ creates a formal role for foundations and consortia that provide ongoing support for commercially used projects.

Yet Roman and Mika both acknowledged that the regulation is live territory, not settled law. Consultation processes were still underway at the time of the masterclass, with specific ambiguities around hybrid commercial/community projects and non-profit service providers still to be resolved. The message to the open source community was clear: engage with the process rather than exit from it.

Supply chain operations: The inheritance problem

Costin Raiu (TLP Black) brought a threat intelligence perspective that grounded the policy and regulatory discussion in operational reality. His analysis of three major supply chain attacks (the XZ Utils backdoor, SolarWinds, and the Notepad++ compromise) illustrated a common pattern: sophisticated attackers do not breach the target directly. They find the weakest link in a chain of dependencies, often one that is under-resourced, lightly scrutinised, and trusted implicitly.

The XZ Utils case is instructive in its scale and its method. An unknown attacker spent approximately three years contributing to the project, building trust and community standing, before using social engineering to pressure the maintainer into ceding control, and injecting a backdoor that would have provided cryptographic access to any Linux system worldwide. It was discovered by accident, when a Microsoft engineer noticed SSH logins running fractionally slower than expected during performance testing. Attribution remains unknown.

“You inherit the entire history of the ecosystem you use. You’re not just trusting a single developer — you’re trusting a global web of contributors, hosting providers, automated update scripts, digital certificates, hardware suppliers.”

The average organisation, Costin noted, has approximately 50 hardware and software suppliers. Large organisations may have hundreds or thousands. At that scale, comprehensive risk assessment is not merely difficult – it is functionally impossible with current tooling and resourcing. The implication is uncomfortable: end users may have limited ability to prevent supply chain compromises, and must therefore invest heavily in detection and recovery capabilities alongside any preventive measures.
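The inheritance problem Costin describes can be sketched as a graph traversal: every package reachable from your direct dependencies is code you implicitly trust. The package names and dependency graph below are entirely hypothetical, chosen only to show how two direct dependencies can quietly expand into a much larger trusted surface.

```python
from collections import deque

# Hypothetical dependency graph: each package maps to its direct dependencies.
DEPS = {
    "my-app": ["web-framework", "crypto-lib"],
    "web-framework": ["http-parser", "template-engine"],
    "crypto-lib": ["compression-lib"],
    "http-parser": [],
    "template-engine": ["compression-lib"],
    "compression-lib": [],  # an under-resourced utility deep in the chain
}

def transitive_deps(root: str, graph: dict[str, list[str]]) -> set[str]:
    """Breadth-first walk: everything reachable from root is trusted code."""
    seen: set[str] = set()
    queue = deque(graph.get(root, []))
    while queue:
        pkg = queue.popleft()
        if pkg in seen:
            continue
        seen.add(pkg)
        queue.extend(graph.get(pkg, []))
    return seen

# Two direct dependencies already pull in five trusted packages.
print(sorted(transitive_deps("my-app", DEPS)))
```

Real ecosystems behave the same way at far greater scale: a handful of declared dependencies routinely resolves to hundreds of transitive ones, each with its own maintainers, build infrastructure, and history.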

Takeaways for Geneva Dialogue discussions on non-state actors’ responsibilities and cyber norms

The masterclass illustrated something important about the Geneva Dialogue’s distinctive contribution: the space it creates for technically grounded, cross-sector conversation that state-centric forums struggle to replicate.

The central, and often asked, question – who is responsible for open source security? – is not resolvable through intergovernmental negotiation alone. The maintainers of the most critical open source projects are often individuals or small community organisations with no formal standing in policy processes. The manufacturers who depend on their work are frequently global companies subject to multiple and sometimes conflicting regulatory jurisdictions. The governments and international organisations that use both sit downstream of decisions made without their input.

The Geneva Dialogue’s framing of non-state actors as essential participants in cybersecurity is precisely calibrated to this problem. Several of the gaps the discussion surfaced have direct relevance to the broader norms conversation.

The discussion was candid about what it did not resolve. Both Costin and Mika acknowledged that AI tools are dramatically increasing the volume of code being contributed to open source projects, and that meaningful human review of all contributions may soon be impossible. This challenges basic assumptions about how community-based quality control operates, and no framework currently exists for managing AI-generated contributions. The question of who sets quality standards, and who enforces them, remains open and worth discussing.

The absence of attribution for the most sophisticated supply chain attack in history suggests limits to what transparency norms can achieve when state-level actors operate through civilian infrastructure.

The politicisation of open source — through sanctions, export controls, and the decisions of US-headquartered foundations — creates risks of incompatible forks and fragmented ecosystems. If the norm of open, collaborative development erodes under geopolitical pressure, what replaces it? The masterclass surfaced the question; it did not answer it.

Costin also warned that compromised or poisoned AI models could insert subtle backdoors into code generated for millions of developers, which would represent a qualitative shift in the threat landscape. If this attack vector matures, it may make current supply chain security frameworks — designed around human contributors and identifiable code changes — structurally inadequate.

The masterclass laid bare the accountability gap at the heart of open source security, and underscored once more why multistakeholder forums that can hold technical, regulatory, and geopolitical conversations simultaneously are not a luxury. In 2026, they are a necessity.