Cyber stability under pressure: A reality check for cyber norms in an era of AI-driven cyber risks


Time: 08:00–10:30 UTC / 10:00–12:30 CEST

On-site registration: RSVP to genevadialogue@diplomacy.edu and also register for the Geneva Cyber Week before 30 April

2026 Geneva Cyber Week

Cyber stability is under increasing strain — not only from more sophisticated attacks, but from the rapid integration of artificial intelligence into both the tools used to carry them out and those used to defend against them. The same technology that makes defenders faster also makes attackers faster. The same AI model that helps a security team identify weaknesses in their own systems can help an adversary find them first.

At the same time, several major AI providers have revised the terms governing how their models may be used, in some cases loosening terms that previously restricted military and national security applications. Governments in a number of jurisdictions have actively sought to expand their access to commercial AI capabilities for defence and intelligence purposes. There is evidence that criminal and APT groups — including those allegedly affiliated with states — are increasingly adopting commercial AI tools to automate cyber attacks at greater scale, while reducing the time and human resources required. Commercial AI security products, including those being procured by critical infrastructure operators, are built on underlying models whose permitted uses and governance terms may not be fully visible to the organisations deploying them.

This raises fundamental questions: when the same AI tools serve both attack and defence, what does “responsible use” actually mean in practice? Who sets the boundaries, and what happens when those boundaries are moved? How do existing cyber norms hold up when the technology they are supposed to govern has changed faster than the norms themselves?

This scenario-based session takes place during the Geneva Cyber Week and is open to both onsite and online participants. It brings together experts and decision-makers from across stakeholder groups — including public policymakers, critical infrastructure operators, technology providers, cybersecurity practitioners, AI governance specialists, compliance and risk professionals, and civil society and academic experts.

The session will be held under the Chatham House Rule.

Its findings will directly inform the third chapter of the Geneva Manual on Responsible Behaviour in Cyberspace. To join online, please RSVP at genevadialogue@diplomacy.edu.
