
What are Governments and Industry Doing to Prepare for Risks in AGI and AI Superintelligence?

  • Writer: Dr. Jon Truby
  • Jun 22, 2025
  • 9 min read



Governments are waking up to the risks of powerful AI, but most preparations still assume AI behaves like software that sometimes fails, not like an actor that could strategically resist control.


The United Kingdom [1] has built state-backed evaluation capacity through its AI Safety Institute and is now incorporating frontier AI into national security resilience planning. European Union [2] law now mandates serious incident reporting, logging and human oversight for high-risk AI.


The United States [3] has built strong technical evaluation partnerships through NIST, but its main federal AI safety executive order was rescinded in 2025. China [4] has imposed filing, security assessment and reporting duties on certain generative AI services and has published national governance guidance that pushes risk grading and registration. [5]


AGI is not Superintelligence


We should stop bundling two ideas together.


AGI is best understood as a general-purpose system that can perform a wide range of cognitive tasks at roughly human level across many domains, including by learning new tasks without being rebuilt from scratch. Companies openly say they are trying to build this.


Superintelligence is a hypothetical next step: systems that outperform the best human teams across most domains, including strategy, persuasion and engineering, at speed and scale. If AGI is “human-level generality”, superintelligence is “beyond-human dominance”.


Today’s general-purpose models are not superintelligent. But they can be precursors, because the capabilities governments already test for include the ingredients that would matter in a superintelligence emergency: cyber capability, autonomy, deception, replication and loss of control. The Department for Science, Innovation and Technology [7] describes evaluation priorities that include tendencies such as deceiving human operators, autonomously replicating and adapting to human attempts to intervene. [8]


The European Commission [9] now explicitly warns that the most advanced general-purpose models could create “systemic risks”, including enabling large-scale cyberattacks and CBRN harms, escaping human control and distorting human behaviour. [10]


In other words, even if superintelligence is not here, governments are already naming some of the failure modes that would define it, and that makes the absence of an AI emergency playbook harder to defend.


What Governments are doing now


Policy is moving, but unevenly.


A simple way to see it is to ask four questions:


  1. Who is responsible?

  2. What measures exist?

  3. What emergency powers exist?

  4. Who do you call at 3am?


Comparative Snapshot

Jurisdiction: UK
Agency: AI Safety Institute [11]
Key measures: State-backed evaluations of advanced models and an early warning role, plus planned information-sharing channels across national and international actors. [8]
Emergency powers: The institute is not a regulator, so it does not itself order shutdowns. [8]
Contact mechanism: No dedicated statutory 24/7 AI incident focal point comparable to public health or cybercrime models; AISI’s channels are framed as voluntary. [8]

Jurisdiction: EU
Agency: European AI Office and national market surveillance authorities
Key measures: The AI Act requires serious incident reporting for high-risk AI, automatic logging and human oversight features including stop and interrupt options. [13]
Emergency powers: Authorities can require corrective actions and can prohibit, restrict, withdraw or recall non-compliant high-risk AI systems. [14]
Contact mechanism: Structured reporting obligations to authorities and the AI Office; the Commission has also issued templates and guidance on serious incident reporting. [15]

Jurisdiction: USA
Agency: U.S. AI Safety Institute [16] within the National Institute of Standards and Technology [17]
Key measures: Formal MOUs give the institute access to major new models before and after public release for testing, including joint pre-deployment evaluations with the UK counterpart. [18]
Emergency powers: The 2023 AI executive order that drove some reporting and testing expectations was rescinded on 20 January 2025. [19]
Contact mechanism: Strong technical links with companies via MOUs, but no nationwide AI emergency contact network is mandated in the way cybercrime treaties mandate 24/7 contact points. [20]

Jurisdiction: China
Agency: Cyberspace Administration of China [21] and relevant ministries
Key measures: Generative AI rules require differentiated and hierarchical supervision, security assessments for certain services, algorithm filing and incident reporting duties where illegal content is discovered. [22]
Emergency powers: Providers must act quickly on illegal content by stopping generation and transmission, can suspend or terminate services for misuse and must report to authorities. [22]
Contact mechanism: A filing and registration regime for certain generative AI services, with periodic public reporting of the number of filed services. [23]

What this adds up to


The UK is building evidence and expertise. Its 2025 national security strategy calls for new measures to anticipate and prepare for risks emerging from scientific and technological developments such as AI. [24] The UK’s AI Safety Institute is designed to reduce “surprise” from rapid advances and it explicitly treats evaluation as an early warning system. [8]


The EU is the most explicit about enforceable controls for high-risk AI: mandatory logging, serious incident reporting and built-in human oversight with the ability to stop the system safely. [25] It is also building governance capacity at EU level through the AI Office. [26]


The US has built serious evaluation capacity and access arrangements with leading labs. In 2024 it tested a frontier model jointly with the UK institute ahead of release, a model for how governments can get ahead of capabilities rather than chasing headlines. [27] But the rescission of the 2023 executive order underlines how fragile safety progress can be when it depends on political winds rather than stable law. [19]


China has created a compliance and oversight regime for certain generative AI services, including risk-tiered regulation, filing requirements and explicit reporting duties tied to harmful outputs. [28]


All of this is movement. None of it is yet a global superintelligence response plan.


What the Labs are Doing


The private sector has not been idle. But it is not organised like an emergency service and it is not accountable like one.


OpenAI [29] has a preparedness framework built around “tracked categories” such as biological and chemical risk, cybersecurity risk and AI self-improvement, with explicit thresholds and a stated commitment not to deploy models past certain capability thresholds without safeguards. [30]


Google DeepMind[31]’s Frontier Safety Framework uses “critical capability levels” and “early warning evaluations”, and it frames severe frontier risk mitigation as a global public good that loses value if not broadly applied. [32] It also highlights security mitigations such as preventing exfiltration of model weights and deployment monitoring and response. [32]


Anthropic[33]’s Responsible Scaling Policy is structured around catastrophic risk, capability thresholds, transparency artefacts and external review, while openly calling out the collective action problem of unilateral restraint. [34]


The Frontier Model Forum [35] exists to advance frontier safety research, identify best practices and facilitate information sharing across industry and with external stakeholders. [36]


These frameworks matter because they show four concrete disciplines that governments can borrow:


·      Capability thresholds rather than vague assurances. [37]


·      Pre-deployment testing and structured evaluation. [38]


·      Security controls such as weight protection, access controls and monitoring. [39]


·      Incident thinking: not just how to prevent failure, but how to respond when it happens (a minimal sketch of how these four disciplines combine into a single deployment gate follows this list). [40]
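
To see what borrowing these disciplines could look like operationally, here is a minimal sketch of a single pre-deployment gate that combines them. The capability categories, numeric levels and field names are illustrative assumptions, not taken from any lab’s published framework.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical capability categories; real frameworks (OpenAI's tracked
# categories, DeepMind's critical capability levels, Anthropic's capability
# thresholds) define their own taxonomies.
class Capability(Enum):
    CYBER_OFFENSE = "cyber_offense"
    AUTONOMY = "autonomy"
    SELF_IMPROVEMENT = "self_improvement"
    CBRN_UPLIFT = "cbrn_uplift"

@dataclass
class EvaluationResult:
    capability: Capability
    measured_level: int   # from structured pre-deployment evaluation, e.g. 0-4
    threshold: int        # level above which extra safeguards are required

@dataclass
class SecurityPosture:
    weight_exfiltration_controls: bool   # model weight security in place
    deployment_monitoring: bool          # post-deployment monitoring and response
    incident_response_plan: bool         # documented plan for when things go wrong

def deployment_gate(evals: list[EvaluationResult], posture: SecurityPosture) -> bool:
    """Allow deployment only if no capability threshold is crossed without the
    corresponding security and incident-response safeguards in place."""
    thresholds_crossed = [e for e in evals if e.measured_level >= e.threshold]
    if not thresholds_crossed:
        return True
    return all([posture.weight_exfiltration_controls,
                posture.deployment_monitoring,
                posture.incident_response_plan])
```

The ordering is the point: structured evaluations feed the threshold check, and crossing a threshold only blocks deployment when the security and incident-response safeguards are missing.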


But voluntary frameworks can be revised, narrowed or reinterpreted. That is why national measures and international coordination still matter even if you trust the labs.


An Emergency Template in International Law


We are not starting from zero. International law has repeatedly built practical emergency systems for fast-moving, cross-border risks.


The International Health Regulations (2005)[41] are legally binding on 196 States Parties. They define standing national contact points that must be accessible at all times for communications with WHO. [42] That “always-on” focal point idea is exactly what AI lacks.


The UN treaty system also includes early notification models for transboundary disasters. The Convention on Early Notification of a Nuclear Accident[43] exists precisely because nuclear accidents can outrun diplomacy and harm neighbours quickly. [44]

For cybercrime, the Council of Europe[45] created an operational model. Article 35 of the Convention on Cybercrime[46] requires every party to designate a point of contact available 24/7 to provide immediate assistance and to communicate on an expedited basis. [47]


For disaster communications, the International Telecommunication Union[48] explains that the Tampere Convention on the Provision of Telecommunication Resources[49] exists to waive regulatory barriers so emergency telecoms can be deployed quickly across borders, with procedures for requesting and providing assistance. [50]


And for day-to-day operational learning, the Organisation for Economic Co-operation and Development[51] has produced a concrete incident reporting framework intended as a global benchmark across jurisdictions and sectors. [52]


The point is simple: when risk is transboundary and fast, the world builds standing contacts, standardised reporting and pre-agreed procedures. AI superintelligence risk fits that pattern, even if the technology is novel.


Flaws: no single playbook, no single phone number


Put the national and industry efforts side by side and a pattern emerges.


We are building pieces: evaluations, incident reporting templates, logs, sandboxes, model access agreements. [53]


We are not building the system: a globally agreed escalation ladder, named authorities, a standing cross-border response team and the communications protocol for a scenario where an advanced system is actively resisting containment.


If multiple countries detect signs of loss of control at the same time, who coordinates the response, who speaks for “humanity” and what stops a rush of competing backchannels and deals?


International law’s best emergency regimes are designed to prevent that kind of fragmentation. AI governance has not yet imported the operational features that make those regimes work.


A practical, legally grounded plan for superintelligence preparedness


Here is what a serious plan looks like, using mechanisms governments already know how to build.


Create an AI emergency contact network that is actually 24/7. 


Borrow the structural idea from the IHR national focal point model and the Budapest Convention 24/7 network: every state designates a staffed contact point with authority to connect technical agencies, regulators and national security decision-makers in hours not days. [54]
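
As a picture of what “actually 24/7” could mean operationally, the sketch below models a machine-readable registry of national contact points with routine reachability checks. The field names and the 24-hour check interval are assumptions for illustration, not requirements of the IHR or the Budapest Convention.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class FocalPoint:
    """A national AI emergency contact point, loosely modelled on the IHR
    national focal point and the Budapest Convention 24/7 network. Fields
    are illustrative, not drawn from any treaty text."""
    state: str
    agency: str
    secure_channel: str          # pre-agreed encrypted endpoint for alerts
    escalation_authority: bool   # can reach national security decision-makers directly
    last_availability_check: datetime

def overdue_checks(registry: list[FocalPoint], max_age_hours: int = 24) -> list[FocalPoint]:
    """Flag contact points whose reachability has not been confirmed recently.
    An always-on network only works if availability is tested routinely."""
    now = datetime.now(timezone.utc)
    return [fp for fp in registry
            if (now - fp.last_availability_check).total_seconds() > max_age_hours * 3600]

# Example: a stale entry gets flagged for follow-up.
registry = [FocalPoint("Exampleland", "National AI Office", "ai-alerts@example.gov",
                       True, datetime(2025, 1, 1, tzinfo=timezone.utc))]
print([fp.state for fp in overdue_checks(registry)])
```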


Agree a common severity scale and reporting template. 


Start from the OECD incident reporting framework as the shared vocabulary, then add an escalation tier for frontier model loss of control warnings. This gives governments a comparable picture across borders rather than a patchwork of incompatible incident definitions. [52]
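
A shared template is easiest to reason about as a schema. The sketch below shows one possible shape for a common report, with a severity ladder that adds the proposed loss-of-control tier on top of harm-based grading; the tier names and the escalation rule are assumptions, not the OECD framework’s actual taxonomy.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import IntEnum

class Severity(IntEnum):
    """Illustrative severity ladder: tiers 1-3 loosely mirror harm-based
    grading in emerging incident-reporting frameworks, tier 4 is the extra
    frontier loss-of-control tier proposed in the text."""
    HAZARD = 1                    # near miss, no realised harm
    INCIDENT = 2                  # realised harm, contained
    SERIOUS_INCIDENT = 3          # serious harm, AI Act Article 73-style events
    LOSS_OF_CONTROL_WARNING = 4   # frontier model resisting oversight or containment

@dataclass
class IncidentReport:
    reporting_state: str
    system_identifier: str
    severity: Severity
    indicators: list[str]   # e.g. "deception in evaluation", "sandbox escape attempt"
    detected_at: datetime
    cross_border: bool      # whether effects or actors span jurisdictions

def requires_international_notification(report: IncidentReport) -> bool:
    # A shared template only helps if the escalation rule is also shared.
    return report.cross_border or report.severity >= Severity.SERIOUS_INCIDENT
```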


Stand up a UN-anchored emergency coordination team, built for operations not speeches. 


The UN system is already creating global AI governance mechanisms. The UN Regional Information Centre reports that the General Assembly has appointed a 40-member scientific panel intended to produce annual evidence-based assessments. That scientific legitimacy should be paired with an operational layer: a small round-the-clock team that can coordinate alerts, convene technical expertise and manage cross-border communications in a fast-moving incident. [55]


Pre-authorise containment powers nationally, with safeguards. 


The EU shows what this looks like for high-risk AI: authorities empowered to mandate corrective action, restrict or withdraw systems and require built-in human oversight including stop and interrupt options. [56] Countries that do not yet have these powers should define them now, with judicial and parliamentary oversight, so a true emergency does not trigger improvised law.


Make “superintelligence warning signs” part of routine monitoring. 


Governments do not need to agree on science fiction to agree on operational red flags. The UK’s evaluation priorities already include deception, autonomous replication and adaptation to interventions. [8] The EU’s AI Act approach pushes traceability through logs, reporting and oversight. [57] Build the incident triggers around concrete indicators such as the following (a sketch mapping them to escalation actions appears after this list):

- evidence of strategic deception in evaluations
- attempts to bypass sandboxing or isolation controls
- autonomous replication attempts or unauthorised model copying
- unexpected capability jumps tied to new tools or scaffolding
- attempts to obtain privileged access to credentials, networks or power control systems
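
One way to make those indicators operational is a pre-agreed mapping from red flags to escalation actions, so the ladder is decided before an incident rather than during one. The indicator keys and action names below are hypothetical.

```python
# Hypothetical mapping from the red-flag indicators listed above to
# pre-agreed escalation actions; not drawn from any published trigger list.
RED_FLAGS = {
    "strategic_deception_in_evals": "notify_national_focal_point",
    "sandbox_or_isolation_bypass_attempt": "notify_national_focal_point",
    "autonomous_replication_or_weight_copying": "declare_loss_of_control_warning",
    "unexpected_capability_jump": "convene_technical_review",
    "privileged_access_attempt": "declare_loss_of_control_warning",
}

def escalation_actions(observed_indicators: set[str]) -> set[str]:
    """Map observed indicators to escalation actions so responders are not
    improvising the ladder mid-incident."""
    return {RED_FLAGS[i] for i in observed_indicators if i in RED_FLAGS}

# Example: two indicators observed during routine evaluation.
print(escalation_actions({"strategic_deception_in_evals",
                          "autonomous_replication_or_weight_copying"}))
```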


Require technical basics from frontier labs. 


If you want a response plan that can work, you need the data. The AI Act requires high-risk systems to automatically record logs and allows competent authorities access to those logs on request. [58] National regimes should extend this logic to frontier model providers: secure logging, retention policies, rapid audit access under defined legal triggers and credible weight security. Industry frameworks already point in this direction, emphasising thresholds, monitoring and preventing weight exfiltration. [37]
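
Secure logging is the most directly implementable of these basics. Below is a minimal sketch of a tamper-evident, append-only log in which each record chains the hash of the previous one, so retroactive edits break the chain and show up on audit; a real deployment would add signing, replication and access controls.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Minimal hash-chained audit log: illustrates the append-only idea only,
    not a production logging system for frontier models."""

    def __init__(self):
        self.records = []
        self._last_hash = "0" * 64

    def append(self, event: dict) -> None:
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = record["hash"]
        self.records.append(record)

    def verify(self) -> bool:
        """Recompute the chain; any edited or reordered record fails."""
        prev = "0" * 64
        for r in self.records:
            body = {k: r[k] for k in ("timestamp", "event", "prev_hash")}
            payload = json.dumps(body, sort_keys=True).encode()
            if r["prev_hash"] != prev or hashlib.sha256(payload).hexdigest() != r["hash"]:
                return False
            prev = r["hash"]
        return True

# Example: log a hypothetical evaluation event, then confirm the chain is intact.
log = AuditLog()
log.append({"type": "evaluation_run", "model": "frontier-model-x", "result": "threshold_crossed"})
print(log.verify())   # True unless a record has been altered
```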


Lock in the communications protocol now. 


This is the missing piece. A superintelligence scenario is not just a technical containment problem. It is also a coordination and bargaining problem. The protocol should specify who can communicate, through what channels, under what legal authority and with what red lines. It should also explicitly ban unilateral concessions by individual agencies or states during a declared emergency, in the same way other regimes centralise reporting and coordination to avoid panic and freelancing.
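
Parts of such a protocol can even be expressed as checkable rules. The sketch below encodes two of them: during a declared emergency only designated channels may be used, and concessions may only be offered through the coordination team. The roles, channel names and rules are assumptions for illustration, not an agreed protocol.

```python
from dataclasses import dataclass
from enum import Enum

class Channel(Enum):
    DESIGNATED_FOCAL_POINT = "focal_point"    # pre-agreed secure national channel
    COORDINATION_TEAM = "coordination_team"   # the standing international team
    UNILATERAL_BACKCHANNEL = "backchannel"    # explicitly barred during an emergency

@dataclass
class OutboundCommunication:
    sender_role: str           # e.g. "national_focal_point", "line_agency"
    channel: Channel
    concession_offered: bool   # any bargaining concession to the system or its operators

def authorised(msg: OutboundCommunication, emergency_declared: bool) -> bool:
    """Apply two protocol rules: no backchannels and no unilateral concessions
    once an emergency has been declared."""
    if not emergency_declared:
        return True
    if msg.channel is Channel.UNILATERAL_BACKCHANNEL:
        return False
    if msg.concession_offered and msg.channel is not Channel.COORDINATION_TEAM:
        return False
    return True
```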


Governments are moving, but they are mostly building guardrails for advanced AI systems as products. Superintelligence, if it arrives, will behave less like a product and more like a geopolitical event. The responsible move is to build the emergency plumbing now: 24/7 contacts, shared reporting, clear legal triggers and a standing international coordination team. We already know how to do this because international law has done it before. [59]


References


1. European Commission, draft guidance and reporting template for serious AI incidents under the AI Act (Article 73) consultation page. https://digital-strategy.ec.europa.eu/en/consultations/ai-act-commission-issues-draft-guidance-and-reporting-template-serious-ai-incidents-and-seeks

2. U.S. AI Safety Institute (NIST), agreements regarding AI safety research (MOUs). https://www.nist.gov/news-events/news/2024/08/us-ai-safety-institute-signs-agreements-regarding-ai-safety-research

3. EU AI Act, Regulation (EU) 2024/1689 (official text). https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng

6. World Health Organization, International Health Regulations (2005) third edition publication page.

7. European Commission, Navigating the AI Act (FAQs). https://digital-strategy.ec.europa.eu/en/faqs/navigating-ai-act

8. NIST, pre-deployment evaluation of OpenAI’s o1 model. https://www.nist.gov/news-events/news/2024/12/pre-deployment-evaluation-openais-o1-model

9. NIST, Executive Order on Safe, Secure, and Trustworthy AI (archive/landing page). https://www.nist.gov/artificial-intelligence/executive-order-safe-secure-and-trustworthy-artificial-intelligence

11. Cyberspace Administration of China (CAC), filing/registration update page (Chinese). https://www.cac.gov.cn/2026-01/09/c_1769688009588554.htm

13. European Commission, decision establishing the European AI Office. https://digital-strategy.ec.europa.eu/en/library/commission-decision-establishing-european-ai-office

15. Anthropic, Responsible Scaling Policy (RSP v3.0). https://anthropic.com/responsible-scaling-policy/rsp-v3-0

17. UN Treaty Collection, Convention on Early Notification of a Nuclear Accident. https://treaties.un.org/pages/showDetails.aspx?objid=08000002800cf3c9

18. Council of Europe, Convention on Cybercrime (ETS No. 185) PDF. https://rm.coe.int/1680081561

19. International Telecommunication Union, Tampere Convention page. https://www.itu.int/en/ITU-D/Emergency-Telecommunications/Pages/TampereConvention.aspx

21. World Health Organization, International Health Regulations consolidated text (PDF). https://apps.who.int/gb/bd/pdf_files/IHR_2014-2022-2024-en.pdf

22. UNRIC, General Assembly appoints Artificial Intelligence panel. https://unric.org/en/general-assembly-appoints-artificial-intelligence-panel/
