Governments are waking up to the risks of powerful AI, but most preparations still assume AI behaves like software that sometimes fails, not like an actor that could strategically resist control. The United Kingdom [1] has built state-backed evaluation capacity through its AI Safety Institute and is now incorporating frontier AI into national security resilience planning. European Union [2] law now mandates serious incident reporting, logging, and human oversight for high-risk AI systems.