
The Four AI Laws Your Board Will Ask You About This Year (And What to Actually Say)

A practitioner's field guide to NIST, ISO 42001, the Australian Voluntary AI Safety Standard, and the EU AI Act — what they are, how they fit together, and what a leader should do in the next ninety days.

Robin Leonard
21 April 2026

Disclaimer: I am not a lawyer; I am simply an AI nerd. This is not legal advice, and you should absolutely consult a lawyer.

A few weeks ago I was sat in a meeting room that had too much glass and not enough aircon, talking to the CIO and General Counsel of a large retailer. Forty-five minutes in, the GC opened a browser tab, flipped her laptop around, and said, "Robin, we've got four months. Walk me through this."

The tab was the EU AI Act compliance deadline for high-risk systems. 2 August 2026. She'd been told by a vendor the day before that it "probably doesn't apply" because the company was headquartered in Australia. She didn't believe them. She was right not to.

I've had some variant of that conversation in Sydney and Auckland in the last couple of months. Different industries, same expression. Boards are asking about AI risk. Customers are asking for evidence. Auditors are sharpening their pencils. And the executives in the room are suddenly realising that the four frameworks nobody briefed them on are about to determine the next three years of their AI programme.

So here's the briefing nobody is giving you. What NIST, ISO 42001, the Australian Voluntary AI Safety Standard, and the EU AI Act actually are. How they fit together. And what a leader should do about them in the next ninety days.

Grab a coffee. This one's worth reading properly.

Why This Suddenly Matters

If you've been treating AI governance as a 2027 problem, the calendar has bad news.

The EU AI Act entered into force on 1 August 2024 (European Commission). The ban on prohibited AI practices went live on 2 February 2025. General-purpose AI model obligations kicked in on 2 August 2025. The high-risk system obligations, the ones most likely to reach into a retailer or a bank or an insurer, go live on 2 August 2026. As I write this, that's roughly fifteen weeks away.

Meanwhile Australia's Department of Industry, Science and Resources released a Voluntary AI Safety Standard in September 2024 and consulted in parallel on mandatory guardrails for AI in high-risk settings (DISR). The consultation closed. The drafting continued. The direction of travel in Canberra is unambiguous.

On the international standards front, ISO/IEC 42001 has been live since December 2023 and is now the certification that enterprise procurement teams are quietly starting to require (ISO). NIST's AI Risk Management Framework has been the de facto operating manual in North America since January 2023, with a Generative AI Profile bolted on in July 2024 (NIST AI 600-1).

Four frameworks. Different origins, different mechanisms, converging fast. If your AI governance slide still says "we'll look at this next FY," you are about to have a very uncomfortable conversation.

Let's go through them.


1. NIST AI Risk Management Framework: Your Operating Manual

The NIST AI RMF is the thing your practitioners should actually run. It was published by the US National Institute of Standards and Technology in January 2023 and is voluntary, non-regulatory, and pleasingly free (NIST AI 100-1). A Generative AI Profile, NIST AI 600-1, followed in July 2024 and covers the failure modes that keep modern CISOs awake: hallucination, data leakage through prompts, model poisoning, CBRN misuse, and IP infringement.

The framework is organised around four functions: Govern, Map, Measure, and Manage. Govern is the culture, policies, and accountability layer. Map is the bit where you work out what each AI use case actually does, who it affects, and what could go wrong. Measure is test, evaluate, monitor. Manage is prioritise, respond, and decommission. You apply these across the AI lifecycle, use case by use case.
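To make that concrete, here's roughly what one entry in a use-case register looks like when you organise it around the four functions. A minimal Python sketch; the field names are mine, not NIST's.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """One entry in an AI use-case register, organised by the four
    NIST AI RMF functions. Field names are illustrative, not from NIST."""
    name: str
    owner: str                        # Govern: named accountable executive
    purpose: str                      # Map: what the system actually does
    affected_parties: list[str]       # Map: who it touches
    known_risks: list[str]            # Map: what could go wrong
    evaluations: list[str] = field(default_factory=list)  # Measure: tests run
    mitigations: list[str] = field(default_factory=list)  # Manage: responses
    status: str = "active"            # Manage: active / remediating / retired

cv_screener = AIUseCase(
    name="CV screening assistant",
    owner="Chief People Officer",
    purpose="Rank inbound job applications",
    affected_parties=["job applicants", "recruiters"],
    known_risks=["bias against protected groups", "hallucinated qualifications"],
    evaluations=["quarterly disparate-impact test"],
    mitigations=["human review of all rejections"],
)
```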

[Figure: the four NIST AI RMF core functions: Govern, Map, Measure, Manage]

Why practitioners love it: it's pragmatic, it's free, it works in any industry, and it's well-regarded by auditors and regulators in both North America and APAC. The GenAI profile is honestly one of the best pieces of public-sector writing on AI risk I have read anywhere.

Why it isn't enough on its own: you can't get certified to NIST AI RMF. You can align to it, you can document your alignment, you can point to it in a board paper. You cannot wave a certificate at a procurement team. That's where the next one comes in.

2. ISO/IEC 42001: The Certifiable Wrapper

ISO/IEC 42001 is the first international standard for an AI Management System, published in December 2023 (ISO 42001 overview). It is certifiable. Independent auditors can issue a formal certificate that clients, regulators, and enterprise procurement teams will recognise. That is the entire point of it.

Structurally, 42001 will look familiar to anyone who has worked with ISO 27001. The management system clauses are essentially the same shape: leadership, planning, support, operation, performance evaluation, improvement. What makes it an AI standard is Annex A, a set of 38 controls spanning AI policy, resources, impact assessment, development, operation, third-party relationships, and customer-facing obligations. If you already run ISO 27001, 42001 slots alongside it without reinventing your governance architecture.
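If it helps to picture the gap analysis, here's the shape of the exercise: index whatever controls you already run against the Annex A themes and see what's left uncovered. A sketch only; the theme names follow the list above, and the internal control IDs are hypothetical.

```python
# Index existing controls against the ISO/IEC 42001 Annex A themes named
# above. The "CTRL-..." identifiers are invented for illustration.
annex_a_mapping = {
    "AI policy":                 ["CTRL-001 AI acceptable-use policy"],
    "Resources":                 ["CTRL-014 model and data asset register"],
    "Impact assessment":         ["CTRL-022 AI impact assessment template"],
    "Development":               ["CTRL-031 pre-deployment evaluation gate"],
    "Operation":                 ["CTRL-040 production monitoring and logging"],
    "Third-party relationships": ["CTRL-052 vendor AI due-diligence checklist"],
    "Customer obligations":      [],  # nothing mapped yet: this is a gap
}

# A gap is any Annex A theme with no mapped control.
gaps = [theme for theme, controls in annex_a_mapping.items() if not controls]
print(gaps)  # ['Customer obligations']
```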

Here's where I need to speak plainly. In APAC, ISO 42001 certification is where SOC 2 was for SaaS in around 2019. Only a handful of organisations have achieved it. Most have never heard of it. The few that have are already being asked about it by large enterprise and public-sector buyers. Within the next 18 to 24 months, for anyone selling to a big customer with a functioning third-party risk team, it will become a door-opener, and in some tenders a straight-up prerequisite.

For consultancies and internal transformation teams, this is a genuinely lucrative service line: readiness assessment, gap remediation, pre-audit, certification support. Advisors who can actually deliver 42001 readiness across a real AI portfolio are rare right now. If you build that capability in the next year, you will be in demand.

One honest caveat. A certificate is not a moat. It's a floor. I've seen organisations get a shiny ISO 27001 badge and still have crap security, so don't mistake the certificate for the work. The point of 42001 is the management system it forces you to build, not the logo on the website.

3. Australian Voluntary AI Safety Standard: The Local Signal

The Australian Voluntary AI Safety Standard dropped in September 2024 (DISR standard). Ten guardrails: accountability, risk management, data governance, testing, human oversight, transparency for end users, contestability, supply chain transparency, stakeholder engagement, and conformity plus documentation.

If you squint, you will notice the ten guardrails were designed to align cleanly with NIST AI RMF and ISO/IEC 42001. That's not accidental. DISR explicitly mapped their guardrails to both standards in the accompanying guidance, which means any decent implementation of NIST or 42001 gets you most of the way there (DISR guidance). Awesome! Less work for you.
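To show the kind of crosswalk I mean, here's a rough one in code. This is my shorthand, not DISR's official mapping table, so treat every row as indicative rather than authoritative.

```python
# Illustrative only: a rough crosswalk from the ten Australian guardrails
# to the NIST AI RMF function(s) that do most of the work for each.
# This is the author's shorthand, not DISR's published mapping.
guardrail_to_nist = {
    "accountability":               "Govern",
    "risk management":              "Govern / Map",
    "data governance":              "Map / Manage",
    "testing":                      "Measure",
    "human oversight":              "Manage",
    "transparency for end users":   "Govern",
    "contestability":               "Manage",
    "supply chain transparency":    "Map",
    "stakeholder engagement":       "Map",
    "conformity and documentation": "Measure / Govern",
}
```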

The reason to pay attention isn't the content. It's the signal. The Voluntary Standard was released alongside a Proposals Paper for Mandatory Guardrails for AI in High-Risk Settings (DISR proposals paper). Australia is heading toward mandatory AI regulation, and when that arrives, the Voluntary Standard is the on-ramp. Adopting it now is the cheapest way for an Australian business to get ahead of the mandatory regime.

For retail specifically, and this matters for a lot of clients I work with, the "high-risk" definitions under consultation would likely sweep in AI used in hiring and workforce management, credit decisions including BNPL and store credit, and any system with material impact on consumer rights. If you are using AI to screen job applicants, tune loyalty offers based on inferred consumer attributes, or make lending decisions, watch the consultation outcomes carefully. They will shape what your compliance team needs from you in the next 18 months.

4. EU AI Act: The Hard Floor

This is the one your board will have seen a headline about, usually involving a very large fine. Good news: most of the headlines are accurate.

The EU AI Act is the world's first comprehensive horizontal AI law. It entered into force on 1 August 2024. It applies extraterritorially, which in plain English means it reaches you whenever the output of your AI system is used in the EU market, regardless of where your business is headquartered (European Parliament). The retailer in Singapore with a handful of EU customers and an HR AI screening CVs for a Dublin warehouse is in scope. So is the Australian insurer with a London office. So is the Japanese manufacturer whose recommender powers a European storefront.

It's risk-based, and the tiers matter.

Unacceptable risk is straight-up banned as of 2 February 2025. This bucket captures social scoring by public authorities, manipulative AI, emotion recognition in workplaces or schools, certain kinds of biometric categorisation based on protected attributes, and predictive policing. If any part of your business is doing any of this, stop reading and call your lawyer. (European Commission FAQ)

High risk is where most enterprise pain sits. This tier carries strict obligations: a documented risk management system, strong data governance, logging, transparency to users, human oversight, conformity assessment before market placement, and registration in an EU database. The high-risk obligations go live on 2 August 2026.

[Figure: EU AI Act risk tier pyramid: unacceptable, high, limited, minimal]

For retail, the categories most likely to bite are AI used in employment and HR (CV screening, performance management, task allocation), and creditworthiness assessment. Biometric categorisation based on protected characteristics is outright banned. Recommender systems for most retailers sit under the Digital Services Act rather than the AI Act, but the overlap is real and worth a targeted review.

Limited risk systems face transparency obligations. If your chatbot is an AI, you tell the user. If your image is a deepfake, you label it. Not hard, but you need to actually do it.

Minimal risk is everything else. No obligations. The vast majority of AI systems sit here, which is a fact that rarely gets reported because it isn't scary.

Now the bit that makes boards pay attention. Penalties. Up to €35 million or 7% of global annual turnover, whichever is higher, for prohibited practices. Up to €15 million or 3% for most other breaches. Up to €7.5 million or 1% for supplying incorrect information to authorities. Those are worst-case maximums, not opening gambits, but the drafters did not choose those numbers by accident.
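If you want to make the board sit up, do the arithmetic against your own turnover. A trivial sketch of the "whichever is higher" rule, using the maximums above and an assumed EUR 2 billion global annual turnover:

```python
def max_penalty(turnover_eur: float, cap_eur: float, pct: float) -> float:
    """EU AI Act fines are 'up to X or Y% of global annual turnover,
    whichever is higher'. Returns the theoretical ceiling, not a likely fine."""
    return max(cap_eur, pct * turnover_eur)

# Worked example for a business with EUR 2bn global annual turnover:
prohibited = max_penalty(2_000_000_000, 35_000_000, 0.07)  # EUR 140m ceiling
other      = max_penalty(2_000_000_000, 15_000_000, 0.03)  # EUR 60m ceiling
misinfo    = max_penalty(2_000_000_000, 7_500_000, 0.01)   # EUR 20m ceiling
```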

One more piece of the timeline most people miss. The obligations for high-risk AI embedded in products already covered by existing EU product safety legislation (things like medical devices, machinery, toys, and aviation equipment) kick in on 2 August 2027. If you manufacture physical products with embedded AI, you have an extra year, and you will need every day of it.

How the Four Frameworks Fit Together

This is the part I spend most of my boardroom time drawing on whiteboards, so let me try to do it in prose.

Think of it as four layers stacked on top of each other.

The NIST AI RMF is your operating manual. It is the controls and processes that your practitioners actually run. It lives with your data and engineering teams.

ISO/IEC 42001 is your certifiable wrapper. It is the management system that proves you run the controls. It lives with your risk, compliance, and assurance functions.

The Australian Voluntary AI Safety Standard is your local alignment and readiness signal. Adopting it positions you for the mandatory regime that is coming, and it gives you something concrete to point at when your ANZ regulators start asking questions.

The EU AI Act is the hard regulatory floor. Mandatory, enforceable, backed by fines, already partially live. If you have any EU exposure at all, it sets the absolute minimum.

In a sensible AI governance programme, you build once and map four ways. Pick NIST AI RMF as your control framework. Map its outputs to the 38 Annex A controls in ISO 42001 if certification is on your roadmap. Cross-reference the 10 Australian guardrails in your internal policy documents. Run a targeted EU AI Act applicability assessment on every use case that touches the European market. That single artefact answers almost every regulator, auditor, and board question you will face for the next two to three years.
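In practice, "build once and map four ways" is literally one register with four columns. Here's a single illustrative row; every field name is mine, and the classifications shown are examples, not advice.

```python
# One register row, four framework views. A sketch of the "build once,
# map four ways" artefact; all field names and values are illustrative.
register_entry = {
    "use_case": "CV screening assistant",
    "nist_rmf": {
        "map": "done",
        "measure": "quarterly bias evaluation",
        "manage": "human review gate on rejections",
    },
    "iso_42001_annex_a": ["impact assessment", "operation", "third-party"],
    "au_guardrails": ["accountability", "testing", "human oversight",
                      "contestability"],
    "eu_ai_act": {
        "tier": "high",
        "deadline": "2026-08-02",
        "basis": "employment use case",
    },
}
```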

One programme. Four frameworks. Done properly, each reinforces the others. Done badly, you end up with four separate programmes, four separate sets of documentation, and four separate arguments with your CFO about budget.


What Leaders Should Actually Do in the Next Ninety Days

Enough theory. Here is the shortlist I give clients, adapted for a reader who isn't sitting across a table from me.

1. Build an AI inventory. You cannot govern what you cannot see. Every AI use case. Every model. Every vendor that wedges a model into your stack without telling you. Every shadow deployment your marketing team stood up on their corporate credit card. Until you have this list, nothing else you do is real. Most of the organisations I walk into think they have 5 AI use cases. They have 40.

2. Classify against the EU AI Act risk tiers. For each use case, decide: unacceptable, high, limited, or minimal. Be honest. If your HR team is screening CVs with a third-party tool, that's high-risk. If your marketing team is using a foundation model to personalise emails, it's probably limited risk. Write it down. Get Legal to sign off. (There's a rough code sketch of this triage just after this list.)

3. Stand up a lightweight governance structure. You do not need a committee of 20. You need named accountability. A senior executive who owns AI risk. A practitioner who runs the control framework. A legal partner who owns the regulatory interpretation. Three people, clear RACI, monthly touchpoint.

4. Pick NIST AI RMF as your control framework. It's free, it's good, your auditors like it, and it maps cleanly to everything else.

5. Decide whether ISO 42001 certification is in your 2026 or 2027 plan. If you sell to regulated enterprise or the public sector, it probably should be. Budget for readiness work now.

6. Adopt the Australian Voluntary AI Safety Standard publicly if you have an ANZ presence. It's cheap, it signals the right thing to regulators, and it gets you ahead of the mandatory regime.

7. Run a targeted EU AI Act applicability assessment on your top five AI use cases. You are looking for two things: any use case in the prohibited bucket (stop it immediately), and any use case in the high-risk bucket (prepare for 2 August 2026).
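As promised in step 2, here is a deliberately crude sketch of what the inventory-plus-triage pass looks like in code. The categories and rules are placeholders I've made up for illustration; the real classification belongs to your legal team.

```python
# First-pass EU AI Act tier triage over an AI inventory. Category names
# and rules are illustrative placeholders, not a legal determination.
HIGH_RISK_CATEGORIES = {"hr_screening", "workforce_management", "credit_decision"}
PROHIBITED_CATEGORIES = {"social_scoring", "workplace_emotion_recognition"}

def triage_eu_tier(category: str, eu_exposure: bool) -> str:
    if not eu_exposure:
        return "out of scope (verify: do outputs reach the EU market?)"
    if category in PROHIBITED_CATEGORIES:
        return "unacceptable: stop it immediately and call your lawyer"
    if category in HIGH_RISK_CATEGORIES:
        return "high: prepare for 2 August 2026"
    return "limited/minimal: confirm transparency duties with Legal"

inventory = [
    {"name": "CV screener", "category": "hr_screening", "eu_exposure": True},
    {"name": "Email personalisation", "category": "marketing", "eu_exposure": True},
]
for item in inventory:
    print(item["name"], "->", triage_eu_tier(item["category"], item["eu_exposure"]))
```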

None of this is rocket science. It is, however, work, and it doesn't do itself while you're in budget meetings.

A Word for the Vendors in the Room

If you build or sell AI, and you are reading this and thinking "this is somebody else's problem," I have news for you. Your enterprise customers are about to start asking you very specific questions. They will want your EU AI Act classification. They will want your ISO 42001 roadmap. They will want your NIST AI RMF mapping. They will want to see your data governance, your testing regime, your incident response plan, and your model documentation.

The vendors who have this ready in the next six months will win procurement cycles. The ones who don't will be quietly removed from shortlists and never told why. I have sat on both sides of those meetings. The difference between the winning vendor and the losing vendor was preparedness, not product.

The Bottom Line

AI legislation has moved from "interesting reading" to "the thing your board asks about every quarter" faster than any regulatory trend I've watched in twenty years. Four frameworks. Different origins, aligned trajectories. Enough fines attached to make serious boards sit up.

The good news is that the work is tractable. Build one programme, map four ways, and you will be ahead of 90% of your competitive set. The executives who get this right over the next year will find it turns from a compliance cost into a genuine commercial asset, the same way ISO 27001 and SOC 2 did a decade ago.

The ones who wait will be explaining to their board in August why their biggest European customer just terminated a contract, or why a regulator in Canberra just opened an inquiry. Neither is a fun conversation.

So tell me: where is your organisation actually at with this? Have you started the AI inventory? Have you picked a framework? Is anyone in your business losing sleep over 2 August 2026, or is that still somebody else's problem?

I'm especially interested in what APAC leaders are seeing on the ground. The noise coming out of Brussels and Washington is deafening. The signal from Sydney, Singapore, and Wellington is quieter and often more useful.


About Robin Leonard

Partner at Xenai Digital and APAC's leading enterprise Salesforce consultant with 250+ enterprise transformations.

Topics: AI Governance, EU AI Act, ISO 42001, NIST AI RMF, Australian AI Safety Standard, Risk & Compliance, Enterprise AI
