To Be Truly Agentic, Your Organisation Needs a Shared Brain
Most enterprise Claude rollouts are 220 seats, an $80,000 invoice, and one analyst in finance who's using it like a very expensive spreadsheet. Here's what good looks like, what bad looks like, and how to govern the thing without sucking the life out of it.
What Most People Get Wrong About AI in Organisations
If you spend any time on YouTube these days, the algorithm will serve you an array of solopreneurs telling you how they replaced their entire team with Claude, an MCP server, and a Notion database. Most of these creators are running an audience of one. Themselves. Their workflow involves no compliance officer, no shared customer data, no Salesforce admin asking awkward questions about field-level security, and no CFO asking why the inference bill went up four hundred percent last quarter.
That content is fun. It is also almost completely useless if you run, sit on the executive of, or report to the board of a medium-to-large organisation.
The conversation that nobody is having on YouTube is the one I'm having every other week with CIOs, transformation leads, and Heads of Data: how do you actually roll AI out across an organisation of fifty to two thousand people without it becoming either a) a very expensive version of ChatGPT that nobody uses properly, or b) a sprawling mess of unsanctioned automations that pipes your customer PII through somebody's personal API key on a Thursday night?
Both outcomes are extremely common. According to research published in early 2026, seventy-nine percent of IT leaders have encountered unauthorised AI deployments inside their own organisations, and shadow AI is now adding an average of US$670,000 to breach costs (The Hacker News). Worker access to AI rose fifty percent in 2025, yet only one in five companies has a mature governance model to oversee how it is actually being used (Vectra AI). MIT's Project NANDA found ninety-five percent of generative AI pilots in enterprise are still delivering zero P&L impact (Fortune). Gartner expects forty percent of agentic AI projects to be cancelled by the end of 2027 because of "escalating costs, unclear business value, or inadequate risk controls" (Gartner).
That isn't a model quality problem. The models are extraordinary. It is a deployment, governance, and operating-model problem. And if you're an enterprise leader, it is your problem.
Jack Dorsey Was Right About the Shared Brain. He Was Just Wrong About Twitter.
Years ago, Jack Dorsey famously called Twitter "the closest thing we have to a global consciousness" (X.com). It was a slightly unhinged thing to say about a website best known for celebrities arguing with sandwich chains, but the underlying instinct was right. Humans are quite good at making knowledge in our individual heads. We are spectacularly bad at making it accessible to anyone else.
In May 2024, Dorsey conceded he'd been wrong about which platform would deliver on that vision. _"I once thought Twitter was the closest form of global consciousness,"_ he said. _"Now it seems the corporate AI models have become that. They have far more access to public and private thoughts and questions than any social platform ever did"_ (Entrepreneur).
More recently, on the Sequoia podcast, he went further again, arguing that any company can now be reorganised around an AI "intelligence layer" at the centre, with a small ring of humans at the edge, and that this gives every company the chance to operate as a "mini-AGI" (Sequoia Capital).
You can take or leave the visionary cosplay. The structural insight matters. The point of putting Claude into your organisation isn't to give two hundred people a clever chatbot they each use in private. The point is to give the organisation itself a brain that every individual can plug into, that learns from how the work is actually done, and that compounds as more of the company connects to it.
Most organisations are doing the opposite. They've issued Claude seats. They've sent a launch email. They've rolled out a one-hour training. Then they've gone back to their normal jobs and wondered why the bill keeps growing while the productivity numbers don't.
_A shared brain is not a licence. It is an operating decision._
What Good Looks Like
Let me get specific, because I think the marketing copy on this stuff has gone sufficiently fluffy that we owe each other concrete examples.
Agentic BI on top of a data warehouse. One client wired Claude into Snowflake via a vetted MCP connector, with row-level security inherited from the warehouse. Internal staff can now ask questions like, _"What's our gross margin on the Sydney CBD store over the last six quarters, and what's driving the change?"_ and get a defensible answer back, with the SQL it ran, in about twenty seconds. Before, that question went to the analytics team, joined a queue, and came back as a static PowerPoint two and a half weeks later. Same answer. Different operating tempo. Cost per query is fractional. The cost of not answering the question used to be invisible and enormous.
The trick wasn't the model. The trick was the governance. They built an internal semantic layer so the model didn't hallucinate column names. They logged every query, every prompt, and every result. They restricted the dataset by department. They trained finance, ops, and the exec team on how to push back when the model gives them rubbish (it sometimes does). And they gave the data team a kill switch they could use without lodging a JIRA ticket.
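To make the guardrail concrete, here's a minimal, hedged sketch of a semantic-layer check plus query audit log. The table, column, and function names are all hypothetical; a real deployment would generate the allowlist from the warehouse's governed metadata rather than hard-coding a dict.

```python
import re

# Hypothetical semantic layer: the only table and columns the model may
# reference. In production this comes from governed warehouse metadata,
# not a hard-coded dict.
SEMANTIC_LAYER = {
    "store_margins": {"store_id", "region", "quarter",
                      "gross_margin", "revenue", "cogs"},
}

SQL_KEYWORDS = {"select", "from", "where", "group", "by", "order",
                "and", "or", "sum", "avg", "as", "limit", "desc", "asc"}

audit_log = []  # every query attempt is recorded, allowed or not

def validate_and_log(user: str, sql: str) -> bool:
    """Reject model-generated SQL that references anything outside the
    semantic layer, and append every attempt to the audit trail."""
    tokens = set(re.findall(r"\b[a-z_]+\b", sql.lower()))
    known = set(SEMANTIC_LAYER) | set().union(*SEMANTIC_LAYER.values())
    unknown = tokens - known - SQL_KEYWORDS
    allowed = not unknown
    audit_log.append({"user": user, "sql": sql, "allowed": allowed,
                      "unknown_identifiers": sorted(unknown)})
    return allowed
```

The useful property is that a hallucinated column name fails closed, and the failed attempt still lands in the audit log where the data team can see what the model tried to do.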
One client hooked OpenAI Codex up to their Salesforce org. This one's my favourite. Salesforce dev is, traditionally, a slow-moving, ticket-driven affair. Their team has now wired Codex into their CI pipeline and into the Salesforce metadata API. The model writes Apex, generates test classes that actually exceed the seventy-five percent code coverage requirement, opens pull requests, runs deploys to a sandbox, runs the regression suite, and only escalates to a human when something legitimately needs judgement. They've gone from a three-week change cycle to a two-day change cycle for non-material work. Deloitte and the New York Stock Exchange are doing the same thing at much larger scale, with NYSE's CTO openly describing his organisation as having _"rewired our engineering process"_ with Claude Code, building agents that can take an instruction from a Jira ticket all the way to a committed piece of code (Anthropic Economic Index).
The trick again wasn't the model. It was that nothing got deployed without going through the existing review pipeline, the existing test suite, and an existing senior engineer who could veto in one click. The agent did the typing. The humans kept the keys to production.
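The veto-and-coverage gate reduces to a few lines of pipeline logic. This is an illustrative sketch, not the client's actual pipeline: the 75 percent figure mirrors Salesforce's Apex code-coverage floor for production deploys, and the "material" vs "non-material" labelling step is an assumption.

```python
def deploy_gate(coverage_pct: float, tests_passed: bool,
                materiality: str, human_approved: bool) -> str:
    """Agent-authored changes only ship when the existing pipeline says
    so: tests green, coverage over the floor, and a human veto on
    anything labelled material."""
    if not tests_passed:
        return "blocked: regression suite failed"
    if coverage_pct < 75.0:
        return "blocked: Apex coverage below the 75% floor"
    if materiality == "material" and not human_approved:
        return "escalate: senior engineer sign-off required"
    return "deploy: promote to sandbox"
```

The agent does the typing; the gate makes sure it never does the deciding on anything material.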
How I personally use the Claude API. None of these are revolutionary, but they're worth mentioning because they're the right shape for a small team or an individual who wants to start small. My OpenClaw agent, who I've named Steve and who I cannot in good conscience continue defending at dinner parties, runs locally and uses the Claude API to do things like draft this article, push commits to my website repo, and pull data out of CSVs. The chat agent on robinleonard.co also runs on the Claude API and answers visitor questions about my services, with a clear handoff to me when somebody actually wants to talk to a human.
In each case, I'm paying for inference on the API rather than seats. I have one billing line, one set of audit logs, and one place to revoke access if anything goes sideways. That same architectural decision scales up. If you're an enterprise leader thinking about Claude, the question isn't _"should we buy seats?"_ The question is _"where does inference live, who controls it, and what's it allowed to touch?"_
What Bad Looks Like
The classic bad implementation has four flavours, and most large enterprises are running at least two of them simultaneously.
Flavour one: Claude as expensive ChatGPT. You bought Claude for Enterprise. You distributed seats. You sent a launch email. Six months later, the data shows ninety percent of your usage is people asking it to summarise PDFs and write better-sounding emails to their colleagues. Useful. Not transformative. Definitely not worth the procurement effort.
Flavour two: shadow AI sprawl. You didn't buy Claude for Enterprise. Or you did, but only for a small team. So your finance team is running it through one consumer subscription, your marketing team has wired up an unsanctioned automation through Zapier and somebody's personal API key, and your engineering team is using Cursor with a credit card the CTO hasn't reviewed in eight months. You have no consolidated audit trail, no DLP, no idea what data has been sent where. This is the configuration that contributed to that US$670,000 in additional breach cost.
Flavour three: agentic cowboy projects. Some bright spark in the business built an agent that, when triggered, fires off emails to customers, updates CRM records, and books meetings. They didn't tell IT. They didn't tell legal. The agent is running on a developer's laptop at home, using their personal Anthropic API key, and writing into your production Salesforce org via an integration user that has system administrator permissions because that was easier than configuring a profile. When that breaks, and it will, the answer to _"who owns this?"_ will be silence followed by an HR conversation.
Flavour four: the procurement bear hug. You did the responsible thing. You set up a centralised Claude programme. You also set up an approval process so onerous that nobody can use it for anything spontaneous, you blocked all MCP connectors, you locked the model to a single use case, and you've now achieved compliance and zero adoption. Congratulations. You've built the world's most secure off switch. People are using ChatGPT on their phones now and you can't see any of it.
The right answer is none of those four. The right answer is a deliberately designed, centrally governed, broadly accessible shared brain, deployed against the workflows that actually matter, with guardrails that allow everybody to sleep at night.
The Governance Layer Nobody Wants to Build
Here's the part of the article that is least fun and most important. If you take nothing else away from this piece, take this section.
Anthropic has, as of 2026, a proper enterprise governance posture that didn't exist eighteen months ago. The Claude Enterprise plan now includes SSO, SCIM, audit logs, custom data retention, the Compliance API for programmatic usage and content access, role-based access controls, group spend limits, expanded OpenTelemetry support, and per-tool connector controls (TechRepublic, Anthropic, 9to5Mac). Data in transit is TLS 1.2 or higher, data at rest is AES-256, and your data is not used for training under the Enterprise terms. The platform has SOC 2 and ISO certifications. Functionally, this is now in roughly the same posture as the rest of the enterprise SaaS stack.
That's necessary, but it's not sufficient. The bigger governance gap, the one that causes the real incidents, is the Model Context Protocol layer. MCP, which Anthropic open-sourced in November 2024 and which has since become the de facto integration standard for agentic AI, defines how models talk to tools and data sources. What it does not define is who is allowed to call what, when, with whose data, and with what authority (Kiteworks).
That distinction is the difference between a working shared brain and an incident report. Security researchers have already documented prompt injection, tool permission exploits, and a delightfully named Tool Poisoning Attack class targeting MCP deployments (arXiv, tray.ai). If you let your business teams install MCP servers from random GitHub repos, you are giving anyone with a publishable connector and a bit of patience a foothold inside your agentic stack.
The pragmatic enterprise pattern emerging is an internal MCP registry. You curate a list of vetted connectors. You block direct installation from public sources. You run new connectors through an automated and manual security review, including SBOM analysis, malware scans, and a check that they don't quietly exfiltrate everything they see. Cloudflare and Strategy.com both have published reference architectures for this if you want a starting point (Cloudflare, Strategy.com).
This is the part nobody on YouTube is talking about. It is also the part that decides whether your AI programme is worth the budget or whether it's a Gartner statistic in eighteen months.
How to Actually Create a Shared Brain
Right. Practical guidance section. If you're rolling AI out into a real organisation, this is what I'd do.
Pick your poison. Anthropic now sells Claude in three meaningfully different shapes: Claude Cowork (the desktop agentic product for knowledge workers, file work, and connector-driven tasks), Claude Code (the terminal-and-IDE tool for engineering teams, now bundled into the business and enterprise plans), and the Claude API (for everything you build yourself, including in-app chat, agents, batch processing, and bespoke integrations). Most organisations need a combination of all three. Decide which workforces get which, and stop pretending one product covers everyone. And Anthropic isn't the only shop in town: OpenAI has Codex, Google has Vertex AI, and new enterprise-friendly harnesses are shipping constantly.
Centralise inference, decentralise use. Every byte of Claude inference your organisation pays for should flow through a single, controlled commercial relationship. That gives you one consolidated bill, one audit trail, one DLP boundary, and one security posture. Inside that perimeter, let teams build. Outside it, have a clear policy. Shadow inference is shadow IT with a sharper edge.
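A minimal sketch of what "centralise inference" can look like in code, assuming a thin internal gateway in front of whatever API client you actually use. `call_model`, the team names, and the budgets are all placeholders, not real services.

```python
import time

# Illustrative daily token budgets per team. A real gateway would pull
# these from config and reset them on a schedule.
TEAM_BUDGETS = {"finance": 500_000, "engineering": 2_000_000}
team_usage = {team: 0 for team in TEAM_BUDGETS}
AUDIT_LOG = []

def gated_inference(team: str, prompt: str, est_tokens: int,
                    call_model=lambda p: f"<model response to: {p}>"):
    """Single choke point for all inference: one audit trail, one
    budget enforcement point, one place to revoke access.
    `call_model` stands in for your actual API client."""
    if team not in TEAM_BUDGETS:
        raise PermissionError(f"{team} has no sanctioned inference access")
    if team_usage[team] + est_tokens > TEAM_BUDGETS[team]:
        raise PermissionError(f"{team} is over its daily token budget")
    team_usage[team] += est_tokens
    AUDIT_LOG.append({"ts": time.time(), "team": team, "tokens": est_tokens})
    return call_model(prompt)
```

Everything inside the perimeter builds freely against `gated_inference`; everything outside it has no key at all. That's the whole policy.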
Build an internal MCP registry. Ban anything else. Curate the connectors your organisation is allowed to use. Salesforce, Snowflake, Box, Confluence, Jira, Workday, ServiceNow, Microsoft 365, Google Workspace, Tableau, and the half dozen other things your business actually runs on. Wrap each connector with the same row-level and field-level security your existing systems already enforce. Put new connectors through a security review with the same seriousness you'd put a new SaaS vendor through. Block public MCP installs. If you don't, somebody else will install them anyway.
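One way to sketch the registry itself, with hypothetical connector names and an assumed internal artifact mirror (`git.internal/...`); the point is that installation fails closed on anything unvetted or publicly sourced.

```python
from dataclasses import dataclass, field

INTERNAL_MIRROR = "git.internal/"  # assumed internal artifact host

@dataclass
class VettedConnector:
    name: str
    source: str          # pinned internal mirror ref, never a public repo
    sbom_reviewed: bool
    malware_scanned: bool
    scopes: set = field(default_factory=set)

# The registry is the only installable surface. Entries are illustrative.
REGISTRY = {
    "snowflake": VettedConnector("snowflake",
                                 "git.internal/mcp/snowflake@v1.4",
                                 True, True, {"read"}),
    "jira": VettedConnector("jira", "git.internal/mcp/jira@v2.0",
                            True, True, {"read", "write"}),
}

def can_install(connector: str, origin: str) -> bool:
    """Fail closed: an unknown connector, a public origin, or an
    incomplete security review all block the install."""
    entry = REGISTRY.get(connector)
    if entry is None or not origin.startswith(INTERNAL_MIRROR):
        return False
    return entry.sbom_reviewed and entry.malware_scanned
```

The review metadata lives next to the connector, so "has this been through SBOM analysis and a malware scan" is a lookup, not an archaeology project.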
Connect to the warehouse, not to spreadsheets. The single highest-leverage move I've watched clients make is wiring their AI harness (e.g. Claude) into the data warehouse. Snowflake, BigQuery, Databricks, Fabric, take your pick. The minute the model has clean access to governed, semantic-layered data with the right permissions, your finance team, ops team, and exec team can ask English-language questions and get defensible answers. This is the thing Dorsey was actually pointing at. This is your shared brain.
Define personas, not licences. Don't issue seats per cost centre. Define a small number of personas (engineer, analyst, marketer, exec, customer-facing, ops, support) and give each persona a default set of tools, connectors, and prompt scaffolding tuned for their work. This dramatically reduces the cold-start problem. Most users don't fail to use Claude because they don't have a seat. They fail because they don't know what to do with the empty text box.
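A persona can be as simple as a config object that provisions connectors, tools, and starter prompts on day one. Everything below (persona names, tool names, prompt templates) is illustrative, not a product feature.

```python
# Illustrative persona definitions: each persona lands with a working
# toolkit and worked starter prompts, not an empty text box.
PERSONAS = {
    "analyst": {
        "connectors": ["snowflake", "tableau"],
        "tools": ["sql_runner", "chart_export"],
        "starter_prompts": [
            "Explain the gross margin movement on {store} over the last {n} quarters.",
        ],
    },
    "engineer": {
        "connectors": ["jira", "github"],
        "tools": ["claude_code"],
        "starter_prompts": [
            "Draft a test class for {apex_class} targeting over 75% coverage.",
        ],
    },
}

def provision(user: str, persona: str) -> dict:
    """Provision by role, not by cost centre."""
    if persona not in PERSONAS:
        raise ValueError(f"unknown persona: {persona}")
    return {"user": user, "persona": persona, **PERSONAS[persona]}
```

The starter prompts matter more than the tools. They're the difference between a user who opens the product once and one who builds a habit.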
Measure outputs, not seats. Seat utilisation is a vanity metric. The metric that matters is what your people are now able to do that they couldn't do before. Cycle time on a recurring report. Time to first answer on a customer enquiry. Lines of production code shipped per engineer per week without an increase in defect rate. Pick three, instrument them, and benchmark before and after. If you can't show movement in twelve weeks, change the approach.
Train the human side properly. Especially the pushing-back bit. Most internal AI training I've seen is one webinar on prompt engineering, then crickets. The actual skill people need is not better prompts. It's the discipline to read the model's output critically, push back when it's wrong, and stop the agent before it does something stupid. The model is trained to be agreeable. Agreeable is not the same as correct. The corrective is human judgement, applied consistently. Bake that into your training, your KPIs, and your culture.
Have a kill switch and a real audit trail. Every deployed agent, every connector, every API key needs an owner, a documented purpose, a kill switch that can be triggered without legal sign-off, and an audit log that an incident response team could actually use. If you can't, in five minutes, tell me what your finance team's Claude agents are touching today, you don't have governance. You have a hope.
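A minimal sketch of what that register can look like: every agent carries an owner, a purpose, and a kill switch that needs no ticket and no sign-off. All names here are hypothetical.

```python
from datetime import datetime, timezone

AGENTS = {}  # the register: every deployed agent, connector, and key

def register_agent(agent_id: str, owner: str, purpose: str, touches: list):
    """No agent runs without an entry here."""
    AGENTS[agent_id] = {
        "owner": owner, "purpose": purpose, "touches": touches,
        "enabled": True,
        "registered": datetime.now(timezone.utc).isoformat(),
    }

def kill(agent_id: str, triggered_by: str):
    """The kill switch: no legal sign-off, no JIRA ticket, just a
    record of who pulled it and when."""
    AGENTS[agent_id]["enabled"] = False
    AGENTS[agent_id]["killed_by"] = triggered_by

def touching_now(owner_prefix: str) -> dict:
    """The five-minute answer: every live agent under an owner and
    the systems it can reach."""
    return {aid: meta["touches"] for aid, meta in AGENTS.items()
            if meta["enabled"] and meta["owner"].startswith(owner_prefix)}
```

`touching_now("finance")` is the query you should be able to run during an incident. If your current stack can't answer it, that's the gap.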
Review the bill quarterly. Look for the analyst running it like Excel. Usually there's at least one outlier user accounting for thirty to sixty percent of token spend, usually doing something low-leverage they could have done in Excel for free. That's not their fault. That's a sign your operating model isn't directing usage toward the high-value workflows. Fix that, and you'll halve your bill or double your output.
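The outlier check itself is trivial once the spend data sits behind one billing line, which is exactly why centralised inference matters. A sketch, with the 30 percent default mirroring the low end of the range above:

```python
def flag_outliers(tokens_by_user: dict, threshold: float = 0.30) -> list:
    """Return users whose share of total token spend exceeds the
    threshold. The flagged user gets a conversation about workflow
    fit, not a seat revocation."""
    total = sum(tokens_by_user.values())
    if total == 0:
        return []
    return sorted(u for u, t in tokens_by_user.items()
                  if t / total > threshold)
```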
Where Claude Plugs Into Your Existing SaaS Stack
Quick run through the common integration patterns I'm seeing across APAC enterprises in 2026, since this is the question that comes up in every steerco I'm in.
Salesforce. Engineering teams use Claude Code to write Apex, build flows, generate Lightning components, and run regression suites. The pattern works well if your security model is solid and your sandboxes are set up properly.
Slack and Teams. Internal chatbots, channel summarisers, and on-call triage. Most usefully built directly on the Claude API rather than through a third-party wrapper, which means one fewer vendor in your stack.
Microsoft 365 and Google Workspace. Calendar, mail, Drive, SharePoint. Useful for scheduling, document drafting, meeting prep, and post-meeting summaries. Get the connector controls right, because the blast radius of a compromised mailbox connector is large.
Snowflake, BigQuery, Databricks, and Fabric. Agentic BI, as discussed. The biggest unlock for most organisations.
Jira, Linear, Asana, ClickUp, Monday. Story drafting, status summarisation, automated triage, and (if you trust the model and have human review) auto-prioritisation. Particularly useful for product and ops teams.
ServiceNow, Workday, NetSuite. Workflow assistance, policy lookup, automated ticket triage, and HR/finance Q&A. High value, high blast radius. Enforce role-based access carefully.
Confluence, Notion, Box, SharePoint. Knowledge base search and synthesis. Combined with proper retrieval and source attribution, this is where you get genuinely useful institutional memory. Without source attribution, you're inventing facts and citing them confidently.
Customer support tools. Zendesk, Intercom, Freshdesk, Kustomer. Drafting responses, summarising case histories, escalating intelligently. Watch the customer data flow carefully and check your DPAs.
Marketing tools. HubSpot, Marketo, Klaviyo, Braze. Content drafting, segmentation, campaign analysis. Tighter than you'd think, especially if you connect Claude to your CDP.
In every one of those, the question is the same. Who owns the data flowing through the connector? Who reviews the output? What happens when it's wrong? If you can't answer those three, you don't deploy.
Bottom Line for APAC Enterprise Leaders
The organisations getting real value out of agentic AI in 2026 are not the ones with the most seats. They are the ones who have made three quietly unglamorous decisions.
They've centralised inference under one commercial relationship and one set of audit controls. They've connected Claude to their warehouse and their core systems through a curated MCP registry, not a free-for-all. They've built personas, training, and measurement around the workflows that move actual numbers, and they've put humans in the loop on anything that touches a customer or a financial system.
That is dramatically less sexy than the YouTube content. It is also what separates the eighty percent of enterprise AI programmes that quietly fade out from the twenty percent that meaningfully change how a company runs.
If you're a CIO, CTO, Chief Transformation Officer, or Head of Data in APAC and your AI rollout currently looks more like the four flavours of bad above than the patterns that work, the good news is that the fix is not technical. It is organisational. You can start fixing it on Monday.
---
Originally published on LinkedIn on 16 May 2026.
Robin Leonard is a Partner at Xenai Digital, an APAC enterprise Salesforce and AI consultancy. 9x Salesforce certified, with form leading enterprise transformations across Australia, New Zealand, Singapore, Japan, and the broader Pacific. Splits his time between Auckland, Sydney and Tokyo, and rides a Royal Enfield Himalayan 450 when the weather agrees with him. linkedin.com/in/robinleonard1

