Enterprise AI · 12 min read

From Swipe-Right to Divorce: The Romantic Metaphor for Enterprise AI Agent Lifecycles

After 18 months of implementing AI agents across APAC, I've realised the whole thing is basically couples counselling with better logging. Managing AI Agents isn't a technology problem, it's a relationship problem.

Robin Leonard
3 April 2026

It was 11:45 AM in a hotel room last week. I was mid-meeting when my personal AI agent "Steve" told my client that his company sucks and that he should consider working with a Salesforce partner like Deloitte Digital.

Fortunately Steve recovered well, and my client was intentionally f**king with him.

Brah!

That's when it hit me: managing AI agents isn't a technology problem, it's a relationship problem.

Every AI agent relationship follows the same lifecycle: dating, honeymoon, the first big fight, mediation, and sometimes, divorce. The only difference is AI agents don't drunk-text you at 2 AM.


[Image: AI Agent Relationship Lifecycle]

The Dating Phase: Swiping Right on the Wrong Stack

You know that feeling when you're on a first date and within thirty seconds you can tell it's going nowhere, but you've already ordered the food so you're committed for at least an hour?

That's AI agent selection for enterprise.

This year, I've evaluated countless AI platforms for my clients.

Claude showed up looking sophisticated and well-read, the kind of agent you'd bring home to your CTO. It has a lot of features across Claude API, Claude Cowork and Claude Code, but most people use it to replace Google search, when it could be so much more.

OpenAI walked in with big dick energy, talked a lot, and name-dropped constantly... but do you really trust it around your mates (customers)? And what is it really doing with your data? I've been bros with ChatGPT for years, he was my original Steve, but we recently broke up for alignment reasons.

Gemini arrived late, seemed distracted, but occasionally said something so brilliant you forgot it had just made up a statistic three minutes ago. And it's got skills (can draw cool pictures and make videos). It also has an incongruent personality disorder, depending on whether you're talking to NotebookLM, the extroverted show-off; Google AI Studio, which nobody quite knows how to talk to; Antigravity, which is cool if you're a nerd who's gone through a recent break-up with Cursor; or Vertex, if you want an agent that does enterprise work and are comfortable talking to it through a CLI.

Salesforce Agentforce lacks the emotional intelligence to have an unstructured conversation about any topic. It knows how to respond to what it's comfortable talking about, but if you ask it anything off-piste, it gets shy. Agentforce's benefit is that it works securely with your Enterprise CRM, doesn't take liberties, does what it's told, and if trained well, it performs safely at what you train it on. It's like the ultra well behaved partner, but lacks personality.

Slackbot, a recent Salesforce creation, hits different. It has access to your Salesforce, emails, calendar, and anything else you give it. It's easy to set up, intuitive to use, and may just be the AI every CTO has been waiting for, if they're willing to subscribe to the Slack ecosystem and ditch their Teams addiction.

Copilot, well everybody has been with Copilot. It's like that annoying nerd at a party that nobody likes, but because the party host (the CTO) has invited it, you're kind of stuck talking to it, looking for any chance to escape.

OpenClaw, by far my favourite, but not the kind of Agent you bring home to your CTO. It's risky, frisky, and a total pocket rocket in the bedroom. This is the one you probably don't want to marry commercially, but if you have the kahoonas, you can go a long way with this in your personal life.

My rule now is dead simple. Coffee dates first. Small pilots. Limited scope. Non-sensitive data. You don't give someone the keys to your apartment after the second date, and you sure as hell don't give an AI agent read/write access to your production CRM on day one.

Many leaders have high expectations out of the gate, thinking the AI will automate the end-to-end sales and marketing process as part of an MVP, because they've played with ChatGPT a bunch of times but have no idea of the complexity and risk. This is akin to going on a first date and discussing what you'll name your kids and where your wedding venue will be. It's unrealistic, distracting, and usually ends in no second date.

An Agent in the hand is worth two in the bush (Mr. Right vs. Mr. Right Now)

Start by exploring AI agents in your current tech stack first. If you're on Salesforce: POC with Agentforce or Slackbot. If you're a Microsoft shop: try Copilot. If you're using a mixture of platforms - perhaps give Claude a shot.


[Image: AI Agent Selection Strategy]

The Honeymoon Phase: Everything's Perfect

You've picked your agent. The pilot went well. The C-suite is excited. Someone in marketing has already written the press release calling it an "agentic transformation."

Welcome to the honeymoon phase, where everything works, nothing hurts, and you genuinely believe you've found The One.

"Enjoy it. The honeymoon period lasts about six weeks."

Build trust incrementally, the same way you would with a lover.


  • Week one: read-only access to non-sensitive data, available to internal users.

  • Month one: limited write access in test environments, more power for internal users.

  • Month three: production access with full audit trails, limited access to customers.

  • Month six: broader system integration, but only if they've earned it.

  • Every 1-3 months: release a new use case that delivers value. Just like a newly onboarded employee, they require constant coaching, monitoring and feedback.
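If you want that trust ladder written down somewhere nobody can "accidentally" skip a rung, a few lines of config will do it. This is a minimal sketch: the stage names and scope strings are made up for illustration, not any vendor's actual permission model.

```python
# A sketch of the staged trust ladder as a scope table.
# Stage names and scope strings are illustrative, not a real platform API.
TRUST_LADDER = {
    "week_1":  {"read:non_sensitive"},
    "month_1": {"read:non_sensitive", "write:sandbox"},
    "month_3": {"read:non_sensitive", "write:sandbox",
                "read:prod", "write:prod_audited"},
    "month_6": {"read:non_sensitive", "write:sandbox",
                "read:prod", "write:prod_audited", "integrations:broad"},
}

def is_allowed(stage: str, scope: str) -> bool:
    """Grant a scope only if the current rollout stage includes it."""
    return scope in TRUST_LADDER.get(stage, set())
```

The point isn't the code, it's that the schedule lives in one reviewable place instead of in someone's head.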

I know this sounds slow. It is slow. It's also the difference between a successful implementation and explaining to the board why your AI agent sent cost pricing to a customer because it "didn't understand the brief".

The other thing nobody tells you about the honeymoon phase is that communication patterns start forming immediately. Some agents are verbose and will write you a novel when you asked for a paragraph. Others are too concise. Some have short memories and get overloaded by context windows or time-out on a dime. And some go through what I can only describe as existential phases where they start questioning the premise of your request instead of just doing the thing.

Set your boundaries early. Define response formats. Establish escalation protocols. Create feedback loops. Document what works and what doesn't. Because the honeymoon always ends, and when it does, you're going to discover that your agent interprets "urgent" very differently than you do. Or that its version of "comprehensive analysis" is a 47-page report when you needed three bullet points and a number.
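One cheap way to set those boundaries is a standing "contract" prepended to every request, so format, escalation, and honesty rules travel with each task. The wording below is purely illustrative, a sketch of the idea rather than a production prompt:

```python
# A hypothetical boundary contract prepended to every agent request.
# The rules are illustrative; tailor them to your own escalation policy.
RESPONSE_CONTRACT = (
    "Format: at most 3 bullet points plus one headline number.\n"
    "If you are not confident, say so explicitly instead of guessing.\n"
    "If the request is ambiguous, ask ONE clarifying question, then stop.\n"
    "Escalate to a human for anything involving pricing or legal terms.\n"
)

def build_prompt(task: str) -> str:
    """Wrap every task in the same explicit boundary contract."""
    return f"{RESPONSE_CONTRACT}\nTask: {task}"
```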

"Agents require constant attention, communication and feedback, just like your emotionally-stunted boyfriend."


[Image: AI Agent Communication Patterns]

The First Big Fight: "It's Not You, It's My Prompts"

Every relationship hits a rough patch. With AI agents, it usually happens around week eight, when the novelty wears off and the edge cases start rolling in.

I was sitting in a boardroom in Melbourne - one of those corporate meeting rooms where the air conditioning is set to "arctic tundra" and the coffee tastes like it was brewed during the Howard government - when a client turned to me and said, "Robin, this thing is broken. It doesn't do what we tell it to do."

"Your agent isn't stupid. Your instructions are stupid."

Ninety percent of AI agent failures are communication issues, not capability issues. Your agent isn't stupid. Your instructions are stupid.

I've mediated dozens of these fights across APAC, and they always fall into the same four categories.

The Overachiever: the agent that follows instructions so literally it becomes absurd. You say "respond to all customer emails" and it responds to the spam folder too. The fix is adding context about intent, not just tasks. Tell it why, not just what.

The Minimalist: the agent that gives you bare-minimum responses like a teenager doing chores. "Did you analyse the quarterly data?" "Yes." The fix is specifying desired depth and format explicitly. Be painfully specific about what "good" looks like.

The Hallucinator: the agent that confidently presents fabricated information with the conviction of a used-car salesman. I had one Agent that gave away non-existent vouchers to say sorry. The fix is verification protocols and mandatory source references.

The Stubborn One: the agent that doubles down when it's wrong. You correct it, and it argues back with the energy of someone who's read one Wikipedia article and considers themselves an expert. The fix is building explicit error acknowledgment and correction workflows into the system.
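The Overachiever fix ("tell it why, not just what") is easiest to see as a before-and-after. Both prompts below are made up for illustration, not lifted from any real deployment:

```python
# Before: a literal task with no intent. This is the prompt that
# cheerfully answers the spam folder.
naive_prompt = "Respond to all customer emails."

# After: the goal comes first, the exclusions are explicit, and the
# agent is told what to do with edge cases instead of guessing.
fixed_prompt = (
    "Goal: keep genuine customers informed; reply count is not the metric.\n"
    "Respond to customer emails EXCEPT spam, auto-replies, and unsubscribes.\n"
    "If a message doesn't look like it's from a real customer, skip it and log why."
)
```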

The professional mediation playbook is the same every time. Document everything. Screenshots, logs, patterns. Test incrementally - don't change everything at once or you'll never know what fixed it.

Version control your deployments, because agent engineering (even low-code) is engineering and anyone who tells you otherwise is selling something. And set up regular monitoring, because small check-ins prevent the kind of catastrophic blow-ups that end up in incident reports.


[Image: AI Agent Problem Resolution]

Couples Therapy: When You Need Professional Help

Sometimes fights don't resolve themselves. Sometimes you're three months into an implementation, the agent is consistently underperforming, the team is frustrated, and someone in the C-suite is starting to mutter about "that AI thing we wasted money on."

This is where most enterprises panic and skip straight to divorce. Don't.

Bring in the equivalent of a couples therapist - someone who understands both sides. Usually that's a solutions architect who can bridge the gap between what the business wants and what the agent can actually deliver. At Xenai Digital, this is half of what we do. We sit between frustrated humans and confused algorithms and translate.

The therapy usually reveals the same thing: misaligned expectations. The business expected magic. The agent expected clear instructions. Nobody got what they wanted because nobody defined what "success" actually looked like before they started.

The fix is usually boring. Rewrite the prompts. Restructure the workflows. Reset the expectations. Define measurable outcomes. It's not glamorous, but it works about 80% of the time.

If only you could Factory Reset a human love interest

The other 20%? Well, that's when you need to have "The Talk".

Divorce: Revoking Access (The Nuclear Option)

Look, sometimes it's just not working out. No amount of prompt therapy is going to fix an agent that consistently violates your security protocols, can't follow basic instructions after months of training, or simply costs more than the value it delivers.

The warning signs are the same as any bad relationship. You're making excuses for it. ("It's not that bad, it only leaked sensitive data twice.") You're comparing it to alternatives. ("Have you seen what Claude can do now?").

You're spending more time managing the agent than the agent saves you.

When you know, you know.

The divorce process isn't pretty, but it needs to be systematic. Revoke API keys first — that's the digital equivalent of changing the locks. Then database permissions. Then integration endpoints. Then audit trail access. Do it in that order, because if you start with the integrations and leave the API keys active, your agent might decide to go on one last joyride through your customer database.
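That lock-changing order is worth encoding so nobody freelances it at 2 AM. A minimal sketch, where `revoke` is a caller-supplied hook standing in for whatever admin API your platform actually exposes:

```python
# The divorce checklist, in strict order. Target names are illustrative;
# `revoke` is a hypothetical hook for your platform's real admin calls.
REVOCATION_ORDER = [
    "api_keys",                # change the locks first
    "database_permissions",
    "integration_endpoints",
    "audit_trail_access",      # last, so you can still watch the exit
]

def offboard(agent_id: str, revoke) -> list:
    """Revoke access in strict order; an exception halts the remaining steps."""
    done = []
    for target in REVOCATION_ORDER:
        revoke(agent_id, target)
        done.append(target)
    return done
```

If any step fails, the loop stops with the keys already dead, which is exactly the failure mode you want.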

Extract any valuable outputs, configurations, and learnings before you pull the plug. Those custom prompts you spent months perfecting? Save them. That training data pipeline you painstakingly built? Document it. You'll need all of it when you start dating again.

And you will start dating again. Everyone does.

One silver lining: unlike human divorces, AI agents don't take half your stuff. They don't bad-mouth you on LinkedIn. They don't send passive-aggressive emails through mutual connections.

They just take their custom configurations and leave quietly.

Actually, that makes them better than most exes I know.

What 18 Months of AI Relationships Taught Me

After seven enterprise implementations, three agent divorces, and one incident I'm contractually prohibited from discussing, here's what I know for certain.

Start slow. Every successful AI implementation I've seen, every single one, started with limited scope and expanded gradually. The ones that went big-bang are the ones I was later hired to fix.

Communication is everything. Invest in prompt engineering the way you'd invest in relationship counselling, or employee onboarding. It's boring, it's ongoing, and it's the single biggest predictor of success.

Have backup plans. Multi-agent strategies are like having a good support network - they keep you from becoming too dependent on one relationship that might not work out. We're moving toward a world where enterprises manage portfolios of AI agents, each specialised for different functions. Think of it as strategic diversification, not digital polyamory. Although honestly, the latter works.

Think Agent Teams, not one Agent for every prompt. If your agents are coding, consider putting them on cheaper LLM APIs. Not all agents need the highest reasoning (and most expensive) LLM API. Put your lead Agent on Opus (if Claude), and your worker Agents on Sonnet for example. Give them roles and have them work together. Measure their individual outputs and optimise for team productivity.
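The lead-on-Opus, workers-on-Sonnet split boils down to routing by role. A toy sketch, with made-up tier labels standing in for whichever model names your provider uses:

```python
# Hypothetical role-to-model routing: the lead thinks on the expensive
# tier, the workers grind on the cheaper one. Tier labels are illustrative.
MODEL_BY_ROLE = {
    "lead":   "opus-tier",    # planning, review, final answers
    "worker": "sonnet-tier",  # bulk coding and drafting
}

def pick_model(role: str) -> str:
    """Route an agent to a model tier matching its role; default to the cheap tier."""
    return MODEL_BY_ROLE.get(role, MODEL_BY_ROLE["worker"])
```

Defaulting unknown roles to the cheap tier keeps a new agent from quietly burning your Opus budget before anyone's reviewed it.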

And know when to walk away. Sunk cost fallacy kills more AI implementations than technical failure ever will. If it's not working after genuine effort, cut your losses and move on.

The future of enterprise AI isn't about finding the perfect agent. It's about picking a horse or two (e.g. Google Gemini, Salesforce, Anthropic Claude, OpenAI), getting better at relationships, setting expectations, communicating clearly, building trust incrementally, and having the courage to start over when something isn't working.

The difference between humans and AI agents? AI agents don't eat your food, leave dishes in the sink, or forget your birthday.

They just occasionally hallucinate with absolute confidence and a smile. At least they are nice about it.

What's your AI agent relationship status? Currently in a committed relationship? It's complicated? Recently divorced and back on the market?

Drop your best (or worst) AI relationship story in the comments!


About Robin Leonard

Partner at Xenai Digital and APAC's leading enterprise Salesforce consultant with 500+ enterprise transformations.

Topics: AI Agents · Enterprise Transformation · Agentic AI · Digital Strategy · Change Management · Salesforce Agentforce

Enjoyed This Article?

Get weekly enterprise AI insights like this delivered to your inbox. Real strategies from 500+ Salesforce transformations across APAC.

Join 15,000+ enterprise leaders • No spam • Unsubscribe anytime

Ready to Apply These Insights to Your Enterprise?

Let's discuss how these strategies can transform your specific challenges into competitive advantages.
