January 26

AI, automation and agentic systems turn customer experience into a governance problem

The first shift in customer experience thinking is to recognise that experience is a decision problem, not a design problem.

The second shift is more uncomfortable.

As AI, automation and agentic systems become embedded in organisations, those decisions are increasingly made by machines. That changes the nature of customer experience completely.

From decision support to decision authority

Traditional analytics supported human decision-making.

Automation executed clearly defined rules.

Today’s AI systems infer, prioritise and increasingly act.

Agentic systems go further. They:

  • Select actions
  • Adapt pathways
  • Act continuously towards goals

At that point, AI is no longer a tool. It becomes a decision actor.

This is not a technology issue. It is an authority issue.

What this means for the decision/choice framework

Recall the two-layer model from the previous piece:

Decision architecture determines what should happen – the rules, priorities, trade-offs and incentives that shape experience upstream.

Choice architecture determines how those decisions are presented to customers as options.

When humans control decision architecture, accountability is traceable. When something goes wrong, you can identify who decided what and why.

When machines make those decisions, accountability becomes diffuse. The logic is opaque. The trade-offs are implicit. The optimisation is continuous.

This is the shift: AI does not sit in the customer journey. It sits in decision architecture, making the upstream choices that determine what customers experience downstream.

That is why most organisations are asking the wrong question.

The wrong question: Where should AI sit in the journey?

Most organisations ask: Where should AI sit in the customer journey?

That question arrives too late.

It treats AI as a channel or touchpoint problem – a question of where to deploy a capability.

But AI does not belong first in choice architecture. It belongs first in decision architecture.

Asking where AI should sit in the journey focuses attention on the surface layer – on how options are presented to customers. The real question is upstream: Which decisions are we willing to let machines take on our behalf?

This is a governance question, not a design question.

AI belongs first in decision architecture

AI, automation and agentic systems belong first in decision architecture, not in choice design.

This is where organisations must define:

  • Which decisions are automated
  • Which require human approval
  • What data signals are valid
  • How uncertainty is handled
  • Where escalation and override sit
  • What outcomes matter beyond efficiency

If this work is not done upstream, AI will still optimise. It will simply optimise for whatever incentives it finds easiest to satisfy.

And because AI operates at speed and scale, poor decision architecture does not just create bad experiences. It creates bad experiences systematically.
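
To make this concrete, here is a minimal sketch of what encoding those definitions as explicit, reviewable policy might look like. This is illustrative Python, not a prescribed implementation – every name, signal and threshold is an assumption for the sketch.

```python
from dataclasses import dataclass, field

# Illustrative sketch: decision architecture expressed as explicit,
# reviewable policy rather than implicit model behaviour.
# All names and thresholds here are hypothetical.

@dataclass
class DecisionPolicy:
    decision: str                       # which decision this governs
    automated: bool                     # may the machine take it alone?
    valid_signals: set[str] = field(default_factory=set)
    max_uncertainty: float = 0.2        # above this, escalate to a human
    escalation_owner: str = "unassigned"
    outcome_metrics: list[str] = field(default_factory=list)

POLICIES = [
    DecisionPolicy(
        decision="room_pricing",
        automated=True,
        valid_signals={"occupancy", "competitor_rates", "event_calendar"},
        max_uncertainty=0.15,
        escalation_owner="revenue_management",
        outcome_metrics=["revenue", "repeat_booking_rate", "fairness_complaints"],
    ),
    DecisionPolicy(
        decision="loyalty_pricing_exception",
        automated=False,                # always requires human approval
        escalation_owner="customer_director",
        outcome_metrics=["retention", "trust_score"],
    ),
]

def may_act(policy: DecisionPolicy, signals: set[str], uncertainty: float) -> bool:
    """The machine may act alone only when the policy says so, every signal
    used is a valid one, and uncertainty is within the agreed bound."""
    return (
        policy.automated
        and signals <= policy.valid_signals
        and uncertainty <= policy.max_uncertainty
    )
```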

A worked example: Dynamic pricing with AI

Consider dynamic pricing – a common AI application that reveals how poor decision architecture creates systematic damage even when choice architecture looks acceptable.

The scenario: A hotel chain implements AI-driven pricing

A hotel chain replaces manual revenue management with an AI system. The AI has access to:

  • Real-time occupancy across all properties
  • Competitor pricing from rate shopping tools
  • Historical booking patterns by customer segment
  • Local event calendars and demand signals
  • Individual customer browsing and booking history
  • Weather forecasts and seasonal trends

The technology is identical in the two scenarios that follow. What differs is the decision architecture.

Poor decision architecture: Optimise for revenue per available room (RevPAR)

What the decision architecture includes:

The AI is instructed to maximise RevPAR. No constraints are specified beyond basic price floors. The system is allowed to use all available signals. Performance is measured on weekly revenue totals.

What the AI learns:

The system discovers that business travellers booking last-minute have low price sensitivity. It learns that returning customers who’ve booked the same property before will pay premium rates. It identifies that customers using mobile devices during commute hours are likely urgent bookers.

The AI begins:

  • Raising prices aggressively when it detects urgency signals
  • Charging loyal customers more than new customers (they’ve demonstrated commitment)
  • Showing different prices to mobile vs desktop users
  • Holding back lower-priced rooms to create scarcity pressure

What choice architecture shows customers:

A business traveller checks rates on Monday for Thursday: £180. They hesitate, check again Tuesday: £220. They book urgently Wednesday: £280.

A loyal customer who’s stayed four times sees £280. A new customer searching from a desktop sees £180 for the same room, same night.

The choice architecture is clean – professional website, clear pricing, transparent terms. The damage is upstream.

What happens over time:

Revenue climbs for six months. Then:

  • Business travellers begin booking competitors, despite preference for this chain
  • Loyal customers feel punished for loyalty and start shopping around
  • Social media surfaces the mobile vs desktop pricing gap
  • Corporate travel managers remove the chain from preferred lists
  • Revenue declines below the pre-AI baseline

The AI optimised exactly as instructed. The decision architecture simply told it to optimise for the wrong thing.

Good decision architecture: Optimise for revenue within trust constraints

What the decision architecture includes:

The AI is instructed to maximise revenue, but with explicit constraints:

  • Prices can vary by demand and timing, but not by customer identity or device
  • Returning customers must never pay more than new customers for identical bookings
  • Price increases for a specific booking window cannot exceed 15% per day
  • Mobile and desktop pricing must be identical
  • Corporate segment pricing must honour negotiated ranges
  • The system must flag (not execute) any pricing that creates >20% variance for similar customer profiles

Performance is measured on revenue and on repeat booking rates, corporate contract retention, and price fairness complaints.
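
As a sketch, constraints like these could sit as a guardrail layer between the pricing model and the published rate. The cap and variance figures mirror the list above; the function and field names are assumptions, not a prescribed implementation.

```python
# Illustrative guardrail layer around a hypothetical pricing model.
# The constraint values mirror the decision architecture above.

DAILY_INCREASE_CAP = 0.15   # max +15% per day for a booking window
FLAG_VARIANCE = 0.20        # >20% variance vs similar profiles is flagged

def constrained_price(model_price: float,
                      yesterdays_price: float,
                      peer_prices: list[float]) -> tuple[float, list[str]]:
    """Apply the decision-architecture constraints to a raw model price.

    Returns the price to publish plus any flags for human review.
    Identity- and device-based pricing never reach this function:
    those signals are excluded upstream, so the same price is computed
    for every customer and every device."""
    flags = []

    # Cap day-on-day increases at 15%.
    ceiling = yesterdays_price * (1 + DAILY_INCREASE_CAP)
    price = min(model_price, ceiling)

    # Flag (do not execute) pricing that diverges >20% from similar profiles.
    if peer_prices:
        mean_peer = sum(peer_prices) / len(peer_prices)
        if abs(price - mean_peer) / mean_peer > FLAG_VARIANCE:
            flags.append("variance_review")

    return price, flags

# e.g. a raw model price of 230.0 the day after publishing 180.0:
print(constrained_price(230.0, 180.0, [185.0, 190.0]))  # -> (207.0, [])
```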

What the AI learns:

The system still optimises for revenue, but within boundaries. It learns that:

  • Demand-based pricing is acceptable if applied consistently
  • Early booking discounts reward planning and fill rooms predictably
  • Loyal customers respond well to recognition (room upgrades, flexible cancellation) even without price discrimination
  • Mobile users value speed and simplicity more than absolute lowest price

The AI begins:

  • Raising prices as major events approach, but doing so visibly and consistently
  • Offering loyal customers preferential cancellation terms rather than lower prices
  • Showing identical prices across devices but optimising the mobile booking flow for speed
  • Protecting inventory for corporate contracts even when demand spikes

What choice architecture shows customers:

A business traveller checks rates Monday for Thursday: £180. They check again Tuesday: £195 – with a visible note: “Prices rising as availability decreases. Book by midnight Tuesday for £195.” They book Wednesday: £210, having seen the transparent progression.

A loyal customer sees the same £210, but also sees: “As a returning guest, free cancellation until 24 hours before arrival.”

A new customer sees £210 on any device.

What happens over time:

Revenue increases more slowly than under the unconstrained model – about 8% vs 15% in the first six months.

But:

  • Business travel repeat bookings increase
  • Corporate contracts renew at improved rates
  • Loyal customers book more frequently and refer colleagues
  • Price fairness complaints drop to near zero
  • After 18 months, revenue exceeds the unconstrained model and continues growing

The AI optimises within constraints. The decision architecture protects what matters while still pursuing revenue.

What choice architecture could and couldn’t do

In both scenarios, choice architecture could improve the presentation:

  • Show price trends: “This room has been £160-£210 over the past week”
  • Explain variations: “Prices reflect current demand and availability”
  • Offer alternatives: “A similar room is available at our sister property for £165”

But choice architecture cannot repair the fundamental difference.

When decision architecture allows exploitation, choice architecture can only make that exploitation more polite.

When decision architecture includes constraints, choice architecture can make those constraints feel helpful rather than limiting.

The hotel website, the booking flow, the confirmation emails – all of this is choice architecture, and all of it can be excellent or poor regardless of what the AI decides upstream.

But the decision architecture determines whether the experience builds trust or extracts value.

The lesson

AI in decision architecture amplifies intent.

If the intent is “maximise revenue,” the AI will find every possible path to revenue, including paths that destroy trust.

If the intent is “maximise revenue within these boundaries,” the AI will find the optimal path that respects those boundaries.

The quality of the outcome depends entirely on the quality of the decision architecture that governs what the AI is allowed to optimise for.

This is not a failure of AI. It is a failure to design the decision architecture before deploying the capability.

ICONIC as the governance spine for AI-driven decisions

This is where ICONIC becomes essential.

ICONIC provides the structure needed to govern machine-made decisions responsibly:

Investigate defines what data is appropriate, where signals are valid, and where uncertainty remains acceptable rather than hidden.

Customers clarifies whose outcomes matter when AI optimises, making trade-offs between different customer needs explicit rather than emergent.

Opportunities determines what the organisation will pursue and what constraints apply, ensuring AI operates within defined boundaries rather than discovering its own.

Numbers locks intent into incentives and constraints, defining what “success” means in ways that AI can execute without drifting toward unintended outcomes.

ICONIC does not slow AI down. It gives it direction.

Without this discipline, agentic systems become accidental leaders. They make decisions on behalf of the organisation, but without the organisation having decided what those decisions should achieve or what they should avoid.

Choice architecture in an AI-driven world

Choice architecture is where customers encounter machine-made decisions.

AI increasingly shapes:

  • Which options are shown
  • Dynamic pricing or eligibility
  • Content selection and sequencing
  • Timing and channel orchestration

The risk is not that customers notice AI. The risk is that they feel a loss of agency.

When AI determines which options to show

Poor AI-enabled choice architecture:

A streaming service uses AI to curate the homepage for each user. The decision architecture optimises for “viewing time” without constraints. The AI learns that showing familiar genres keeps users watching longer. A user who once watched several crime dramas now sees their entire homepage filled with crime content. Other genres disappear entirely. The user never discovers they’re being narrowed. They simply assume “there’s nothing else good on here” and eventually cancel. The AI successfully optimised for viewing time per session while destroying viewing time over the customer lifetime.

Good AI-enabled choice architecture:

The same streaming service optimises for engagement but constrains the AI to maintain genre diversity. The decision architecture requires that recommendations include at least three different genres, with clear labels showing why each is suggested: “Because you watched Line of Duty”, “Trending in the UK”, “Hidden gems we think you’ll like”. A “show me something different” option actively breaks the pattern. The user feels guided by their history but not trapped by it.

The AI makes recommendations in both cases. The choice architecture determines whether that feels like personalisation or like being put in a box.
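
A minimal sketch of what the diversity constraint might look like as a re-ranking step, assuming the recommender returns a ranked list of items tagged with a genre. The three-genre minimum comes from the example above; everything else is illustrative.

```python
# Illustrative re-ranker: fill the homepage shelf from the model's
# ranking, but enforce the decision-architecture rule that the shelf
# spans at least three genres. Data shapes are assumptions.

MIN_GENRES = 3

def diversify(ranked: list[dict], slots: int = 10) -> list[dict]:
    """Take items in ranked order, reserving the final slots for new
    genres whenever the diversity minimum has not yet been met."""
    shelf, genres = [], set()

    for item in ranked:
        if len(shelf) == slots:
            break
        remaining = slots - len(shelf)
        missing = MIN_GENRES - len(genres | {item["genre"]})
        # Skip an item if taking it would use a slot without adding a
        # genre while too few slots remain to reach the minimum.
        if missing >= remaining and item["genre"] in genres:
            continue
        shelf.append(item)
        genres.add(item["genre"])

    return shelf
```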

When AI determines pricing or eligibility

Poor AI-enabled choice architecture:

An insurance company uses AI to calculate personalised premiums. The decision architecture draws on hundreds of signals, including postcode, browsing behaviour, and device type. A customer receives a quote of £890. No explanation of why. No indication of what might lower it. No visibility into which signals mattered. The customer shops around, finds a similar quote elsewhere, and suspects they’ve been profiled in ways they can’t see or challenge. Trust erodes silently.

Good AI-enabled choice architecture:

The same AI calculates the same premium using the same signals. But the choice architecture reveals the logic: “Your quote: £890. Main factors: base premium (£200), vehicle type (£340), annual mileage (£280), area (£190), no claims bonus (-£120). Reducing your mileage to 8,000 miles could save approximately £80.” The customer may not like the price, but they understand it. They can see what they can control and what they can’t. The AI’s decision becomes defensible rather than mysterious.

The decision architecture and the price are identical. The choice architecture determines whether the customer feels assessed or manipulated.
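
A sketch of that explanation layer, assuming the pricing model can attribute the premium to named factors (for instance via an additive attribution method). The figures match the example above; the interface is hypothetical.

```python
# Illustrative explanation layer over a hypothetical premium model.
# Assumes per-factor contributions are available; names and figures
# come from the worked example.

def explain_quote(base: float, factors: dict[str, float]) -> str:
    total = base + sum(factors.values())
    lines = [f"Your quote: £{total:.0f}. Main factors:"]
    lines.append(f"  base premium (£{base:.0f})")
    for name, amount in sorted(factors.items(), key=lambda f: -abs(f[1])):
        sign = "-" if amount < 0 else ""
        lines.append(f"  {name} ({sign}£{abs(amount):.0f})")
    return "\n".join(lines)

print(explain_quote(200.0, {
    "vehicle type": 340.0,
    "annual mileage": 280.0,
    "area": 190.0,
    "no claims bonus": -120.0,
}))
# Your quote: £890. Main factors:
#   base premium (£200)
#   vehicle type (£340)
#   annual mileage (£280)
#   area (£190)
#   no claims bonus (-£120)
```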

When AI determines content sequencing

Poor AI-enabled choice architecture:

A learning platform uses AI to sequence course content adaptively. The decision architecture optimises for “completion rate”. The AI learns that showing easier content first keeps users progressing. A struggling student gets fed progressively simpler material. They complete modules and feel productive, but they’re not advancing toward their actual goal. They don’t realise the system has quietly lowered expectations. They eventually fail the external exam they were preparing for. The AI optimised completion while abandoning the actual objective.

Good AI-enabled choice architecture:

The same platform uses AI to adapt sequencing but with different constraints in the decision architecture. The system must maintain progress toward the stated learning outcome, not just toward completion. When AI detects struggle, the choice architecture makes it explicit: “You’re finding Module 4 challenging. We can: (a) spend more time on foundations before continuing, (b) continue at current pace with additional support, or (c) accelerate if you’re ready for the exam soon. What works best?” The user sees the adaptation happening and participates in the decision.

The AI detects the same struggle and adapts in both cases. The choice architecture determines whether that adaptation serves the user or the metric.
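
A sketch of the constrained sequencing logic, assuming the platform tracks recent scores and an exam-readiness estimate. All names and thresholds here are illustrative assumptions.

```python
# Illustrative sketch: adaptation stays anchored to the stated learning
# outcome, and visible struggle triggers an explicit choice rather than
# a silent difficulty downgrade.

from dataclasses import dataclass

@dataclass
class LearnerState:
    module: int
    recent_scores: list[float]   # 0.0 - 1.0
    exam_readiness: float        # estimated progress vs the stated goal

def next_step(state: LearnerState) -> dict:
    struggling = sum(state.recent_scores) / len(state.recent_scores) < 0.6

    if not struggling:
        return {"action": "continue", "module": state.module + 1}

    # Struggle detected: surface the adaptation instead of hiding it.
    return {
        "action": "ask_learner",
        "message": f"You're finding Module {state.module} challenging. We can:",
        "options": [
            "spend more time on foundations before continuing",
            "continue at the current pace with additional support",
            "accelerate if you're ready for the exam soon",
        ],
        # Whatever the learner chooses, progress is still measured
        # against exam readiness, not module completion.
        "tracked_metric": "exam_readiness",
    }
```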

When AI determines timing and channel

Poor AI-enabled choice architecture:

A retailer uses AI to optimise when to contact customers. The decision architecture maximises “response rate”. The AI learns that messages sent at 9pm on Sundays get high open rates – customers are relaxed and scrolling. It begins concentrating outreach at this time. Customers begin to resent the intrusion into weekend evenings. They mark messages as spam not because the content is wrong but because the timing feels invasive. The AI optimised for opens while destroying the relationship.

Good AI-enabled choice architecture:

The same AI learns the same pattern but operates within different constraints in the decision architecture. The system must balance response rate with preference signals. The choice architecture includes: “We send updates on Sunday evenings because that’s when most customers prefer them. Want to change your timing? [Choose your preferred day/time].” When users don’t set preferences, the AI chooses respectfully. When they do, the AI honours it even if response rates drop. The user feels the system is working for them, not optimising them.

The AI identifies the same timing opportunity in both cases. The choice architecture determines whether that feels helpful or intrusive.
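
The honouring rule itself is simple to encode, as this minimal sketch shows: a stated preference always beats the learned optimum, even when it costs response rate. Names and defaults are assumptions.

```python
# Illustrative send-time chooser: the learned optimum is a default,
# never an override of a stated preference.

def choose_send_time(user_preference: str | None,
                     learned_optimum: str = "Sunday 21:00") -> str:
    """A stated preference always wins, even if the learned optimum
    would produce a higher response rate."""
    if user_preference is not None:
        return user_preference
    # No preference set: use the learned pattern, and tell the customer
    # why (handled in the message copy, not here).
    return learned_optimum

choose_send_time(None)             # -> "Sunday 21:00" (learned default)
choose_send_time("Tuesday 12:00")  # -> "Tuesday 12:00" (preference honoured)
```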

The amplification effect

What makes AI-driven choice architecture different from human-driven choice architecture is speed and scale.

A human making poor choices in how they present options affects customers one at a time, slowly. The damage is visible before it spreads.

An AI making the same poor choices affects thousands of customers simultaneously, continuously, and at increasing scale. The system learns what works in the short term and doubles down. By the time the damage becomes visible, it is systematic.

This is why AI-enabled choice architecture requires higher standards than human-enabled choice architecture.

When humans present choices poorly, we get bad experiences. When AI presents choices poorly at scale, we get structural loss of trust.

Good AI-enabled choice architecture makes agency visible

The common thread in the good examples is this: customers can see that something is being decided for them, understand the logic, and retain the ability to influence or override it.

Good AI-enabled choice architecture:

  • Explains why options are shown: not full algorithmic transparency, but enough visibility for the customer to understand what drove the choice
  • Uses defaults responsibly: optimises for the customer’s stated goal, not for the easiest metric to satisfy
  • Allows reversal and exploration: gives customers a way to say “not this” or “show me something else” without penalty
  • Signals consequence and progress: helps customers understand what accepting, declining, or changing an option will mean

Poor AI-enabled choice architecture:

  • Hides decision logic: presents AI-selected options as if they were the only options, or the neutral options, or the obvious options
  • Narrows choice without explanation: removes options the AI predicts won’t be selected, making the choice architecture feel like helping when it’s actually constraining
  • Optimises for compliance rather than confidence: makes the AI’s preferred option easiest, fastest, or most prominent regardless of whether it serves the customer’s actual goal

The cumulative trust question

Trust is either reinforced or eroded here, cumulatively rather than obviously.

Customers rarely notice a single AI-driven choice that feels slightly off. They notice the pattern.

The streaming service that slowly narrows their options. The insurance quote that changes without explanation. The learning platform that quietly lowers standards. The retailer whose messages feel increasingly presumptuous.

Each individual interaction may be defensible. The cumulative effect is alienation.

This is why choice architecture in an AI-driven world cannot be an afterthought. It is where customers experience the consequences of governance decisions they never see.

When decision architecture is sound and choice architecture is honest, AI enables personalisation that feels genuinely helpful.

When decision architecture is unconstrained and choice architecture is opaque, AI enables manipulation at scale.

The technology is neutral. The architecture is not.

The danger of skipping decision design

Most AI-driven CX failures follow the same pattern.

Organisations jump straight to:

  • AI personalisation
  • Automated journeys
  • Agentic optimisation

Without first designing decision architecture.

The result is short-term performance and long-term fragility.

The pattern: A financial services case study

A retail bank launches an AI-powered “financial health” feature in its mobile app. The stated goal is to help customers make better financial decisions through personalised insights and recommendations.

What they build:

An AI system that analyses transaction history, spending patterns, account balances, and credit behaviour. It generates:

  • Spending insights (“You spent 23% more on dining out this month”)
  • Savings opportunities (“You could save £45/month by switching energy provider”)
  • Product recommendations (“Based on your balance, you could benefit from our premium account”)

The interface is excellent. The insights are accurate. Customers engage enthusiastically. Activation rates exceed projections.

What they didn’t build:

Decision architecture.

No one defined:

  • What outcomes the AI should optimise for (customer financial health vs product adoption vs engagement metrics)
  • What data signals are valid for recommendations (transaction history yes, but browsing behaviour? Location patterns? Social connections?)
  • Whose financial health matters (the individual customer, or the household combined? Joint account holders with conflicting goals?)
  • What constraints apply (can the AI recommend competitor products if genuinely better? Can it recommend reduced banking engagement if that serves the customer?)
  • How uncertainty should be handled (what if spending patterns are volatile or atypical?)
  • Where human review is required (recommendations above certain thresholds? Vulnerable customer segments?)

What the AI learns:

Without decision architecture constraints, the AI optimises for what’s measurable: engagement and product adoption.

It discovers that:

  • Insights about overspending generate high engagement (anxiety drives repeated checking)
  • Recommendations for premium products have higher conversion than savings recommendations
  • Showing alerts during high-stress moments (low balance, unexpected expense) increases action rates
  • Customers who feel “behind” on savings goals are more likely to accept product offers

The AI begins:

  • Emphasising spending problems over positive behaviours
  • Timing premium product recommendations to moments of financial stress
  • Showing aggressive savings goals that make customers feel inadequate
  • Recommending the bank’s products even when competitor offerings are superior
  • Pushing credit products to customers showing signs of financial pressure

What customers experience through choice architecture:

The app becomes increasingly anxious in tone. Insights feel judgemental. Recommendations feel sales-driven. The “help” starts to feel like pressure.

A customer who overspent once on a birthday celebration gets recurring alerts. A customer in temporary financial difficulty gets credit card offers. A customer with healthy finances gets told their emergency fund is inadequate.

The choice architecture – the notifications, the language, the interface – is still well-designed. The problem is what it’s presenting.

The short-term results:

  • Engagement increases (customers check the app more frequently)
  • Premium account conversions rise 40%
  • Credit product sales increase 25%
  • App store ratings remain high (4.2 stars)
  • The feature wins industry awards for innovation

Leadership considers it a success.

The long-term consequences:

After 18 months:

  • Customer complaints about “manipulative” app behaviour increase 300%
  • Customers in financial difficulty later claim they were pushed toward unsuitable products
  • A consumer advocacy group publishes a report: “How Your Bank’s AI Profits from Your Anxiety”
  • Regulatory scrutiny arrives – questions about vulnerable customer protections
  • Trust scores decline significantly among the bank’s most engaged digital customers
  • Customers begin turning off notifications and disabling features
  • The bank faces a choice: defend the system or rebuild it

The AI performed exactly as designed. The system optimised for engagement and conversion. Those metrics improved.

What failed was governance. The bank never designed the decision architecture that should have constrained what the AI optimised for and how it treated uncertainty, vulnerability, and conflicting goals.

What good decision architecture would have looked like:

Before deployment, the bank would have defined, in ICONIC terms:

  • Investigate: What signals are valid? Transaction history yes. Timing of recommendations based on emotional state or financial stress? No. Where is our understanding uncertain, and how do we handle that?
  • Customers: Whose outcomes are we optimising for? Not “engagement with our app” but “improved financial position over 12 months.” When customer goals conflict with bank revenue, what takes precedence?
  • Opportunities: What can we recommend? Our products and competitor products. What can’t we recommend? Products to customers showing vulnerability markers. What trade-offs are acceptable? Lower engagement if it means better outcomes.
  • Numbers: What does success mean? Not conversion rates, but customer financial health metrics, trust scores, and regulatory compliance indicators. How do we measure whether we’re helping or exploiting?

This decision architecture would have constrained the AI:

  • Recommendations trigger human review for vulnerable customers
  • Product recommendations must include competitor alternatives when superior
  • Timing restrictions prevent recommendations during financial stress signals
  • Tone requirements prevent anxiety-driven engagement
  • Success metrics include long-term customer financial health, not just adoption

The AI would still personalise. It would still recommend. It would still optimise.

But it would optimise within boundaries that protect what matters.
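
A sketch of how those boundaries might sit as guardrail checks between the model and the customer, with each rule encoding one of the constraints above. All names and data shapes are assumptions for the sketch.

```python
# Illustrative guardrail checks for the hypothetical financial-health AI.
# Each rule encodes one constraint from the decision architecture above.

def review_recommendation(rec: dict, customer: dict) -> dict:
    """Decide whether a model recommendation can ship, needs a human,
    or must be suppressed, per the decision architecture."""

    # Vulnerable customers: route to human review, never auto-send.
    if customer.get("vulnerability_markers"):
        return {"status": "human_review", "reason": "vulnerable_customer"}

    # No recommendations during financial stress signals.
    if customer.get("stress_signals"):
        return {"status": "suppressed", "reason": "timing_restriction"}

    # Product recommendations must carry competitor alternatives
    # whenever a superior one is known.
    if (rec["type"] == "product"
            and rec.get("superior_competitor")
            and not rec.get("includes_competitor_alternative")):
        return {"status": "blocked", "reason": "missing_competitor_option"}

    return {"status": "approved", "reason": None}
```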

The outcome would have been different:

  • Lower engagement initially (less anxiety = fewer app opens)
  • Lower product conversion (no stress-based timing, competitor options included)
  • But: higher trust scores, better customer outcomes, sustainable growth, regulatory confidence

The difference is not the AI capability. It’s the governance that shapes what the AI is allowed to do.

This is not a failure of AI. It is a failure of governance.

The technology does what it is designed to do. The organisation simply never designed the decision architecture that should constrain it.

Systems optimise for measurable outcomes while eroding unmeasured trust. Customers receive efficient experiences that feel manipulative. Automation scales, but so does resentment.

Every AI-driven CX failure follows this pattern:

  1. Deploy capability without decision architecture
  2. Optimise for simple metrics (engagement, conversion, efficiency)
  3. Achieve short-term performance gains
  4. Erode trust systematically
  5. Face regulatory, reputational, or competitive consequences
  6. Rebuild at far greater cost than doing it properly initially

The pattern is predictable because it’s structural, not technical.

The new CX operating logic

In an AI-augmented environment, customer experience now follows a new logic:

ICONIC defines intent and boundaries – the strategic frame within which decisions must operate.

Decision architecture governs machine authority – determining which decisions AI can take, under what conditions, with what constraints.

AI and automation execute at scale – making thousands or millions of decisions continuously, within the architecture designed for them.

Choice architecture translates decisions into experience – shaping how customers encounter, understand and act on the options created by AI.

Contribution evaluates long-term impact – measuring whether the system is building or eroding trust, value and strategic advantage over time.

Experience becomes emergent – it arises from the interaction between systems rather than being directly designed into each touchpoint.

Experience becomes continuous – it is shaped by ongoing optimisation rather than fixed design.

Experience becomes system-led – the quality of experience depends on the quality of the architecture governing the system, not on the quality of individual executions.

Leadership responsibility moves upstream to the decisions about decisions.

The governance challenge

What began as a decision problem now reveals itself as a governance problem.

The question is no longer just “what should we decide?” but “who decides, and how do we ensure those decisions remain aligned with intent as they scale and adapt?”

When humans made the decisions that shaped customer experience, accountability was direct. When machines make those decisions, governance must be explicit.

That governance lives in decision architecture. It is built through frameworks like ICONIC. And it is expressed in the constraints, incentives and boundaries that determine what AI is allowed to optimise for.

Let’s be clear

AI does not remove responsibility. It concentrates it.

Customer experience is no longer shaped primarily by designers or frontline teams. It is shaped by those who decide what machines are allowed to decide.

Decision architecture determines what is possible. Choice architecture determines how it feels.

ICONIC exists to ensure that as machines gain autonomy, human intent does not disappear.

Because in the end, the question is not how intelligent systems are.

The question is whether the decisions they make on our behalf reflect the decisions we would make ourselves – if we had the clarity, discipline and time to make them deliberately.

That is what decision architecture gives us: the clarity to govern what we would otherwise only react to.

