You don’t need me telling you that there’s a lot of noise around AI in advice right now.
FCA testing. Big firm involvement. Oversight, governance, all the right words.
Important, yes.
But most firms aren’t there yet.
They’re still trying to get comfortable with something much simpler…
Their own advice process.
And AI has a habit of exposing that pretty quickly.
The bit that doesn’t get said out loud
Ask a firm how their advice process works and you’ll get a solid answer.
Ask how consistent it is across advisers…
That’s where it gets a bit vague.
Because in reality:
- Two advisers can take completely different routes to the same outcome
- Two paraplanners can justify it in different ways
- Two QA reviewers can score it differently
And it still passes.
We’ve all seen it.
It’s just been… manageable.
Until now.
Why the FCA is leaning into “oversight”
This isn’t about the FCA suddenly getting excited about AI.
It’s about removing wiggle room.
If tech is helping produce advice faster, then firms should be able to:
- Show how decisions were made
- Demonstrate consistency
- Actually evidence suitability, not just describe it
That expectation didn’t start with AI.
Consumer Duty already set that direction.
AI just shines a light on where things don’t quite stack up.
(See: “Understanding the advice market: financial advice firms survey 2025”.)
Where things start to slip
We’re seeing versions of this more and more.
An adviser uses AI to sense-check a recommendation.
It comes back well written. Clean. Logical.
They tweak it. Send it through.
Job done.
But when you slow it down:
- The client objective is a bit loose
- The risk discussion could apply to anyone
- The product justification sounds right, but doesn’t quite prove anything
The document looks better.
But the thinking underneath hasn’t really changed.
That’s the bit that matters.
Where most firms actually are
Not behind. Not ahead.
Just… in the middle.
- Trying bits of AI
- Not fully embedding it
- Slightly unsure how comfortable they should be
While still:
- Relying on QA at the end
- Accepting variation between advisers
- Fixing things after they’re written
That’s fine at a certain scale.
It gets harder when everything speeds up.
The shift that actually matters
The firms getting this right aren’t the ones shouting about AI.
They’re the ones tightening what sits underneath it.
Things like:
- Making advice logic explicit. Not “this feels suitable”, but why, written so that someone else would land in the same place.
- Reducing interpretation. Less room for “it depends who reviews it”.
- Catching things earlier. Not waiting until QA to find the gaps.
- Taking consistency seriously. Because that’s where most of the real risk sits.
This isn’t about being ahead
It’s about not drifting behind your own output.
AI speeds things up.
If your process doesn’t keep up, the gap between what you produce and what you can confidently stand behind gets wider.
Quietly at first.
Then very obvious.
A better question to ask
Instead of: “Are we using AI properly?”
Try:
- Would we stand behind this advice if someone challenged it line by line?
- Can we explain the logic without leaning on how nicely it’s written?
- Would two different reviewers land in the same place?
If the answer is “depends”, that’s probably where the work is.
Bringing it back to reality
Most firms don’t need an AI strategy deck.
They need a tighter, more consistent advice process.
Because once that’s in place, AI helps.
Without it, it just makes things look better than they actually are.
If this feels familiar, don’t worry: you’re not the only one.
These conversations are happening quietly across a lot of firms right now.
If you’re seeing it too, we’re always up for a chat.
