This is not a theoretical view. It comes from years of building and buying trading infrastructure across different environments, and from more recent discussions with firms while shaping Avarrai’s Blackstream.
Across those settings, one pattern comes up repeatedly. Buy-side execution stacks tend to evolve into overlapping layers of OMS, EMS, risk, compliance and data services, with duplicated logic, unclear boundaries and too much intelligence trapped inside applications that were never meant to be the strategic centre of gravity. At the same time, upstream portfolio intent, mandate context and exposure views are often pulled into execution through inconsistent interfaces and re-implemented logic.
For a while, firms could live with that. The primary objective was digitisation. Get the OMS in. Add the EMS. Connect venues. Bolt on controls. Push market data into the screens traders use most. Make it work well enough.
That was survivable.
It is no longer enough.
A modern buy-side execution stack has to do more than move orders around. It has to support better decision-making, cleaner controls, lower operating complexity, stronger interoperability and a credible foundation for AI-enabled workflow. It has to do that across asset classes, across channels and across a technology estate that is usually more inherited than designed.
Trading is not just a downstream utility in that picture. It is part of how investment intent gets translated into real-world outcomes.
That is where the old model starts to break down.
Because the problem is not simply that firms have too many systems. It is that the boundaries between those systems are often wrong. OMS and EMS overlap in workflow. Risk and compliance sit partly inside platforms and partly outside them. Strategic data is copied into multiple applications. Controls are duplicated. Traders see different slices of the same problem through different tools, each with its own assumptions and support burden.
The result is not a modern stack. It is a negotiated truce between applications.
The wrong starting point
When firms talk about modernisation, the conversation often starts in the wrong place.
Should we replace the OMS?
Do we need a better EMS?
Can we consolidate vendors?
Can we buy a front end that makes the problem look less ugly?
Those are understandable questions. They are also incomplete.
The better question is this:
What should the modern buy-side execution stack actually look like if you were designing it deliberately?
Not as a procurement exercise.
Not as a beauty contest between vendor demos.
As an operating model.
Once you ask it that way, the answer becomes clearer. The modern stack should not be designed as a pile of overlapping applications with duplicated intelligence. It should be designed as an execution fabric connected to a shared data and intelligence layer, with clear separation between systems of record, systems of decisioning, systems of control and systems of interaction.
By execution fabric, I mean the shared layer that assembles context, orchestrates workflow, applies controls and connects the firm to execution channels without letting any one application become the strategic centre of gravity.
That is the target state.
What the modern stack should look like
A good modern stack is not one giant platform that claims to do everything. Nor is it a patchwork of products that each do a bit of everything and overlap in all the wrong places.
It is better understood as a small number of logical layers with clear responsibilities.
1. System of record
This is where authoritative execution state lives. For most firms, that means OMS, asset master, instrument and entity reference data, and the services that hold authoritative order lifecycle state, allocations and approvals.
The principle is simple: record state once, publish it everywhere.
Portfolio intent, mandates, holdings and exposure context may originate upstream, but they should enter the execution fabric through clear interfaces rather than being blurred into it.
If downstream tools each maintain their own slightly different version of the truth, the architecture is already wobbling.
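The record-once, publish-everywhere principle can be sketched in a few lines. This is a hypothetical, minimal in-memory illustration, not a real OMS design: the class and field names are my own, and a production system of record would persist and version state rather than hold it in a dictionary.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: one authoritative order store that records state
# once and publishes every change to subscribers (EMS views, analytics,
# controls) instead of letting each tool keep its own copy.

@dataclass
class OrderRecord:
    order_id: str
    status: str          # e.g. "new", "working", "filled"
    filled_qty: int = 0

class SystemOfRecord:
    def __init__(self):
        self._orders: dict[str, OrderRecord] = {}
        self._subscribers: list[Callable[[OrderRecord], None]] = []

    def subscribe(self, callback):
        # Downstream tools consume state; they never mutate it directly.
        self._subscribers.append(callback)

    def apply(self, order_id, status, filled_qty=0):
        # Record state once...
        record = OrderRecord(order_id, status, filled_qty)
        self._orders[order_id] = record
        # ...publish it everywhere.
        for notify in self._subscribers:
            notify(record)

    def get(self, order_id) -> OrderRecord:
        return self._orders[order_id]
```

The point of the shape, not the code: every consumer sees the same record, because there is only one record.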
2. Data and intelligence layer
This is where strategic market and execution data should be shaped. Not in the OMS. Not in the EMS.
This layer handles ingestion, normalisation, canonical modelling, enrichment, time alignment, entity mapping, feature creation, signal extraction, scenario detection and analytics. It turns raw data into reusable context for workflow, controls and AI.
This is where the stack becomes capable of learning and interpreting rather than merely displaying.
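As a sketch of what "shaping data once" means in practice, here is an illustrative pipeline of small, reusable stages. The field names and the vendor tick format are invented for the example; the point is that normalisation, enrichment and feature creation happen once, in this layer, rather than inside whichever screen got there first.

```python
# Hypothetical sketch: the data and intelligence layer as a pipeline of
# composable stages producing one canonical record that workflow,
# controls and AI all consume. Not a real schema.

def normalise(tick: dict) -> dict:
    # Map vendor-specific fields onto canonical names and types.
    return {"symbol": tick["sym"].upper(),
            "price": float(tick["px"]),
            "size": int(tick["qty"])}

def enrich(record: dict, ref_data: dict) -> dict:
    # Attach instrument reference data from the asset master.
    record["asset_class"] = ref_data.get(record["symbol"], "unknown")
    return record

def derive_features(record: dict) -> dict:
    # Feature creation happens here, once — not inside each application.
    record["notional"] = record["price"] * record["size"]
    return record

def canonicalise(raw_tick: dict, ref_data: dict) -> dict:
    return derive_features(enrich(normalise(raw_tick), ref_data))
```

Every downstream consumer then sees the same symbol, the same notional and the same enrichment, which is exactly what duplicated per-application logic fails to guarantee.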
3. Orchestration and decisioning layer
This is the execution brain. It is where routing logic, RFQ orchestration, protocol choice, sequencing, workflow triggers, recommendations, automation policies and decision-support services should live.
Decisioning is not the same thing as interaction. A screen is not a brain, however attached parts of the market remain to that idea.
The principle: separate decisioning from display.
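Separating decisioning from display can be illustrated with a toy routing decision. The policy here is deliberately crude and the thresholds invented; the shape is what matters: the brain returns a structured decision with a rationale, and a blotter, a chat surface or an automated flow can all consume the same object.

```python
from dataclasses import dataclass

# Hypothetical sketch: decisioning returns a structured recommendation;
# any interaction surface renders or acts on it. The routing rule is a
# placeholder, not real logic.

@dataclass(frozen=True)
class RouteDecision:
    order_id: str
    channel: str      # e.g. "rfq", "algo", "high_touch"
    rationale: str

def decide_route(order_id: str, adv_pct: float) -> RouteDecision:
    # Illustrative policy: large-in-scale goes high touch.
    if adv_pct > 0.10:
        return RouteDecision(order_id, "high_touch",
                             f"{adv_pct:.0%} of ADV exceeds automation threshold")
    return RouteDecision(order_id, "algo", "within automation policy")

def render_for_blotter(decision: RouteDecision) -> str:
    # The UI consumes the decision; it does not own the logic.
    return f"{decision.order_id}: route via {decision.channel} ({decision.rationale})"
```

Swap the rendering function and nothing about the decisioning changes, which is the whole argument.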
4. Control layer
This is where risk, compliance and governance should be composed properly. That includes pre- and post-trade checks, policy rules, mandate controls, approvals, audit, surveillance hooks, exception handling, entitlements and model governance.
This does not mean every control must run in one place for ideological purity. Some will still execute inside core platforms or gateways for practical reasons. The real point is clear ownership, consistent composition and auditable sequencing.
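What "clear ownership, consistent composition and auditable sequencing" looks like can be sketched as checks composed in an explicit order with a trail. The check names and limits are illustrative only; real controls would pull policy from mandate and compliance services.

```python
# Hypothetical sketch: pre-trade controls as reusable checks composed in
# an explicit, auditable sequence. Names and limits are invented.

def check_limit(order: dict, ctx: dict):
    return order["qty"] <= ctx["max_qty"], "quantity limit"

def check_restricted(order: dict, ctx: dict):
    return order["symbol"] not in ctx["restricted_list"], "restricted list"

def run_controls(order: dict, ctx: dict, checks):
    audit = []                      # auditable sequencing: every check logged
    for check in checks:
        passed, name = check(order, ctx)
        audit.append((name, passed))
        if not passed:
            return False, audit     # fail fast, with the trail intact
    return True, audit
```

The same checks can then run in a gateway, a platform or a shared service; what stays constant is who owns them and in what order they fire.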
5. Interaction layer
This is what users actually see. Trader desktops, OMS and EMS screens, dashboards, alerts, workflow panels, analytics views, chat surfaces and, where useful, natural-language interaction.
This layer matters. Traders are not going to disappear into a cloud of architecture diagrams.
But the rule should be clear: UIs consume intelligence. They do not own it.
6. External connectivity layer
This is how the stack reaches outside the firm. Venues, dealers, algos, brokers, post-trade services, reporting destinations and third-party services sit here.
The strategic principle is straightforward. External participants should be treated as endpoints wherever the firm’s architecture allows, rather than being left to define the firm’s internal workflow model.
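Treating counterparties as endpoints is, in code terms, an adapter boundary. This is a hypothetical sketch with invented class names; the real translation layer would speak FIX or a venue API, but the internal workflow would still only ever see the common interface.

```python
from abc import ABC, abstractmethod

# Hypothetical sketch: venues, dealers and algos behind one endpoint
# interface, so the firm's workflow model is defined internally rather
# than by each counterparty's API.

class ExecutionEndpoint(ABC):
    @abstractmethod
    def send(self, order: dict) -> str: ...

class VenueAdapter(ExecutionEndpoint):
    def __init__(self, name: str):
        self.name = name

    def send(self, order: dict) -> str:
        # Translate the canonical order into the venue's protocol here.
        return f"{self.name}:ack:{order['order_id']}"

def route(order: dict, endpoints: dict[str, ExecutionEndpoint]) -> str:
    # Internal workflow speaks one interface regardless of counterparty.
    return endpoints[order["channel"]].send(order)
```

Adding a counterparty then means adding an adapter, not renegotiating the internal workflow.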
That distinction matters.
The build blueprint
If a firm were designing this target state deliberately, what should it ask?
What are the authoritative systems of record for order state, allocations, approvals and execution context? If that is vague, duplication starts almost immediately.
How does upstream investment intent enter the execution stack? Where do mandates, restrictions, exposures and portfolio context cross the boundary? If that interface is unclear, the stack will either duplicate upstream logic or force traders to reconstruct it manually.
Where does workflow initiation live, where does orchestration live, where does execution decisioning live, and where does exception handling live? If the answer is “partly in the OMS, partly in the EMS, with some special cases elsewhere”, that is a smell, not a strategy.
Where is market and execution data shaped? Where does canonical modelling happen? Where are features and signals created? If the answer is “inside whichever application got there first”, the firm is already giving away future value.
How are controls composed, maintained, sequenced and audited? Central control does not mean one giant rules graveyard. It means clear ownership and reusable services.
What is the split between human-facing and machine-facing outputs? What needs to be shown, what needs to be consumed programmatically, what becomes record, and what is derived for decisioning? That distinction matters much more in an AI-enabled stack than in the old “put it on the screen and hope” model.
What genuinely needs to be modular? Not everything should be modularised for sport. But anything strategic should be designed for portability, reuse and replaceability.
Where does AI actually fit? Not “where do we add an assistant?” That is the lazy version. The real questions are where is context assembled, how is retrieval grounded, which tools can models call, where are feedback loops captured, how are outputs ranked and governed, and which decisions are assistive, recommendatory or genuinely automatable?
If those questions are not answered in the architecture, the AI strategy is probably just another interface project. The constraint is rarely the model itself. It is context, retrieval, tooling, governance and feedback.
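Two of those questions, where context is assembled and which decisions are assistive, recommendatory or automatable, can be sketched concretely. The thresholds and field names below are invented for illustration; the structural point is that governance is a policy applied to model output, not a property of the model.

```python
# Hypothetical sketch: AI inside the decisioning flow. Context comes
# from the shared layers rather than screen scrapes, and every
# model-suggested action is classified before anything executes.

def assemble_context(order: dict, record: dict, features: dict) -> dict:
    # Ground the model in authoritative state and shared features.
    return {"order": order, "state": record, "features": features}

def govern(action: dict) -> str:
    # Illustrative policy thresholds, not a real governance framework.
    if action["confidence"] < 0.7:
        return "assistive"        # surface to the trader, take no action
    if action["impact"] == "high":
        return "recommendatory"   # requires explicit approval
    return "automatable"          # may execute under automation policy
```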
What good looks like
A good modern stack should look boring in the right way.
OMS and related services hold authoritative execution state. Upstream investment systems provide portfolio intent, mandate and exposure context through controlled interfaces. Strategic data flows into a shared data and intelligence layer. Execution decisioning sits in a separate orchestration fabric. Controls are reusable and consistently composed. User interfaces are modular consumers of shared intelligence. Venue, dealer and algo integrations are abstracted cleanly. AI sits inside the decisioning flow, not floating above fragmented applications like a motivational poster.
That is what good looks like.
Not necessarily simple.
But clear.
And clarity is worth more than people think.
Most firms will not get there by ripping everything out
This is where a lot of architecture commentary drifts into fiction.
Most firms are not greenfield. They already have an OMS and often an EMS. They have embedded workflows, integration history, vendor dependencies and enough scar tissue to be suspicious of anyone using the phrase transformation journey.
So the path to the target state is usually hybrid.
That is not a weakness. It is reality.
In practice, there are four sensible moves.
One is to keep the existing core systems and build the shared data and intelligence layer around them. That is often the best first step because it reduces duplication without forcing immediate platform replacement.
Another is to introduce an orchestration layer between OMS and EMS where workflow overlap is particularly painful. That creates a control tower for routing, RFQ management, sequencing, decision-support and automation policies without letting either platform own too much by default.
A third is to centralise controls outside the applications wherever practical. Risk, compliance, approvals, audit hooks and exception logic can often be pulled into shared services even when the core platforms remain.
The fourth, and usually smartest long-term path, is to build the intelligence and orchestration layers first, then progressively thin out duplicated logic in legacy systems over time. That is far more credible than trying to solve everything in one heroic programme that will almost certainly turn into an expensive group therapy exercise.
Build, buy or hybrid
This is not simply about whether to code or procure. It is about what the firm wants to own.
A build-led approach makes sense where the firm wants control over data modelling, orchestration, intelligence creation, control logic and AI enablement.
A buy-led approach makes sense where speed, implementation simplicity and vendor support matter more than strategic differentiation.
For most serious firms, hybrid will be the real answer. Buy the commodity where it is genuinely commodity. Build, or retain control of, the layers that shape intelligence, orchestrate workflow, compose controls and enable AI.
Hybrid is not indecision. It is deliberate separation between what should be rented and what should be owned.
The real payoff
This target state is not cleaner architecture for architecture’s sake.
It should lower operational complexity because duplication falls and boundaries become clearer. It should improve execution outcomes because orchestration and decisioning become more coherent. It should strengthen control because policy, audit and exception handling become more consistent. It should improve reuse because data, intelligence and controls can serve more than one application or workflow. It should improve resilience because fewer hidden dependencies and fewer overlapping functions mean less fragility. And it should create a credible foundation for AI because context, features, governance and feedback loops are designed in rather than bolted on afterwards.
The modern buy-side stack should not be a negotiated truce between overlapping OMS, EMS, risk and compliance tools. It should be a deliberately designed execution fabric connected to a shared data and intelligence layer, with clear boundaries between record, decisioning, control and interaction.
Most firms will not get there by ripping everything out. They will get there by building the right shared layers around what they already have, and by being far more disciplined about what each part of the stack is actually for.
Not more applications. Not more overlap. Not another round of expensive ambiguity dressed up as modernisation.
A proper execution fabric.

