
In discussions I have been having with buy-side firms, one issue keeps resurfacing. AI in trading is often framed as a question of model choice. Claude, ChatGPT, Mistral, or something else. But that is not really the decision firms are struggling with.

The harder question is architectural. Who controls what the copilot is allowed to say? What evidence must it be grounded in? Where does inference run? What remains deterministic? And how much of the intelligence sits inside the platform rather than in a wrapper perched on top?

Those same questions have come up in building Avarria’s Blackstream. Once you move beyond the easy theatre of “add an LLM to the stack”, the real design work starts. The problem is no longer model access. It is control, boundary-setting, deployment doctrine and trust.

That leads to a conclusion that matters more than the model debate itself.

In capital markets, the model should be replaceable. The contract should not.

The wrong question

A lot of AI discussion still begins at the wrong layer.

Should the copilot use one built-in model? Should clients be allowed to connect their own? Should a platform standardise on a frontier model? Should it support self-hosted open-weight options?

Those are valid questions, but they are downstream questions.

The first question is simpler and more important: what is the product actually controlling?

If the answer is little more than prompt formatting and API routing, that is not much of a moat. It is a thin layer sitting on top of somebody else’s engine, hoping the underlying market does not move too quickly. In AI, that is not much of a business. It is more like camping on a motorway.

The real control point is not the model. It is the behavioural contract around the model.

What is the copilot allowed to use as evidence? What claims is it forbidden to originate? How are outputs structured? What must be explainable and replayable afterwards? What happens when confidence drops, retrieval fails, or the context is incomplete?

That is where the product value sits. Not in the fashionable model name of the month.
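To make that concrete, here is a rough sketch in Python of what a constrained output might look like. This is not Blackstream code and the names are invented; the point is the shape: every claim must cite retrieved evidence, and the copilot has an explicit way to decline rather than improvise.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Claim:
    text: str
    evidence_ids: list[str]  # must reference retrieved documents; empty is not allowed

@dataclass(frozen=True)
class CopilotAnswer:
    claims: list[Claim]
    confidence: float                 # supplied by the evaluation layer, 0.0 to 1.0
    declined: bool = False            # set when grounding or retrieval fails
    decline_reason: Optional[str] = None

def validate(answer: CopilotAnswer, retrieved_ids: set[str]) -> CopilotAnswer:
    """Block any claim that is not grounded in what was actually retrieved."""
    for claim in answer.claims:
        if not claim.evidence_ids or not set(claim.evidence_ids) <= retrieved_ids:
            return CopilotAnswer(claims=[], confidence=0.0, declined=True,
                                 decline_reason="ungrounded claim blocked by contract")
    return answer
```

The details will differ everywhere, but the principle does not: the schema is platform-owned, and declining is a first-class outcome, not an error.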

The real control point

From both buy-side conversations and working through the design of Blackstream, one point has become hard to ignore.

Firms do not just want a clever answer. They want an answer they can understand, govern and defend.

That sounds obvious, but it is where a lot of AI product thinking still goes wrong. Too much of the market still behaves as though the LLM is the product, when in reality the LLM is only one component inside the product. A volatile one at that.

What matters more is the contract wrapped around it: what context the model is allowed to see, how that context is retrieved and prioritised, how outputs are constrained, how factual drift is detected, how failures degrade safely, and how the whole thing is audited over time.

That contract is not an implementation detail. In a regulated environment, it is part of the control framework.
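A minimal sketch of what such a contract might look like as a declarative, versioned object. The fields are hypothetical; what matters is that they are explicit, reviewable and owned by the platform rather than buried in prompt strings.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BehaviouralContract:
    """Platform-owned and versioned, like any other control artefact."""
    allowed_context_sources: tuple[str, ...]  # what the model may see
    forbidden_claim_types: tuple[str, ...]    # what it may never originate
    output_schema: str                        # the structured shape enforced on output
    min_confidence: float                     # below this, degrade to a safe decline
    drift_check: bool                         # recheck claims against source documents
    audit_retention_days: int                 # every exchange replayable afterwards

PLATFORM_CONTRACT = BehaviouralContract(
    allowed_context_sources=("order_history", "market_data", "venue_docs"),
    forbidden_claim_types=("price_prediction", "investment_advice"),
    output_schema="CopilotAnswer",
    min_confidence=0.7,
    drift_check=True,
    audit_retention_days=2555,  # roughly seven years, a common retention horizon
)
```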

This is not just an internal architecture preference. In February 2026, ECB Banking Supervision said its approach to AI is technology-neutral and focused on how firms govern and control the risks created by the technology. In the same speech, Pedro Machado tied AI adoption to governance, sound risk management and compliance with an increasingly dense framework including DORA and the AI Act. That is exactly why the key design question is not which model sits underneath, but who controls the behavioural contract around it.

This is why I keep coming back to the same line: the model should be replaceable, but the contract should not.

Models will improve, pricing will change, providers will come and go, and deployment preferences will shift. A platform that hard-wires its value proposition to one provider is not really solving for trust. It is just taking a dependency and calling it strategy.

A better design is one where the model can change underneath a stable, platform-controlled behavioural contract.
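Concretely, that means the model sits behind a narrow interface and everything around it is platform code. In the sketch below, retrieve and parse_answer stand in for platform functions (injected here to keep the example self-contained), and CopilotAnswer, validate and PLATFORM_CONTRACT are the sketches from above. Swapping providers changes one argument; the contract does not move.

```python
from typing import Callable, Protocol

class ModelProvider(Protocol):
    """The only surface the platform needs from any model, managed or self-hosted."""
    def complete(self, prompt: str, context: list[str]) -> str: ...

def answer(question: str,
           provider: ModelProvider,
           retrieve: Callable,      # platform retrieval, filtered by the contract
           parse_answer: Callable,  # strict parsing into the output schema
           ) -> "CopilotAnswer":
    # Retrieval is restricted to the contract's allowed sources.
    context, retrieved_ids = retrieve(question, PLATFORM_CONTRACT.allowed_context_sources)
    # Any provider can fill this slot; nothing else in the pipeline moves with it.
    raw = provider.complete(question, context)
    # Grounding checks and safe degradation are platform code, not provider code.
    return validate(parse_answer(raw), retrieved_ids)
```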

Why this matters more in capital markets

This matters in any enterprise setting, but it matters more in capital markets because the surrounding systems are not optional. Determinism, traceability, resilience and auditability are still part of the job.

A capital markets copilot can help rank options, explain market context, surface relevant history, summarise protocol choices and highlight trade-offs. It can be extremely useful in the decision-support layer.

What it cannot be allowed to do is freestyle directly into the core transactional engine as though probabilistic reasoning and deterministic execution were the same thing.

They are not.

That boundary is one of the most important architecture decisions any serious platform has to make. AI can propose. It can explain. It can narrow choices. It can enrich workflows. But once you cross into permissions, state changes, trade booking, limits, controls or execution, the deterministic layer still has to govern what happens next.
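Here is one way to sketch that boundary, again with invented names. The copilot emits a proposal; a deterministic rule layer, which never consults the model, decides whether anything reaches the execution path at all.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OrderProposal:
    """Produced by the copilot. A suggestion, never an instruction."""
    instrument: str
    side: str       # "buy" or "sell"
    quantity: int
    rationale: str  # the explainable part, for the human and the audit trail

def deterministic_gate(p: OrderProposal, position_limit: int) -> bool:
    """Plain, replayable rules. No probabilistic reasoning past this point."""
    if p.side not in ("buy", "sell"):
        return False
    if p.quantity <= 0 or p.quantity > position_limit:
        return False
    return True

proposal = OrderProposal("XS1234567890", "buy", 5_000_000, "matches venue liquidity")
if deterministic_gate(proposal, position_limit=10_000_000):
    ...  # hand off to the deterministic booking, limits and controls workflow
```

Real gates are far richer than this, but the asymmetry is the point: the model can argue for the proposal, and only the rules can let it through.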

This is not anti-AI. It is the opposite. It is how AI becomes usable without corroding trust in the core.

A surprising amount of the market still wants to skip over that distinction because “agentic” sounds more exciting than “controlled interface”. But in capital markets, controlled interface is the exciting bit. It is the difference between a credible architecture and an expensive hallucination with a roadmap.

Sovereignty is now an architecture and resilience question

One of the reasons this keeps coming up with buy-side firms is that sovereignty concerns are rising, but the conversation is often framed too narrowly.

Too many people reduce sovereignty to data residency, as if the whole question is solved once prompts and outputs stay in-region.

That is only part of it.

In practice, sovereignty also includes where inference runs, who controls the control plane, which legal jurisdiction applies, who can inspect logs, how client-specific context is isolated, whether embeddings and memory are partitioned properly, and whether the model can be swapped or self-hosted without rebuilding the product.

That is a very different discussion.

It is also one that now sits squarely inside regulation and supervision. DORA entered into force on 16 January 2023 and has applied since 17 January 2025. ESMA says it is designed to strengthen ICT security and operational resilience across the financial sector, including third-party ICT risk. The EU AI Act entered into force on 1 August 2024 and is broadly applicable from 2 August 2026, with prohibited practices and AI literacy obligations already applying from 2 February 2025, governance and GPAI obligations from 2 August 2025, and high-risk systems embedded into regulated products on a longer timetable.

So sovereignty is no longer just a procurement preference. It is becoming part of platform doctrine.

This is especially relevant in Europe, where firms are already thinking more carefully about concentration risk, third-party dependency, operational resilience and jurisdictional exposure. In that context, “trust us, the prompts stay in Europe” is better than nothing, but it is not the end of the conversation. The harder question is whether the platform gives the client meaningful control over deployment pattern, data boundaries, provider dependency and failure modes.

In working through Blackstream, that becomes obvious very quickly. Once you are dealing with trade context, client-specific memory, proprietary workflows and explainability, sovereignty is no longer a footnote. It becomes part of the architecture.

That is also why self-hosted and open-weight patterns deserve serious attention. Mistral’s documentation, for example, explicitly covers self-deployment of its open models and notes that multiple open-weight models are available under Apache 2.0, while its broader model catalogue spans both open-weight and commercial models. That does not magically remove the hard parts. Someone still has to size the hardware, certify the exact model build, manage performance drift and own the operational burden. But it does create a credible sovereignty path for firms that are not comfortable baking external inference dependency into sensitive workflows.
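One practical way to keep those sovereignty dimensions honest is to state them as an explicit deployment descriptor rather than as scattered assumptions. The fields below mirror the list above; the names are illustrative, not a real schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SovereigntyProfile:
    """Every dimension stated explicitly, so it can be reviewed and challenged."""
    inference_location: str       # "platform-managed" or "client-estate"
    control_plane_owner: str      # who can change prompts, contracts and routing
    jurisdiction: str             # which legal regime governs the deployment
    log_access: str               # who can inspect request and response logs
    context_isolation: bool       # client context never pooled across tenants
    embeddings_partitioned: bool  # vector stores segregated per client
    model_swappable: bool         # can the model change without rebuilding the product?

SELF_HOSTED_EU = SovereigntyProfile(
    inference_location="client-estate",  # open-weight model served in-house
    control_plane_owner="client",
    jurisdiction="EU",
    log_access="client-only",
    context_isolation=True,
    embeddings_partitioned=True,
    model_swappable=True,
)
```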

When the wrapper becomes the weakness

This is also where a lot of AI businesses have run into a wall.

Over the past two years, the market has been full of startups and pivots that were, in effect, thin wrappers around somebody else’s model capability. Some had nicer interfaces. Some added templates, orchestration or vertical language. Some were well-funded. But too many were still relying on the same basic assumption: that packaging alone would be enough to create durable value.

Then the foundation layer improved.

And improved again.

And six months later, some of those businesses looked much less like products and much more like temporary formatting around an API call.

We have already seen versions of this in the market. Microsoft hired Inflection AI’s co-founders and key staff into Microsoft AI in 2024, a reminder that value can migrate upwards into the hyperscaler layer very quickly. By early 2026, Sensor Tower data reported by TechCrunch showed OpenAI and DeepSeek together accounting for nearly half of global AI app downloads, while earlier ChatGPT-style competitors were crowded out. Around the same time, a Google Cloud startup lead openly warned that LLM wrappers and AI aggregators may not survive the next phase of the market.

The lesson is not that AI products are doomed. It is that thin layers are fragile when the underlying model market is moving faster than your moat.

Chegg is a public-market version of the same problem. In May 2025, Reuters reported that it would cut 22% of its workforce as subscribers fell 31% and revenue fell 30%, amid pressure from ChatGPT, Gemini, Anthropic and Google’s AI Overviews. Different sector, same brutal lesson: if generic AI capability starts to eat your value proposition from underneath, the market can turn on you very quickly.

This matters for capital markets platforms too. If the real intelligence layer ends up sitting outside the platform in an external wrapper, then the platform is at risk of becoming a system of record with shrinking strategic leverage.

That is not where a serious platform vendor wants to be.

What the architecture should actually look like

So what does a better design look like?

The answer is not a single fixed provider. Nor is it a free-for-all.

The right pattern is a provider-abstracted AI layer with a platform-controlled behavioural contract and a certified default.

In practice, that means something like three deployment options.

The first is an embedded default. This is the out-of-the-box path. The platform ships with a certified model choice that it has tested, evaluated and documented against its own contract. That gives clients a fast route to value and gives the platform a known baseline for quality, audit and support.

The second is a customer-contracted provider. In this model, the client points the platform at its own approved provider and commercial arrangement. That can make sense where the client already has governance, monitoring, commercial commitments or regional deployment preferences tied to a particular provider. The platform still controls the contract. The client controls the provider relationship.

The third is client-hosted open-weight or self-hosted deployment. This is the sovereignty-sensitive path. It is relevant for firms that want to keep inference inside their own estate, or that are not ready to accept external inference dependency for the use case in question.

The important point is that these are deployment patterns, not different products.

The contract remains the same. The grounding rules remain the same. The output constraints remain the same. The evaluation expectations remain the same. Provider choice can vary. Product control cannot.
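Expressed in the same vocabulary as the earlier sketches (reusing the hypothetical BehaviouralContract and PLATFORM_CONTRACT from above), the three patterns might look like this: three rows that differ in where inference runs and who owns the provider relationship, and one contract that never varies by client.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Deployment:
    name: str
    inference_runs: str             # where the model is actually served
    provider_relationship: str      # who holds the commercial arrangement
    contract: "BehaviouralContract" # always the platform's, never client-supplied

DEPLOYMENTS = (
    Deployment("embedded-default", "platform-managed", "platform", PLATFORM_CONTRACT),
    Deployment("customer-contracted", "client-approved provider", "client", PLATFORM_CONTRACT),
    Deployment("self-hosted", "client estate", "client", PLATFORM_CONTRACT),
)
# Provider choice varies by row; the grounding rules, output constraints and
# evaluation expectations are the same PLATFORM_CONTRACT in every case.
```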

That is also why “bring your own model” can work, but “bring your own prompt” should not.

A client may have legitimate reasons to prefer one model or deployment route over another. Fine. But if every client can rewrite the behavioural contract itself, the platform no longer owns the boundary that makes the product trustworthy. At that point it has stopped being a controlled capability and become a configurable source of future regret.

The real moat

This leads to a point worth stating plainly.

The moat is not that you plugged an LLM into your platform.

That moat has the lifespan of a supermarket avocado.

The real moat is in the combination of workflow integration, semantic context, deterministic interfaces, auditability, deployment flexibility and trusted behaviour under pressure.

That is what is harder to copy.

A firm can always switch model providers later. It can always upgrade from one generation to the next. It can even move from a managed provider to a self-hosted pattern if it really needs to. But if the product has not defined what stays under platform control, none of that flexibility helps very much.

The market will keep changing underneath it.

This is exactly why I think the best AI products in capital markets will not be the ones with the loudest model branding or the most excitable demo language. They will be the ones that know where AI stops, where deterministic systems take over, and which parts of the intelligence layer must remain platform-native to preserve trust, control and relevance.

Closing thought

The buy-side discussions I have been having, and the design questions that keep surfacing in Blackstream, point in the same direction.

The important decision is not simply which LLM to use.

It is whether the platform owns the contract around that LLM strongly enough to survive changes in provider, regulation, deployment posture and market expectations without losing its coherence.

That is the real architecture question.

In capital markets, the model is not the product.

The product is the control contract, the boundary around it, and the discipline to keep both intact while the model landscape keeps shifting underneath. That is where the market is heading, and it is also where supervisors and regulation are already pushing firms to become more disciplined.
