AI grid workloads: Insights from Nokia’s Marika Mentula

Learn how AI is changing network infrastructure and the evolving role of telecom operators.

Marika Mentula is positioning telecom at the center of the AI economy.

As Head of Network Infrastructure Sales for North Europe at Nokia, based in Helsinki, Marika operates at the intersection of AI, critical national infrastructure, and the future of telecom. But what truly distinguishes her isn’t her title. It’s her clarity. Marika has a rare ability to translate complex technical shifts into strategic consequences that leaders can act on.

As she puts it, “The network is no longer a passive pipe. It’s becoming the execution layer for AI.”

As AI moves from centralized clouds into distributed, real-time systems called AI grids, Marika sees what many are only beginning to grasp: latency, determinism, trust, and sovereignty are competitive and geopolitical differentiators. 

She can speak fluently about sub-30 millisecond thresholds, uplink-heavy traffic patterns, and policy-aware architectures, but always through the lens of outcomes like safety, accountability, resilience, and economic value.

In this candid and future-focused conversation, Marika describes the current moment as a renewal for the industry and a chance for networks to step into a far more strategic and consequential role.

Below is an edited version of our conversation.

What is fundamentally changing about how AI workloads interact with networks?

First of all, networks in the past were optimized to be throughput-first. In the future, because of AI, they will be optimized for latency, jitter, and determinism.

Traditional networks were optimized for peak bandwidth, average latency, and best-effort delivery. Now AI-driven bi-directional traffic breaks that model. Why? Because there will be many inference loops, agent coordination, and control systems that care more about predictable latency than raw megabits per second.

Networks are becoming much more like active systems, not just passive pipes as they used to be. When traffic is interactive and uplink-heavy, the network can’t just forward packets blindly.

So what is the shift? It moves from a traditional transport fabric into a performance orchestration layer. What does that mean? Real-time awareness of application intent — whether the application is designed for inference, training, or control — and dynamic path selection based on latency budget, not hop count (the number of intermediate network devices a packet traverses between source and destination).
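
The idea of choosing paths by latency budget rather than hop count can be sketched in a few lines. This is an illustrative toy, not a Nokia product or API; the path names, hop counts, and latency figures are all invented for the example.

```python
# Illustrative sketch: selecting a network path by latency budget,
# not hop count. All names and numbers here are invented.
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    hops: int
    p99_latency_ms: float  # measured 99th-percentile latency on this path

def select_path(paths, latency_budget_ms):
    """Return the path that meets the latency budget, ignoring hop count.

    Among qualifying paths, prefer the one with the lowest tail latency,
    which leaves the most headroom for the application."""
    qualifying = [p for p in paths if p.p99_latency_ms <= latency_budget_ms]
    if not qualifying:
        return None  # budget cannot be met; the request should be refused
    return min(qualifying, key=lambda p: p.p99_latency_ms)

paths = [
    Path("short-but-jittery", hops=3, p99_latency_ms=42.0),
    Path("longer-but-deterministic", hops=7, p99_latency_ms=18.0),
]
best = select_path(paths, latency_budget_ms=30.0)
# Hop count alone would favour the 3-hop path; the latency budget
# selects the 7-hop path because its tail latency is bounded.
```

The point of the sketch is the inversion: the shortest path loses to the most deterministic one once the selection criterion is a latency budget.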

The result is that the network actually participates in the AI workload. Decisions move from the endpoints into the fabric.

There will also be a massive rise in east-west and uplink traffic. AI agents don’t behave like video streams. 

Why is sub-30ms becoming an important latency threshold?

This is super critical. When latency becomes unpredictable, not just high but variable, whole classes of AI interactions stop working.

The core issue is that humans and machines are latency detectors. Humans notice jitter before they notice delay. Control systems become unstable under variable feedback. And safety systems assume bounded response times. So anything that depends on tight feedback loops will break first.

Think about conversational AI that feels present. What works today is turn-based chat, async voice assistants, ask-wait-answer flows. Humans expect about 150 to 250 milliseconds round trip for conversational flow. If you break that rhythm of turn-taking, the system feels distracted or fake.
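
A back-of-envelope budget shows why the network's slice of that round trip is so small. The component timings below are illustrative assumptions, not measurements; only the 250 ms total comes from the figure quoted above.

```python
# Back-of-envelope round-trip budget for conversational AI.
# Component numbers are illustrative assumptions, not measurements.
budget_ms = 250          # upper end of what humans tolerate for turn-taking
asr_ms = 60              # speech recognition
inference_ms = 120       # the model generates a response
tts_ms = 40              # speech synthesis

# Whatever is left over is all the network gets, both directions combined.
network_allowance_ms = budget_ms - (asr_ms + inference_ms + tts_ms)
# → 30 ms of network allowance under these assumptions
```

Under these assumed timings the network is left with roughly 30 ms, which is why sub-30 ms thresholds keep coming up in this conversation.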

Physical AI and embodied agents require control loops with predictable response times, not just fast averages. Once latency spikes, robots hesitate. Machines may not only stop but also overcorrect. And at scale, a centralized cloud then becomes a liability, not an accelerator.

Why are Telcos well positioned as AI moves to distributed, real-time systems?

Telcos have built-in trust, identity and regulatory standing, which is wildly underappreciated today. Telcos already verify identities at the network layer. They meet lawful intercept, emergency and safety requirements. They operate under national sovereignty regimes. 

Telcos can give AI verifiable agent identity, jurisdiction-aware inference, and compliance by construction, instead of as an add-on. Hyperscalers, by contrast, typically have to negotiate trust market by market.

Sovereignty is becoming a crucial feature, not a constraint. Because AI touches critical infrastructure in healthcare, defence, industrial systems, and beyond, governments will require the data to reside locally. They will require predictable control paths and auditable behaviour, and telcos are naturally and structurally aligned with this.

Why do networks matter so much in delivering sovereign AI?

Sovereignty and trust are really becoming central to AI deployment. Keeping AI inside national networks doesn't just change deployment architecture; it changes who is accountable, who has leverage, and what trust even means. Governments will likely stop asking where data resides and instead ask who enforces the rules. That's a completely new type of discussion.

As liability and accountability become clearer, enterprises will stop thinking in terms of vendor risk and start thinking in terms of infrastructure class. For regulated industries like energy, rail, healthcare, defence, and finance, the policy discussion becomes programmable at the network layer.

Sovereign AI grids also change the resilience narrative. And for governments this is existential, given possible cyber conflicts, natural disasters, and supply chain disruptions.

In this scenario, Nokia is not trying to be a hyperscaler cloud or consumer AI brand. We are positioned to be the sovereign execution layer and trusted vendor in this market.

Looking ahead, what happens if operators don’t adapt their networks for AI-native workloads?

Operators need to stop optimizing for traffic and start optimizing for outcomes now, not sometime post-5G or once the AI demand is clear.

Here's what telcos should be doing in the next 12 to 36 months if they really want to join the AI value stream.

They should consider carving the network into performance products, turning service performance metrics, such as Nokia's ActNow, turn latency, jitter, and uplink, into sellable products. Telcos should be building slices with real admission control, because steady, reliable performance matters more than bigger bandwidth numbers.
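
Admission control is the mechanism that makes a slice's performance promise credible: refusing the flow that would break existing guarantees. The sketch below is a hypothetical toy model, not an operator implementation; the capacity model and numbers are invented.

```python
# Illustrative sketch of admission control for a latency slice.
# A new flow is admitted only if the slice can still honour every
# existing guarantee. Capacity model and numbers are invented.
class LatencySlice:
    def __init__(self, capacity_mbps, max_flows):
        self.capacity_mbps = capacity_mbps
        self.max_flows = max_flows
        self.flows = []  # committed bandwidth per admitted flow

    def admit(self, flow_mbps):
        """Accept the flow only if headroom remains; otherwise refuse it.

        Refusing is the point: a slice that over-admits turns its
        guaranteed latency back into best-effort delivery."""
        used = sum(self.flows)
        if used + flow_mbps > self.capacity_mbps or len(self.flows) >= self.max_flows:
            return False
        self.flows.append(flow_mbps)
        return True

slice_ = LatencySlice(capacity_mbps=100, max_flows=3)
results = [slice_.admit(40), slice_.admit(40), slice_.admit(40)]
# The third 40 Mbps flow is refused: 40 + 40 + 40 exceeds 100 Mbps.
```

The design choice worth noting is that rejection is a feature: a best-effort network never says no, which is exactly why it cannot sell determinism.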

In addition, they should make their networks policy-aware, not just traffic-aware, building on their capabilities to enforce data locality and jurisdiction by default.

And then decide where to compete with the hyperscalers and where not to. I think this is pretty critical. Operators should aim to integrate cleanly with the cloud and be the execution and enforcement layer. The cloud really can't own the last-mile SLA and liability boundary.

And then I think pricing should be aligned with AI economics, not legacy ARPU thinking, because AI value is really outcome-driven. Prepare for usage tied to inference events, pricing on SLA compliance rather than just megabits per second, and possibly even revenue sharing with certain kinds of application and model providers.
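
One way to read "pricing on SLA compliance" is a per-event charge with a rebate when the latency guarantee is missed. The sketch below is purely hypothetical; the function, rates, and rebate policy are invented to illustrate the shape of such a model.

```python
# Hypothetical pricing sketch: billing per inference event with an
# SLA rebate, instead of flat megabits per second. Rates are invented.
def bill(events, price_per_event, sla_target_ms, measured_p99_ms, rebate_pct=20):
    """Charge per inference event; apply a rebate if the latency SLA was missed."""
    charge = events * price_per_event
    if measured_p99_ms > sla_target_ms:
        charge *= (100 - rebate_pct) / 100  # the operator shares the SLA risk
    return round(charge, 2)

# SLA met: full charge for one million inference events
ok = bill(events=1_000_000, price_per_event=0.0002,
          sla_target_ms=30, measured_p99_ms=24)
# SLA missed: the 20% rebate is applied
missed = bill(events=1_000_000, price_per_event=0.0002,
              sla_target_ms=30, measured_p99_ms=45)
```

The structural point is that revenue is now a function of delivered latency, not transported bits, which is what ties the operator's income to the AI outcome.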

They should train the organization to sell infrastructure trust, not just speed. And that's a cultural shift: the sales, product, and regulatory teams need to speak fluently about sovereignty, determinism, liability, certification, and safety envelopes.

Forward-leaning operators should position themselves as the place where AI is allowed to act in the real world. Not faster, not cheaper, but accountable. Then partner with Nscale. That would be my advice.


Astrid Sandoval

Director of Content, Nscale

Astrid leads Nscale's thought leadership and storytelling engine. She works with Nscale experts, partners, and customers to tell stories about AI and why it matters.

