Frictionless connectivity enables seamless AI workloads

Artificial intelligence (AI) workloads are growing rapidly in scale and complexity, driving a shift towards distributed AI architectures. Instead of running entirely in one location, AI computations are now spread across cloud data centres, edge servers and devices. This distribution helps optimise performance, reduce latency and make better use of resources. Telecommunications networks play a pivotal role in this paradigm – they provide the “glue” that connects these distributed AI components.

In fact, modern AI applications are evolving into multi-tier inference pipelines spanning from device-level processing to edge computing sites to the cloud, with telco networks serving as the backbone for reliable, low-latency data exchange. Frictionless connectivity in the form of high-bandwidth, low-latency, seamlessly managed networks enables distributed AI to function smoothly. 

As shown in Figure 1 below, global power demand from data centres supporting AI is projected to more than triple by 2030, reaching at least 170 gigawatts (GW). This rapid growth underscores the need for robust connectivity between distributed compute sites. Massive AI models (such as those for generative AI) require scaling out across many servers and locations, and the surge in AI adoption is driving an expansion of both cloud and edge data centres. Simply building more data centres is not enough – they must be interconnected with high-speed networks to handle the immense data flows and real-time processing requirements of AI. In many cases, optical fibre links are needed to connect new AI data centres, and telcos have an opportunity to provide this critical infrastructure at scale. In short, as AI compute capacity expands, equally advanced connectivity is essential to make distributed AI viable.

Figure 1: Global power demand from data centres supporting AI (in gigawatts) is projected to more than triple by 2030, reaching at least ~170 GW.

Distributed AI at the edge

Distributed AI refers to techniques where AI workloads (training or inference tasks) are split across multiple computing nodes. For example, a cloud server might handle part of a neural network’s computation while an edge server or device handles another part. The rise of edge computing (processing data closer to its source) has made distributed AI more practical. By running AI models on edge servers (such as telco multi-access edge computing (MEC) nodes at cell towers or central offices) or even on user devices, one can reduce the distance data must travel. This yields several benefits: faster response times, lower bandwidth usage and improved data privacy. Industry experts note that instead of sending all raw data to a central cloud, collaborative AI inference can occur at multiple tiers – device, “far edge” (on-premises or user device), “near edge” (telco MEC) and cloud – all working in concert. In such a multi-tier setup, the edge is no longer just a passive conduit but becomes an intelligent layer that can make decisions and pre-process data, with telcos acting as AI compute hubs hosting these edge workloads.
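
To make the idea concrete, below is a minimal sketch of partitioned inference in Python with PyTorch. The toy model, the split point and the tensor sizes are assumptions for illustration, not a reference design; in a real deployment the intermediate activation would travel between sites over the network links this article discusses.

```python
# Minimal sketch of split inference: early layers run on an edge node,
# later layers in the cloud. Model shape and split point are assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),   # early layers: run near the data
    nn.Linear(256, 256), nn.ReLU(),   # later layers: run in the cloud
    nn.Linear(256, 10),
)

edge_part = model[:2]    # executes on the edge server or device
cloud_part = model[2:]   # executes in a central data centre

def edge_forward(x: torch.Tensor) -> torch.Tensor:
    """Run the local partition; its output is what crosses the WAN,
    often far smaller than the raw input batch."""
    with torch.no_grad():
        return edge_part(x)

def cloud_forward(activation: torch.Tensor) -> torch.Tensor:
    """Finish the inference on the remaining layers."""
    with torch.no_grad():
        return cloud_part(activation)

x = torch.randn(32, 128)                     # a batch of device measurements
prediction = cloud_forward(edge_forward(x))  # only the activation leaves the edge
print(prediction.shape)                      # torch.Size([32, 10])
```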

High-speed fibre connects cloud data centres with edge computing nodes, while 5G wireless links connect edge nodes to end devices. Frictionless connectivity between these tiers is crucial so that AI models can be partitioned and run seamlessly across the network. In this architecture, for example, an AI system might analyse data from a smart device locally, send intermediate results to an edge server for aggregation or refinement, and only transmit the necessary insights to a cloud service. By processing data at the source and at intermediate points, edge AI avoids unnecessary backhaul of raw data – “instead of bringing data to AI, it brings AI to the data,” as one telecoms-focused report puts it. This approach can streamline processes and unlock new possibilities, from real-time video analytics to autonomous vehicles, by using compute at the optimal location for each task.
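
A hedged sketch of that tiered flow follows; the threshold, payload shapes and tier boundaries are all assumptions for illustration. Raw readings are filtered on the device, summarised at the edge and only the distilled insight reaches the cloud.

```python
# Illustrative three-tier pipeline: device -> telco edge -> cloud.
# Threshold and payload format are assumptions, not a real protocol.
from statistics import mean

def device_tier(readings: list[float], threshold: float = 0.8) -> list[float]:
    """Pre-filter on the device: forward only readings worth inspecting."""
    return [r for r in readings if r > threshold]

def edge_tier(filtered: list[float]) -> dict:
    """Aggregate and refine at the near edge (e.g. a telco MEC node)."""
    if not filtered:
        return {"alert": False}
    return {"alert": True, "count": len(filtered), "mean": round(mean(filtered), 3)}

def cloud_tier(insight: dict) -> None:
    """The cloud receives only the distilled insight, never the raw stream."""
    if insight.get("alert"):
        print(f"cloud: anomaly summary received -> {insight}")

raw_stream = [0.1, 0.95, 0.2, 0.87, 0.3]   # raw sensor data stays local
cloud_tier(edge_tier(device_tier(raw_stream)))
```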

For telecommunications operators, deploying AI at the network edge provides multiple advantages, both technical and strategic. A GSMA industry briefing highlights several key attributes that distributed edge inference brings to telcos’ value proposition:

  • Reduced latency: Running inference closer to the end user (on edge servers or devices) cuts round-trip delay and speeds up AI responses. This is critical for time-sensitive use cases like autonomous drones or augmented reality, where milliseconds matter. For instance, an AI model can detect anomalies or make decisions locally without the delay of cloud communication.
  • Data sovereignty and security: Keeping data and computations on local networks or on-premises edge nodes helps ensure sensitive data is not continuously sent off-site. Industries with strict data regulations (healthcare, finance, government) can process data within country or enterprise boundaries, using telco-provided edge clouds to maintain compliance.
  • Deterministic performance: With dedicated edge resources and localised processing, service quality can be more consistent and predictable (deterministic) for critical applications. Unlike best-effort public cloud paths, a managed edge can guarantee bandwidth and jitter characteristics for AI workloads, which is important for reliable automation (such as in industrial control systems).
  • Cost savings: Distributing inference can lower costs by reducing cloud compute hours and bandwidth usage. Rather than continuously streaming high-volume raw data to a central cloud (and incurring cloud egress fees), companies can filter and analyse data on-site. Telcos also cite potential savings by offloading some processing from expensive cloud GPUs to more cost-effective edge infrastructure. A rough numerical sketch of this effect follows this list.
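
To give a rough sense of the cost argument above, the back-of-envelope sketch below compares streaming raw camera feeds to a central cloud against forwarding only edge-flagged events. Every figure in it (bitrate, camera count, retained fraction, per-gigabyte transfer price) is an assumption for illustration, not a quoted rate.

```python
# Back-of-envelope comparison: cloud-only streaming vs edge filtering.
# All inputs below are assumed figures, not vendor pricing.
CAMERA_MBPS = 4.0           # one HD video stream
CAMERAS = 50
SECONDS_PER_MONTH = 30 * 24 * 3600
TRANSFER_USD_PER_GB = 0.09  # assumed flat per-GB transfer price
KEPT_FRACTION = 0.02        # edge model forwards ~2% of frames (events only)

raw_gb = CAMERA_MBPS * CAMERAS * SECONDS_PER_MONTH / 8 / 1000  # Mbit -> GB
filtered_gb = raw_gb * KEPT_FRACTION

print(f"cloud-only upload: {raw_gb:,.0f} GB/month (~${raw_gb * TRANSFER_USD_PER_GB:,.0f})")
print(f"edge-filtered:     {filtered_gb:,.0f} GB/month (~${filtered_gb * TRANSFER_USD_PER_GB:,.0f})")
```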

These benefits explain why telcos and enterprises are keen on edge AI. By bringing compute closer to data, organisations can achieve faster insights and alleviate strain on networks and cloud systems. However, to fully realise these gains, the connectivity between the distributed AI components must be extremely robust – essentially frictionless. 

Frictionless connectivity

For distributed AI to work effectively, all the participating nodes (devices, edge servers, cloud) need to communicate as if they were part of one cohesive system. Any lag, bandwidth bottleneck or unreliability in the network link becomes “friction” that can slow down or disrupt AI workloads. Thus, frictionless connectivity, characterised by ultra-low latency, high throughput, reliability and dynamic flexibility, is the key enabler for seamless processing of AI workloads across locations.

On the wired infrastructure side, one major requirement is high-capacity fibre connectivity to link data centres and edge sites. AI workloads (especially training of models or handling of large datasets) involve moving enormous amounts of data. Telcos have long provided fibre backbones for internet traffic, and now this fibre is crucial for interconnecting the new breed of AI compute hubs. Analysts note that connecting data centres via fibre is essential to scale AI workloads for customers. Many cloud providers (hyperscalers) are expanding their data centre footprints, including in new geographic areas, and they often rely on telcos to provide fibre routes between these facilities. In fact, in regions where regulatory barriers prevent hyperscalers from laying fibre themselves, telcos have an advantage and a clear opportunity to supply this connectivity. For example, Verizon has struck agreements with companies like Google Cloud and Meta to provide network infrastructure for their AI workloads, effectively leasing out its high-speed fibre capacity to support hyperscalers’ distributed AI needs. This illustrates how telcos’ networks form the backbone of the AI economy – without fast links, even the most advanced AI data centres would remain siloed and underutilised.

Beyond raw bandwidth, intelligent networking services are emerging as another facet of frictionless connectivity. Because AI applications can have dynamic and specialised network requirements (for example, a burst of traffic when aggregating edge sensor data, or strict latency limits for an inference query), traditional static networks may not suffice. Telcos are developing more adaptive, software-defined networks to give enterprises greater control and automation in how AI traffic is routed. For instance, network slicing in 5G allows carving out a dedicated slice of the mobile network with guaranteed bandwidth and latency for a particular AI application. An AWS telecommunications blog describes how 5G network slices can be used to provide ultra-reliable low-latency communication (URLLC) with sub-10ms latency for safety-critical AI tasks (such as smart city intersection monitoring). These slices isolate and optimise network resources for the AI workload’s needs – essentially removing the friction of contention or unpredictability on the public network.
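
A sketch of what such a slice request might look like is given below. The provisioning endpoint and payload schema are hypothetical stand-ins for an operator's exposure API; only the S-NSSAI convention (Slice/Service Type 2 denoting URLLC) comes from the 3GPP specifications.

```python
# Hypothetical request for a URLLC slice serving an AI inference workload.
# Endpoint and schema are invented for illustration; SST 2 = URLLC per 3GPP.
import json
import urllib.request

slice_request = {
    "s_nssai": {"sst": 2},            # Slice/Service Type 2: URLLC
    "latency_budget_ms": 10,          # sub-10 ms target cited above
    "guaranteed_dl_mbps": 50,         # assumed capacity for the video feed
    "coverage_area": "intersection-cams-zone-7",  # hypothetical identifier
}

req = urllib.request.Request(
    "https://naas.example-operator.net/v1/slices",  # hypothetical endpoint
    data=json.dumps(slice_request).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # would submit the request in a live system
print(json.dumps(slice_request, indent=2))
```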

Telcos are also introducing on-demand connectivity platforms that make it easier to spin up and manage network links between distributed AI components. A notable example is Lumen’s ExaSwitch service, which gives customers a self-service portal to connect their edge sites, data centres and cloud onramps and to dynamically route traffic as needed. This kind of service uses software-defined networking to let an enterprise (or an AI application itself) request additional network capacity or re-route traffic in real time. Such flexibility is particularly useful as AI workloads scale – for instance, if a machine learning training job suddenly needs to utilise extra cloud GPUs in another region, the network can respond by provisioning a high-bandwidth path on the fly. Intelligent network services can also help control costs by routing data in ways that minimise cloud egress fees or comply with data residency requirements. In short, the trend is toward “adaptive connectivity” – networks that autonomously configure themselves to meet the specific demands of AI workloads, thereby minimising human intervention and friction. This evolution is still in its early stages, and telcos are experimenting with different approaches, such as software platforms and partnerships with cloud providers, to deliver these advanced networking capabilities. But the direction is clear: seamless AI processing will require networks that are as agile and intelligent as the AI applications they serve.
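
As a sketch of that adaptive behaviour, the control loop below widens a link while a training burst saturates it and narrows it again afterwards. The telemetry values, scaling policy and provisioning call are all stand-ins, not any operator's documented API.

```python
# Illustrative "adaptive connectivity" loop: capacity follows demand.
def provision_capacity(link_id: str, gbps: int) -> None:
    """Stand-in for an on-demand bandwidth API call."""
    print(f"provisioning {gbps} Gbps on {link_id}")

def adapt_link(link_id: str, utilisation: float, current_gbps: int,
               high: float = 0.85, low: float = 0.30) -> int:
    """Double capacity when the link saturates; halve it (floor 10 Gbps)
    when the burst subsides."""
    if utilisation > high:
        current_gbps *= 2
        provision_capacity(link_id, current_gbps)
    elif utilisation < low and current_gbps > 10:
        current_gbps = max(10, current_gbps // 2)
        provision_capacity(link_id, current_gbps)
    return current_gbps

# Simulated utilisation samples during a distributed training job.
gbps = 10
for sample in [0.40, 0.90, 0.92, 0.88, 0.25]:
    gbps = adapt_link("dc-east--edge-site-12", sample, gbps)
```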

As AI systems continue to scale across devices, edge sites and cloud platforms, the networks that bind these layers together become just as critical as the compute itself. Frictionless connectivity – anchored in high-capacity fibre, intelligent software-defined networking and low-latency 5G – ensures that distributed AI behaves as a unified, responsive whole. For telecommunications operators, this represents both a technical imperative and a strategic opening: by providing the adaptive, high-performance connectivity that modern AI workloads demand, telcos can establish themselves as indispensable partners in the AI economy. Ultimately, the future of AI will not be defined by compute alone, but by the seamless integration of compute and connectivity that enables AI to run everywhere, all the time.

Marion Webber

Collaborator