Distributed artificial intelligence (DAI) relocates model inference and data processing from centralised clouds to distributed edges – on premises, inside metro mobile-edge sites and sometimes on devices – so applications can meet strict service level objectives for latency, jitter and availability.
This architectural shift elevates connectivity from a best-effort cost centre to a mission-critical substrate whose performance and resilience directly shape AI outcomes. The trend is visible in operator programmes including Verizon’s collaboration with AWS Wavelength, AT&T’s satellite video calls with AST SpaceMobile, Eutelsat OneWeb’s 5G non-terrestrial network (NTN) trials, Telefónica’s Open Gateway “Quality on Demand,” DOCOMO’s slice-level QoS/QoE work and Telstra’s national LEO backhaul rollout.
Why distributed AI changes the connectivity calculus
DAI workloads are unusually sensitive not only to absolute latency but to its variance, which degrades robotics control loops, real-time video analytics and interactive perception tasks. Standardisation work in 3GPP Release-18 responds by strengthening AI and machine learning (ML) support in NG-RAN, together with self-organising network (SON), minimisation of drive tests (MDT) and quality-of-experience (QoE) features that reduce such variance through better observability and control.
These Release-18 capabilities supply the data and control hooks that closed-loop automation needs to anticipate congestion, right-size resources and remediate impairments before they affect application-level service objectives.
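To make the variance point concrete, the sketch below (illustrative only: the sample values, percentile budget and jitter budget are invented) shows why two paths with similar average latency can still differ sharply in whether they meet a DAI service objective.

```python
import statistics

def evaluate_slo(samples_ms, p99_budget_ms, jitter_budget_ms):
    """Check a latency sample set against hypothetical DAI SLO thresholds."""
    ordered = sorted(samples_ms)
    p99 = ordered[int(0.99 * (len(ordered) - 1))]   # 99th-percentile latency
    jitter = statistics.pstdev(samples_ms)          # variance proxy: standard deviation
    return {"p99_ms": p99, "jitter_ms": round(jitter, 2),
            "meets_slo": p99 <= p99_budget_ms and jitter <= jitter_budget_ms}

# Two paths with similar means but very different variance (made-up values)
steady_path  = [12, 13, 12, 14, 13, 12, 13, 14, 12, 13]
jittery_path = [5, 30, 6, 28, 7, 31, 5, 29, 6, 30]

print(evaluate_slo(steady_path,  p99_budget_ms=20, jitter_budget_ms=5))
print(evaluate_slo(jittery_path, p99_budget_ms=20, jitter_budget_ms=5))
```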
To make these requirements operational, the connectivity problem can be organised into five mutually reinforcing pillars that are both standards-aligned and field-proven by leading operators.
Pillar 1: Path diversity, including a sky path
A resilient DAI connectivity posture begins with truly independent last-mile options – typically 5G standalone (public and/or private) combined with fixed access such as fibre or enterprise Wi-Fi – so that critical flows can survive local faults and maintenance windows. Increasingly, it also adds a non-terrestrial augmentation using low earth orbit (LEO) constellations for additional continuity.
In February 2025, AT&T and AST SpaceMobile completed a video call carried over AT&T spectrum via AST’s BlueBird satellites, demonstrating a direct-to-device augmentation path relevant to field operations and public safety.
In the same week, Eutelsat Group (OneWeb), Airbus and MediaTek announced the first 5G NTN connection from space over a commercial LEO fleet, marking a milestone for standardised satellite-terrestrial interoperability in 5G systems.
On the backhaul side, Telstra and Eutelsat OneWeb began what they describe as the world’s largest LEO backhaul deployment across Australia to reinforce coverage and maintain service continuity in sparsely fibred regions.
Pillar 2: Deterministic experience via programmable quality
DAI traffic is bursty and session-critical, which implies that networks must expose programmable quality so applications can request and later release elevated treatment during inference spikes without renegotiating sessions.
Telefónica’s Open Gateway implementation of the CAMARA “Quality on Demand” (QoD) API exemplifies this trend by allowing developers to set session-level quality parameters, and a production case with Cinfo reported fewer freezes and higher resolution during AI-assisted live video when QoD was engaged.
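As a sketch of what “asking the network” can look like in practice, the snippet below shows a hypothetical client for a CAMARA-style QoD session; the base URL, endpoint path, field names, profile identifier and token handling are assumptions standing in for whichever Open Gateway exposure an operator actually publishes.

```python
import requests

# Placeholder values: the base URL, token, endpoint path and field names are
# illustrative stand-ins for an operator's Open Gateway / CAMARA QoD exposure.
QOD_BASE = "https://api.example-operator.com/qod/v1"
HEADERS = {"Authorization": "Bearer <access-token>", "Content-Type": "application/json"}

def request_boost(device_ip, app_server_ip, profile="QOS_E", duration_s=600):
    """Ask the network for elevated treatment during an inference spike."""
    body = {
        "device": {"ipv4Address": {"publicAddress": device_ip}},
        "applicationServer": {"ipv4Address": app_server_ip},
        "qosProfile": profile,      # hypothetical low-latency profile name
        "duration": duration_s,     # seconds of elevated treatment
    }
    resp = requests.post(f"{QOD_BASE}/sessions", json=body, headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()["sessionId"]  # keep the id so the boost can be released

def release_boost(session_id):
    """Relax quality once the spike has passed, freeing network resources."""
    requests.delete(f"{QOD_BASE}/sessions/{session_id}", headers=HEADERS, timeout=10).raise_for_status()
```

The important property is the lifecycle: elevated treatment is requested for an inference spike and explicitly released afterwards, so quality is consumed as a scoped, revocable resource rather than a permanent configuration.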
The CAMARA consortium’s Spring 2025 meta-release broadened and stabilised a large set of open network APIs, advancing multi-operator consistency in how applications “ask the network” for resources across markets.
Pillar 3: Locality – placing inference inside the network
Locating model serving as close as possible to packet sources makes latency budgets more predictable and reduces exposure to internet hairpins; Verizon’s 5G Edge with AWS Wavelength provides compute zones inside the mobile network so latency-sensitive components can be anchored in-network while still consuming regional cloud services as needed.
This in-network placement has repeatedly demonstrated ultra-low access latency for suitable workloads in U.S. metros and is a canonical design for vision, robotics and interactive analytics that cannot tolerate jitter.
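A simple way to make the locality argument measurable is to probe both placements from the device side; the sketch below uses TCP connect time as a crude round-trip proxy, and both hostnames are placeholders for an in-network MEC endpoint and a regional cloud endpoint.

```python
import socket
import statistics
import time

def tcp_rtt_ms(host, port=443, samples=20):
    """Crude RTT probe: time TCP connects to an endpoint (placeholder hosts below)."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            rtts.append((time.perf_counter() - start) * 1000)
    return {"p50_ms": round(statistics.median(rtts), 1),
            "p95_ms": round(sorted(rtts)[int(0.95 * (samples - 1))], 1)}

# Hypothetical endpoints: one anchored in an in-network MEC zone, one in a regional cloud.
print("in-network edge:", tcp_rtt_ms("inference.edge.example.net"))
print("regional cloud: ", tcp_rtt_ms("inference.region.example.net"))
```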
Pillar 4: Session continuity across accesses (ATSSS)
For DAI, multiple access paths are necessary but insufficient unless flows can move seamlessly between them; 3GPP’s Access Traffic Steering, Switching and Splitting (ATSSS), specified at Stage 3 in TS 24.193 with system architecture in TS 23.501, enables per-flow steering between 3GPP and non-3GPP access and even allows flow splitting when appropriate.
In practice, ATSSS means live inference streams can continue without session resets when a 5G leg degrades and traffic must move to Wi-Fi or vice versa, or when a link fails and sub-second failover is required.
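The following is a deliberately simplified, application-level illustration of the steering idea that ATSSS standardises inside the UE and UPF (the probe values, budget and flow name are invented): measure each access leg, move the flow to the healthier leg, and split only when both legs are within budget.

```python
import random
import time

def probe(leg):
    """Stand-in for per-leg performance measurement (simulated RTT in ms)."""
    return {"5g": random.gauss(18, 3), "wifi": random.gauss(25, 10)}[leg]

def steer(flow, rtt_budget_ms=30):
    """Toy per-flow decision: split when both legs are healthy, otherwise steer to the best leg."""
    rtt_5g, rtt_wifi = probe("5g"), probe("wifi")
    if rtt_5g <= rtt_budget_ms and rtt_wifi <= rtt_budget_ms:
        return f"{flow}: split across 5G and Wi-Fi"   # both legs healthy
    best = "5G" if rtt_5g < rtt_wifi else "Wi-Fi"
    return f"{flow}: steer to {best}"                 # switch without a session reset

for _ in range(3):
    print(steer("inference-stream"))
    time.sleep(0.1)
```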
Pillar 5: Autonomous observability and self-healing
As DAI usage scales, manual interventions are too slow to preserve application-level SLOs; Release-18’s AI/ML features for NG-RAN and related management enhancements provide the telemetry and policy points that closed-loop automation requires to forecast congestion, right-size slices and reroute flows before users notice.
NTT DOCOMO’s demonstrations of slice-level QoS/QoE scoring in a 5G standalone environment indicate how operators can expose assurance that is intelligible to application owners and enforceable by policy.
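A minimal sketch of the closed loop, assuming invented telemetry values, thresholds and action names: utilisation samples for a slice are trended one step forward, and remediation fires before the forecast crosses the SLO threshold rather than after users notice.

```python
from collections import deque

WINDOW = deque(maxlen=12)   # e.g. the last 12 one-minute utilisation samples (0-1)

def remediate(slice_id):
    """Placeholder for policy actions: scale the slice, reroute flows, raise QoD."""
    print(f"[action] pre-emptive remediation triggered for slice {slice_id}")

def on_sample(slice_id, utilisation, slo_threshold=0.8):
    """Add a telemetry sample, forecast one step ahead with a crude trend, act if needed."""
    WINDOW.append(utilisation)
    if len(WINDOW) < 2:
        return
    trend = WINDOW[-1] - WINDOW[0]                     # crude linear trend over the window
    forecast = WINDOW[-1] + trend / (len(WINDOW) - 1)  # one step ahead
    if forecast > slo_threshold:
        remediate(slice_id)

for u in [0.55, 0.60, 0.66, 0.71, 0.77, 0.83]:         # simulated rising load
    on_sample("urllc-video-01", u)
```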
Evidence from leading operators
Public MEC efforts by Verizon and AWS show that in-network edge locations can be productised across many metros, allowing enterprises to bind latency-critical components to the radio edge while integrating with regional cloud footprints.
Vodafone and Microsoft’s ten-year partnership signals that operators are pairing GenAI platforms and modern cloud underlays with programmable connectivity to accelerate customer experience, IoT and internal operations, a strategic alignment likely to matter as Open Gateway-style APIs spread across markets.
Telefónica’s QoD deployments, AT&T’s direct-to-cell satellite calls, Eutelsat OneWeb’s 5G NTN trial, and Telstra’s LEO backhaul programme together suggest a near-term path to combining terrestrial 5G, satellite augmentations and standardised APIs into coherent offers that enterprise developers can reliably target.
Implications for evaluation and procurement
An evaluation framework for DAI-grade connectivity should confirm that access diversity is real (independent failure domains), measured (documented failover time on critical flows), and programmable (policy-driven use of ATSSS to steer, switch, or split traffic as conditions evolve).
It should verify that quality is consumable as an API by checking Open Gateway/CAMARA QoD availability in relevant markets and by requiring evidence that application sessions can raise and relax quality in-flight without disruption.
It should also make locality and assurance explicit in offers by asking operators to publish MEC presence and latency demarcations (device-to-edge and edge-to-region) and to provide slice-aware QoS/QoE dashboards with export into customer observability stacks; where appropriate, satellite augmentation plans (spectrum, handover behaviour and policy for prioritised AI flows) should be included.
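One way to keep such an evaluation auditable is to encode it as a structured checklist; the sketch below mirrors the criteria above, but the field names, scoring rule and example values are assumptions rather than an industry template.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DaiConnectivityAssessment:
    independent_failure_domains: bool       # access diversity is real
    measured_failover_ms: Optional[float]   # documented failover time on critical flows
    atsss_policy_control: bool              # programmable steer/switch/split
    qod_api_available: bool                 # Open Gateway / CAMARA QoD in relevant markets
    in_flight_quality_changes: bool         # sessions can raise/relax quality without disruption
    mec_latency_demarcations: bool          # published device-to-edge and edge-to-region figures
    slice_qoe_dashboards: bool              # exportable into customer observability stacks
    satellite_augmentation_plan: bool       # spectrum, handover and priority policy for AI flows

    def gaps(self):
        """Return the criteria an offer fails to evidence."""
        return [name for name, value in vars(self).items() if not value]

# Example scoring of a hypothetical operator offer
offer = DaiConnectivityAssessment(True, 380.0, True, True, False, True, True, False)
print("gaps to close before award:", offer.gaps())
```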
A pragmatic path to first proof
Organisations new to DAI at the edge can achieve meaningful proof in a quarter by selecting two lighthouse use cases with explicit latency and jitter SLOs; deploying a pilot that combines in-network MEC with QoD in two or three cities; enabling ATSSS and slice-level monitoring so that in-session quality requests and path steering can be exercised under load; and running structured failure-mode drills – fibre cuts, RAN congestion and power events – while demonstrating continuity via alternate access and, where available, satellite NTN.
Finally, teams should codify API playbooks that specify when to request and release QoD and automate these actions in CI/CD pipelines so the behaviour is repeatable across releases.
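A minimal sketch of such a playbook step, assuming hypothetical helper names and placeholder test logic: the quality window is opened for the duration of a pipeline stage and always released, even if the stage fails.

```python
import contextlib

# Stand-ins for whatever client wraps the operator's QoD API (see the Pillar 2 sketch).
def request_boost(device_ip, app_server_ip, duration_s):
    print(f"QoD session requested for {device_ip} -> {app_server_ip} ({duration_s}s)")
    return "session-123"

def release_boost(session_id):
    print(f"QoD session {session_id} released")

@contextlib.contextmanager
def qod_window(device_ip, app_server_ip, duration_s=900):
    """Raise quality for the duration of a pipeline stage, then always release it."""
    session_id = request_boost(device_ip, app_server_ip, duration_s)
    try:
        yield session_id
    finally:
        release_boost(session_id)

def release_gate():
    with qod_window("198.51.100.20", "203.0.113.7"):
        print("running latency regression suite...")   # placeholder for the team's SLO tests

release_gate()
```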
As operators lean into distributed AI, the connectivity choices that matter most are the quiet ones – independent paths that fail over without fuss, in-network edges that shorten control loops, policies that elevate or relax quality on demand and standards that keep it all interoperable. When composed carefully, these capabilities recede into the background, and what remains is the perception of dependable intelligence: services that evolve at their own pace, customer experiences that remain steady even in adverse conditions and a network that adapts as naturally as the models it serves.