As artificial intelligence becomes increasingly distributed across cloud, edge and network environments, the role of connectivity is being fundamentally redefined. Telecoms operators are no longer just carriers of data; they are emerging as critical enablers of AI infrastructure, uniquely positioned to combine high-performance networks with localised computing power. Telcos are evolving to support distributed AI through edge computing, GPU-as-a-service and AI-enabled networks. At the same time, they are forming real-world partnerships, developing new business models and addressing the strategic challenges these bring. Together, these developments illustrate how frictionless connectivity is becoming the foundation on which the next generation of AI services will be built.
The telco’s role
Telecoms operators recognise that they can be more than just connectivity providers in the AI era – they can become key enablers of AI infrastructure. With their widespread networks and real estate, telcos are in a unique position to host distributed compute and offer services that support AI workloads. This is leading to new strategies where telcos provide not only the network pipes, but also computing power at the edge and even AI platforms. A 2025 McKinsey report identified multiple avenues for telcos in the AI value chain, from building fibre connectivity for cloud players to offering GPU-based computing services.
One immediate area of focus is edge computing deployments in partnership with cloud and AI companies. Telcos have been rolling out MEC (edge cloud) nodes that reside on their network premises, often in collaboration with hyperscalers (like AWS, Microsoft and Google) or infrastructure providers. These edge nodes bring cloud capabilities into the telecoms network, allowing AI applications to run locally and take advantage of low latency to the end user.
For example, Verizon’s Mobile Edge Compute partnership with AWS is an oft-cited case: Verizon provides the physical space, power and network integration at 5G edge sites, while AWS brings in its compute and storage hardware and cloud management platform. This hybrid model lets enterprise customers deploy applications that need real-time AI inferencing – such as video analytics, VR/AR or industrial automation – on AWS Outposts located within Verizon’s network footprint.
Verizon’s role isn’t just passive; it actively ensures the AI workloads run optimally over its 5G network. By colocating compute with connectivity, such arrangements eliminate transit delays (data doesn’t have to travel to a distant cloud) and keep sensitive data on the local network. Other telcos have pursued similar MEC tie-ups: for instance, AT&T with Microsoft Azure Edge Zones, and Telefónica with Google Cloud’s Distributed Edge, to name a few. The common theme is that telcos provide the frictionless connectivity and local hosting, while cloud partners provide the AI software stack – together enabling new services for enterprises.
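The latency advantage of colocating compute with connectivity comes down to simple physics: signals in optical fibre travel roughly 200 km per millisecond, and every switching hop adds delay. The sketch below makes that budget concrete; the distances and per-hop figures are illustrative round numbers, not measurements from any operator’s network.

```python
# Illustrative latency-budget comparison: AI inference at a metro edge
# node vs. a distant cloud region. All figures are assumed round numbers,
# not measurements from any real network.

SPEED_IN_FIBRE_KM_PER_MS = 200  # light in optical fibre covers ~200 km/ms

def round_trip_ms(distance_km: float, hops: int, per_hop_ms: float = 0.5) -> float:
    """Propagation delay there and back, plus switching/queuing per hop."""
    propagation = 2 * distance_km / SPEED_IN_FIBRE_KM_PER_MS
    return propagation + hops * per_hop_ms

edge = round_trip_ms(distance_km=20, hops=2)      # MEC node in the metro area
cloud = round_trip_ms(distance_km=2000, hops=10)  # distant cloud region

print(f"edge round trip:  {edge:.1f} ms")   # ~1.2 ms
print(f"cloud round trip: {cloud:.1f} ms")  # ~25.0 ms
```

Even with generous assumptions, the distant cloud consumes most of a sub-10 ms interactive budget on transit alone, which is why real-time video analytics and AR/VR workloads gravitate to the network edge.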
Telcos are also starting to offer AI compute as a service directly. Rather than ceding all AI cloud business to the hyperscalers, some operators are using their infrastructure to launch GPU-powered services for their customers. These offerings, sometimes called GPU-as-a-Service (GPUaaS) or AI-as-a-service, allow companies to rent access to GPUs hosted in telco data centres or edge sites, typically on a pay-as-you-go model. For example, Singtel (Singapore) announced in 2024 that it will introduce a GPU-as-a-service platform powered by NVIDIA’s accelerated computing, letting local businesses run AI workloads with guaranteed data residency in-country. Similarly, Japan’s SoftBank and Indonesia’s Indosat Ooredoo have launched AI compute services in collaboration with NVIDIA, aiming to position themselves as domestic hubs for AI processing.
In these models, telcos are effectively building mini-“AI clouds” utilising their existing facilities and spectrum assets. They not only provide the servers and GPUs but also the high-speed connectivity between those resources and the end users. The business rationale is clear: global demand for cloud AI compute is booming, and telcos see an opportunity to capture a share of that market by offering alternatives focused on privacy, locality or integrated network benefits. Industry projections suggest that the addressable market for telco-delivered GPU cloud services could reach US$35–70 billion globally by 2030, driven especially by regions where data sovereignty is critical or where hyperscaler presence is limited.
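The pay-as-you-go model described above is straightforward to express: customers typically commit to a block of GPU-hours at a discounted rate and pay list price for any overage. A minimal sketch follows; the rates and tiers are hypothetical and do not reflect any operator’s actual pricing.

```python
# Minimal sketch of a pay-as-you-go GPU-as-a-Service billing calculation.
# Rates, commitment tiers and figures are hypothetical examples only.

from dataclasses import dataclass

@dataclass
class GpuUsage:
    gpu_hours: float             # total GPU-hours consumed in the period
    rate_per_hour: float         # list price per GPU-hour
    committed_hours: float = 0.0 # hours covered by a discounted commitment
    committed_rate: float = 0.0  # discounted price per committed hour

def monthly_bill(u: GpuUsage) -> float:
    """Bill committed hours at the discounted rate; overage at list price."""
    committed = min(u.gpu_hours, u.committed_hours) * u.committed_rate
    overage = max(u.gpu_hours - u.committed_hours, 0.0) * u.rate_per_hour
    return committed + overage

# A team that committed to 500 GPU-hours but used 720 in a month:
bill = monthly_bill(GpuUsage(gpu_hours=720, rate_per_hour=2.50,
                             committed_hours=500, committed_rate=1.80))
print(f"monthly bill: ${bill:,.2f}")  # $1,450.00
```

The design point for telcos is that, unlike pure pricing plays, the bundled offer can also include guarantees the hyperscalers cannot easily match: in-country data residency and dedicated network paths between the customer and the GPUs.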
Early adopters like Verizon have opted to partner with established GPU cloud providers (Verizon in fact unveiled an AI strategy in early 2025 that includes partnering with specialist AI compute firms), while others like Telenor and Swisscom in Europe are building sovereign AI cloud platforms to serve government and enterprise needs locally. Telenor, for instance, announced a collaboration with NVIDIA to support its AI-first ambitions and is launching a regional AI cloud (dubbed an AI Factory) to ensure Nordic businesses can access advanced AI compute within national borders. These moves illustrate telcos’ evolution into AI infrastructure partners – providing both the connectivity and the computing environment tailored to AI applications.
Another frontier where telecoms and AI intersect is within the network itself. Operators are exploring how to imbue their network operations with AI or even share network hardware for AI processing. A notable concept here is AI in the radio access network (RAN). In early 2024, several telecoms and tech players formed the AI-RAN Alliance to investigate replacing traditional baseband units at cell towers with AI-enabled, GPU-based units. The idea is that the same distributed compute installed to handle RAN functions (like signal processing for 5G) could simultaneously run AI workloads at the cell site. In preliminary tests, using GPU-based baseband hardware improved network performance and efficiency while also opening the door to hosting AI inference tasks right at the network edge.
Operators such as SoftBank (Japan) and T-Mobile (USA) have begun trials to complement their centralised data centres with distributed GPUs across their mobile network. In practice, this could mean that a telecoms site in a city could serve as both a 5G base station and a micro AI data centre, offering ultra-low latency compute to nearby users or devices. If successful, this architecture would effectively blur the lines between communication infrastructure and computing infrastructure, embedding AI capabilities deep into the connectivity fabric. While still experimental, it highlights the ultimate vision of frictionless connectivity: a network that not only carries data but actively processes and responds to data in real time.
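The AI-RAN concept hinges on one scheduling principle: latency-critical RAN signal processing must always be served first, with AI inference jobs admitted only into the leftover capacity, which is largest when the cell is quiet. No standard scheduler has been published for this, so the sketch below is purely illustrative; the capacity model, units and figures are hypothetical.

```python
# Illustrative sketch of AI-RAN capacity sharing: a cell-site GPU reserves
# capacity for RAN signal processing first, then admits AI inference jobs
# into the remainder. The model and all numbers are hypothetical.

def allocate_gpu(total_tops: float, ran_demand_tops: float,
                 ai_jobs: list[float]) -> tuple[float, list[float]]:
    """Serve RAN demand first; greedily admit AI jobs into spare capacity."""
    ran_share = min(ran_demand_tops, total_tops)  # RAN always has priority
    spare = total_tops - ran_share
    admitted = []
    for job in ai_jobs:
        if job <= spare:          # admit only jobs that fit in what's left
            admitted.append(job)
            spare -= job
    return ran_share, admitted

# A quiet cell at night: RAN needs little, so most capacity hosts AI jobs.
ran, ai = allocate_gpu(total_tops=100, ran_demand_tops=30, ai_jobs=[40, 25, 20])
print(ran, ai)  # 30 [40, 25]
```

The economic appeal is exactly this utilisation story: radio traffic is bursty, so hardware sized for peak RAN load would otherwise sit idle much of the day, and monetising the spare cycles as edge AI compute improves the return on the same site investment.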
Recent examples from leading telcos and AI industries
To ground the discussion, here are some recent case studies and initiatives (2023–2025) where telecoms operators and AI technology firms have teamed up to deliver distributed AI solutions, illustrating the industry momentum:
- Verizon (USA) – 5G Edge AI for Enterprises: In December 2024, Verizon announced a collaboration with NVIDIA to integrate the NVIDIA AI Enterprise stack into Verizon’s private 5G networks and Mobile Edge Compute. This allows enterprise customers to run AI applications on-premises with ultra-low latency and high bandwidth via Verizon’s 5G Edge. The plug-and-play solution targets use cases like real-time computer vision, AR/VR and autonomous robots, highlighting benefits such as sub-10ms response times and keeping data local on a secure private network. Verizon’s engineers began demonstrating the system in early 2025, showing how industries can get frictionless AI processing at the edge without needing their own massive cloud infrastructure.
- Indosat Ooredoo (Indonesia) – AI Centre with NVIDIA: In April 2024, Indosat Ooredoo Hutchison, a leading Indonesian telco, partnered with NVIDIA to invest US$200 million in a new AI Centre in Indonesia. The initiative is aimed at building local AI computing capacity – likely offering GPU-as-a-service – to accelerate AI adoption in Indonesia’s public and private sectors. For NVIDIA, this extends its reach via a telco into an emerging market; for Indosat, it positions the company as a domestic AI infrastructure provider, using its connectivity and data centre assets to host AI workloads for customers.
- Singtel (Singapore) – GPU-as-a-Service Platform: Singtel, Singapore’s incumbent telecoms operator, announced in March 2024 that it will introduce a GPU-as-a-service offering in collaboration with NVIDIA. This service will provide on-demand access to NVIDIA GPU acceleration hosted in Singtel’s data centres, allowing Singaporean enterprises to train models or run AI inference with guaranteed low latency and data sovereignty. By tapping its Tier-1 network and data centre capabilities, Singtel can ensure frictionless connectivity between customers and this AI platform. The move also complements Singapore’s Smart Nation initiatives by providing local AI compute capacity backed by the telco’s reliable network.
- SoftBank (Japan) – AI-RAN and Distributed Compute Trials: SoftBank has been at the forefront of integrating AI into telecoms networks. In late 2024, SoftBank detailed plans (as part of the AI-RAN Alliance) to deploy GPU-powered base stations that can handle both 5G RAN processing and edge AI tasks. By equipping cell sites with NVIDIA GPUs, SoftBank aims to offer distributed AI computing across its mobile network, which could power applications like smart city analytics and V2X (vehicle-to-everything) communications with minimal latency. This approach also improves the 5G network itself, as AI algorithms can optimise radio resources in real time. SoftBank’s trial, along with a parallel initiative by T-Mobile US, is a case study in converging connectivity and computing in the telecoms domain.
- Telenor (Norway) – Sovereign AI Cloud for the Nordics: Telenor, a major telecoms company in Scandinavia, announced a collaboration with NVIDIA in 2024 to build a Sovereign AI cloud infrastructure for Norway and the broader Nordic region. The aim is to support national AI ambitions by providing local, secure AI computing hosted by Telenor. This includes establishing AI compute centres (an AI Factory) and using Telenor’s network to serve government and enterprise customers that need to keep data and AI processing within country borders. The effort underscores how telcos can partner with AI leaders to create region-specific solutions that combine world-class technology with trusted connectivity and compliance with local regulations.
Each of the above examples demonstrates a common theme: partnership between telecoms operators and AI technology providers to deliver distributed AI solutions. Whether it’s integrating cloud AI software into the network edge (Verizon/NVIDIA), co-investing in AI data centres (Indosat/NVIDIA), launching telco-led AI cloud services (Singtel, Telenor), or redesigning network architecture to embed AI (SoftBank), the telco industry is actively embracing the role of AI enabler. These case studies also highlight the global nature of this trend – from North America to Asia to Europe, leading operators are pursuing strategies to make connectivity and computing seamless for their customers.
Challenges and future outlook
While the promise of distributed AI with frictionless connectivity is considerable, there are significant challenges and open questions that telecoms and enterprise leaders must navigate. One challenge is market uncertainty – it is not yet fully clear which AI use cases will demand distributed setups at scale, and when. Telcos need to identify where the real demand for edge AI or specialised networking will come from: Will hyperscalers push more workloads out to the edge to overcome latency and bandwidth limits? Will enterprises require local AI compute for data governance reasons, or to ensure reliability as they adopt AI in critical operations? These questions of who will require distributed AI compute, and for what uses, remain only partially answered. As a result, operators must balance being prepared for future demand against over-investing in infrastructure that might sit underutilised.
Another challenge is the competitive landscape. The same opportunity that telcos see in AI is also attracting big cloud providers and a host of new entrants. Hyperscalers like Amazon, Microsoft and Google are extending their services to the edge (often directly partnering with telcos or, in some cases, deploying their own private networks), which can blur the lines over who owns the customer for edge AI services. If telcos move too slowly or fail to offer compelling solutions, they risk being relegated to commodity connectivity providers while others reap the value of AI services. Indeed, over the past decade, tech giants captured most of the revenue growth from exploding data and cloud services, while telco revenues stayed flat. To avoid a repeat with AI, telcos will need to be agile and innovative – drawing on their strengths (ubiquitous networks, local presence, operational experience) while possibly partnering in areas where they lack expertise (such as AI frameworks or cloud orchestration software).
There are also technical and operational hurdles. Ensuring truly frictionless connectivity for AI may require upgrades to network architecture (for example, implementing end-to-end slicing and QoS management across both 5G and fibre domains) and significant investment in automation. The intelligent network services discussed earlier are still evolving, and there is not yet a single agreed model for how best to implement and monetise them. Telcos must answer strategic questions about build versus partner – for instance, should an operator develop its own edge cloud and networking software stack, or rely on a partner’s platform? – and about business models – for example, should they charge premium fees for guaranteed AI-grade networking, or bundle it into broader enterprise offerings?
Developing the right go-to-market approach is tricky: customers might expect connectivity improvements as a given, whereas telcos would prefer to charge based on the value created (for example, improved AI application performance). Operators like BT have begun offering managed services (like combining networking with security in a single package) to add more value, and others might follow suit in packaging AI-oriented connectivity features.
Capital expenditure is another consideration. Some of the initiatives, especially building GPU farms or retrofitting sites with AI hardware, can be costly. The return on investment is not guaranteed, given uncertainties in demand and potential price competition (GPU cloud prices may drop if supply increases). Telcos, traditionally accustomed to investing in long-lived network gear, now have to consider the faster innovation cycle of IT and AI hardware. They may mitigate risk by starting with modest deployments and scaling up as anchor customers are secured. We see this cautious approach in examples like Telefónica, which built specialised AI hubs and is rolling out generative AI services gradually, or Verizon, which often tests new platforms with enterprise partners in pilot projects before wider release.
Despite these challenges, the outlook for AI and connectivity is largely positive. There is a strong consensus that AI will be a major driver of economic value in the coming decade, and networks will need to rise to the occasion. A recent PwC study noted that 75% of executives view AI as a key business advantage, and a majority of companies plan to expand AI use soon. 5G and next-generation networks are projected to enable trillions of dollars of economic output (US$12 trillion by 2035), with AI-enabled devices and applications being a core part of that transformation. These macro trends suggest that investment in AI-friendly infrastructure and frictionless connectivity will pay off by unlocking new services and efficiencies across industries like healthcare, manufacturing, transportation and smart cities.
In the near future, we can expect deeper collaborations between telcos, cloud providers and AI startups. For example, standards bodies and industry alliances might define common interfaces for edge AI services (making it easier to deploy an AI workload that spans a Verizon network and an AWS cloud, for instance). The AI-RAN Alliance and similar groups are early signs of cross-industry players aligning to ensure networks can natively support AI. Additionally, advances in technology such as satellite connectivity or 6G networks could further extend the reach of distributed AI to remote areas, with telcos orchestrating a mix of terrestrial and non-terrestrial networks for ubiquitous AI access.
Distributed AI has the potential to revolutionise how data is processed and insights are generated, but it fundamentally depends on the quality of the connectivity binding the distributed parts together. Frictionless connectivity – networks that are fast, flexible, secure and intelligent – is what allows an AI model split between a cloud server and an edge device to function as one seamless system.
Telecoms operators sit at this critical intersection of connectivity and computation. By investing in fibre optic links, advanced 5G capabilities, edge computing sites and partnerships with AI technology leaders, telcos are transforming from traditional carriers into stewards of the emerging AI ecosystem. The case studies from leading operators around the world demonstrate that this transformation is well underway, with tangible benefits in improved performance and new revenue streams.
For telecoms and AI executives and their technical teams, the imperative is clear: to unlock the full potential of AI, one must architect not just the AI algorithms, but also the nervous system that connects them. This means treating network infrastructure as an integral part of the AI strategy. Those who succeed in making connectivity truly frictionless will enable AI workloads to be placed optimally – whether in a cloud data centre, at a 5G edge hub, or on a device – without concern for delays or bottlenecks. In turn, this will lead to AI systems that are more responsive, reliable and capable of handling sensitive or massive datasets in a distributed fashion.
In summary, distributed AI and frictionless connectivity form a powerful combination poised to drive the next wave of digital innovation. Telcos, with their global networks and local presence, have a unique opportunity to be at the forefront of this wave. By continuing to innovate in network technology and collaborate with AI partners, they can ensure that the processing of AI workloads becomes as seamless as the connectivity that underpins it – to the benefit of businesses and society at large.
As we head toward an increasingly AI-driven future, the partnership between connectivity and intelligence will be a defining factor in shaping that future. The journey has begun, and it is an exciting time for both the telecoms and AI industries to jointly lead in this space.
Marion Webber