In the current parlance, ‘AI’ doesn’t mean ‘artificial intelligence’ anymore. Most use it as a synonym for Large Language Models (LLMs), and where another definition persists, it is generally an extension of the machine learning (ML) techniques long applied in automation and analytics functions.
At MWC, the former was much less aggressively present than in 2025, and there were clearer examples of the latter. Some exhibitors were still making plays for the ‘artificial intelligence’ angle, but mostly from the infrastructure perspective – building the vast computing arrays that they believe will power these next-generation workloads.
SK Telecom
SK Telecom was one of these firms, hosting a sparsely attended analyst roundtable to outline its AI data center (AIDC) strategy. Broadly, SKT plans to become a national infrastructure provider in South Korea, developing a portfolio of over a gigawatt (GW) of computing capacity. It already has eight data centers, and is developing two more – with one in Ulsan being built with AWS, which will serve as AWS’ first ‘AI Zone’.
Lee Jae-shin, Head of AI Business Development, explained that by 2030 SKT expects 40% of workloads (measured in GW) to be occupied by inferencing tasks. “We want to be providing the GPUaaS infrastructure, the AI model, and the application, not simply a co-located facility with just GPUaaS,” said Lee.
Jeong Min-young, Head of AI DC Solution, speaking via a live translator, explained how SKT was building its HAEIN GPU cluster from 1,032 Nvidia B200 Blackwell GPUs. “We have developed the software ourselves, with a full-stack AI option plus our clustering capabilities. We are aiming to transform from co-location to a full-stack AI computing business.”
“To optimize inferencing performance, we must optimize via the rack design, and employ liquid cooling and facility management, for integrated data center observability,” said Jeong.
On the topic of digital sovereignty, raised when discussing South Korean enterprise and government interest in the strategy, we asked if sovereignty to SKT meant using entirely South Korean hardware – aware that it had ties to the silicon startup Rebellions.
“It is of course not realistic to not use Nvidia GPUs,” said Jeong. “But we are a huge investor in Rebellions, and aim to mix and match NPUs with GPUs, managing the disaggregation via our software. To meaningfully do this from the sovereignty perspective is to invest in NPUs from Rebellions, and optimize our proprietary models for different hardware and software.”
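Jeong’s “mix and match” amounts to a software routing layer above heterogeneous silicon. A minimal, purely illustrative sketch of the idea – none of these names or pool sizes are SKT’s – might look like:

```python
# Illustrative sketch of software-managed NPU/GPU disaggregation:
# route each inference request to the backend the model has been
# optimised for, falling back to GPUs otherwise. All names invented.

BACKENDS = {
    "gpu": {"pool_size": 8},   # e.g. Nvidia GPUs for general models
    "npu": {"pool_size": 16},  # e.g. domestic NPUs for ported models
}

# Which proprietary models have been optimised for the NPU stack.
MODEL_PLACEMENT = {
    "sovereign-llm-small": "npu",
    "frontier-llm-large": "gpu",
}

def route(model_name: str) -> str:
    """Send a request to the NPU pool when the model supports it,
    defaulting to the GPU pool for everything else."""
    return MODEL_PLACEMENT.get(model_name, "gpu")
```

The sovereignty investment Jeong describes is, in this framing, the work of growing the `MODEL_PLACEMENT` table: each model ported to the domestic NPUs shifts workload off imported GPUs.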
SKT’s parent, SK Group, also owns SK Hynix, a significant memory manufacturer. “A bottleneck in AI is memory, and so we hope to address that with a scaled solution,” said Jeong. “Inferencing demand grew 100x last year, and will shift really quickly. For our investments, we need to target the maximum levels, not simply averages.”
Lee stressed the point that many AI startups still carry out their model training in the US, and that this undermines many digital sovereignty arguments from the off. SKT would like to host those training workloads instead.
So, do all telcos have to become inferencing providers? In SKT’s eyes, the answer is yes.
“More sovereign models means more inferencing workloads, and if you want to do Infrastructure-as-a-Service (IaaS), demand will grow exponentially. It is difficult to go step by step, and you need a phased approach. But, at the end, yes – telcos should reach that position,” said Jeong.
After the roundtable, SKT announced a partnership with Supermicro (now in quite a lot of hot water after the US DoJ charged executives with bypassing export restrictions on China) and Schneider Electric (pleasantly boring, and very French, in comparison), to create a “pre-fabricated modular AIDC model that integrates AI computing servers with power and cooling infrastructure into a single pre-manufactured module.”
“This building-block approach is designed to significantly shorten construction timelines, improve cost efficiency, and help alleviate supply bottlenecks compared to conventional data center construction methods. Under the agreement, SKT will contribute its AIDC operational expertise, Supermicro will provide high-performance GPU-optimized servers tailored to AI workloads, and Schneider Electric will deliver advanced mechanical, electrical, and plumbing (MEP) infrastructure capabilities to support large-scale AI deployments,” noted the release.
Mycom
The last time we checked in with Mycom, it was still going by the Mycom OSI moniker. It still provides software-based network automation and assurance products, after a gentle rebrand, but things have changed in recent years, with the introduction of AI, explained Chief Marketing Officer Sandeep Raina.
“AI has been a journey for the operators. It began being introduced into their operations around three years ago, with the focus on reducing costs and improving efficiency. The industry was right about that, but it took time for the initial use cases to mature, as the underlying AI technologies did. We moved from machine learning (ML) to natural language processing (NLP) to the performance improvements now enabled by LLMs,” said Raina.
“For ML, the use cases were always based around pattern analysis. NLP made it much easier to track and tie patterns into support tickets, but the next step is fixing these identified problems,” said Raina, pointing to the work being done in the TM Forum.
“In theory, that AI-based mediation is closed-loop automation, but this is only partly true. The LLMs are useful for anomaly detection, but the actual fixes in the early days were done by third-party orchestrators, which the network OEMs often did not like working with. The next step would be agent-to-agent (A2A) communication, where the anomaly detection agent can speak directly to the OEM’s agent,” said Raina.
“This would speed up fixes, and enable Level 4 automation. The MNOs would have to have their own Mycom agents, running inside their networks, keeping that data internally secure. They would have to do some checks on the A2A communication, in a semi-human-in-the-loop process, initially, but in a fully closed-loop system, network optimization and fixes would essentially work like a self-healing network. The MNOs wouldn’t see the problem – they wouldn’t know that it existed,” said Raina.
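The loop Raina describes – detection agent, OEM agent, and an operator checkpoint that can later be removed – can be sketched in a few lines. This is a hypothetical illustration, not Mycom’s or any OEM’s actual API; every class and action name here is invented.

```python
from dataclasses import dataclass

# Hypothetical sketch of the Level 4 automation loop: an assurance-side
# agent detects anomalies and proposes fixes, which the OEM-side agent
# applies, with an optional human approval gate in between.

@dataclass
class Anomaly:
    cell_id: str
    metric: str
    value: float
    threshold: float

@dataclass
class FixProposal:
    anomaly: Anomaly
    action: str

class AssuranceAgent:
    """Detects anomalies in telemetry and proposes remediations."""
    def detect(self, telemetry: dict, threshold: float) -> list:
        return [Anomaly(cell, "prb_utilisation", v, threshold)
                for cell, v in telemetry.items() if v > threshold]

    def propose(self, anomaly: Anomaly) -> FixProposal:
        return FixProposal(anomaly, action=f"rebalance_load:{anomaly.cell_id}")

class OEMAgent:
    """Stands in for the vendor-side agent that applies network changes."""
    def __init__(self):
        self.applied = []
    def apply(self, proposal: FixProposal) -> bool:
        self.applied.append(proposal.action)
        return True

def closed_loop(telemetry, threshold, assurance, oem, approve=lambda p: True):
    """Run one detect-propose-apply cycle. `approve` is the operator
    checkpoint: a review queue in semi-human-in-the-loop mode, or the
    always-true default in a fully closed, self-healing loop."""
    results = []
    for anomaly in assurance.detect(telemetry, threshold):
        proposal = assurance.propose(anomaly)
        if approve(proposal):
            results.append((proposal.action, oem.apply(proposal)))
    return results
```

The semi-human-in-the-loop phase Raina mentions corresponds to swapping the `approve` callback for one that parks proposals in a review queue; removing it entirely yields the self-healing behaviour where “the MNOs wouldn’t see the problem.”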
“Still, vendors want to be able to point to the fixes they have made, but these OEMs don’t want to actually run the networks for the MNOs,” noted Raina, adding that Anthropic’s Claude was the more popular choice, but that Mycom was essentially agnostic on the choice of LLM.
“MNOs are always talking money – of capex versus opex. Most of our products are about reducing opex, but what’s stopping them from using our software to do capacity reorganizations, to reduce capex? How quickly could they automate the entire stack, as a money-saving measure?” Raina posited.
Motive
Another Lumine Group asset, Motive was acquired in 2024 after Nokia carved out its device management assets – including the Service Management Platform, Home Device Management, Impact IoT, Impact Mobile, and iSIM Secure Connect.
Pedro Costa, SVP and GM for Service Management Platform, joined post-acquisition, but explained that Motive has been using ML since the Alcatel-Lucent days.
“It’s used by most customers, but after the Lumine acquisition, we have been introducing AI as the natural evolution for ML,” said Costa.
“We use the GenAI assistant to give service recommendations, and it’s like having a subject matter expert sat next to you. But now with the agentic approach, the chatbot can interact directly with the systems, and carry out predictive care too. We can use whatever customer data is available, analyze it, and look for patterns that lead to problems. That’s quite important to operations, now,” said Costa.
“The technology is agnostic to LLMs, and several customers want to explicitly use their own local LLMs. We can train those models, and are open to them, but essentially they are used for data protection. We will use synthetic data for training in these cases, but use the real existing device management workflows,” noted Costa.
“The EU AI Act means you have to be very strict on internal processes now, ensuring that you don’t touch customer data,” warned Costa. “But the majority of our customers are in favor of this approach.”
Motive has support for the now-open-sourced MCP interface, said Costa. “We can expose the legacy system using MCP, if needed, but AI-to-AI integrations are easier. We aim to be modular and open here,” he said.
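The pattern Costa describes – exposing a legacy system to agents through schema-described tools – can be sketched loosely in the spirit of MCP. To be clear, this is not the real MCP SDK and not Motive’s code; the device-management function and tool names are invented for illustration.

```python
import json

# Illustrative sketch of wrapping a legacy device-management call as a
# "tool" an LLM agent can discover and invoke, MCP-style. Not the real
# MCP SDK; all names here are hypothetical.

def legacy_reboot_device(device_id: str) -> dict:
    """Stand-in for an existing, pre-AI device-management API."""
    return {"device_id": device_id, "status": "reboot_scheduled"}

# Tool descriptors in the JSON-schema style MCP-like protocols use, so
# an agent knows what each tool does and what arguments it requires.
TOOLS = {
    "reboot_device": {
        "description": "Schedule a reboot of a managed CPE device.",
        "input_schema": {
            "type": "object",
            "properties": {"device_id": {"type": "string"}},
            "required": ["device_id"],
        },
        "handler": legacy_reboot_device,
    }
}

def call_tool(name: str, arguments: dict) -> str:
    """Dispatch an agent's tool call to the legacy system, validating
    required arguments and returning a JSON result either way."""
    tool = TOOLS[name]
    missing = [k for k in tool["input_schema"]["required"] if k not in arguments]
    if missing:
        return json.dumps({"error": f"missing arguments: {missing}"})
    return json.dumps(tool["handler"](**arguments))
```

The modularity Costa mentions falls out of the schema: the legacy handler never changes, and new tools are just new entries in the registry.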
“We’re on our first true AI deployments now, but are considering if we could become a platform that handles all the data – as a hosting service of sorts. We don’t see a reason why we can’t, but the worry is the data getting into the LLM, and so we use synthetic data to avoid that risk,” said Costa.
When asked about the operator enthusiasm for AI, Costa said that Motive sees a lot of AI projects that don’t succeed, yet the operators still want it. “But they do want to see it in a live environment before they invest, and there are still a lot of security questions, so we will do a PoC first to show off its capabilities.”
“But AI is a natural evolution of ML for us. Customers often see internal AI investments not paying off, or major projects not meeting their initial targets. Now, they are much more selective – waiting to see if the technology will work. But this means that they are not training staff, and so a skills gap is emerging, where the leaders are moving further ahead,” said Costa, stressing that AI made the most sense in applications like customer experience and support.
Ocient
Based in Chicago, Ocient is a data specialist that has begun targeting the telco market. “A massively parallel processing (MPP) database is the foundation of what we do,” explained Ricardo Velhuco, the firm’s Field CTO, who was previously Nokia’s Head of Cognitive Engineering.
Velhuco said that the firm had spent around five years purely on R&D, since being founded back in 2016 – recently rewriting the entire codebase to get better performance. Running on NVMe storage, it is very good at petabyte-scale data acquisition, said Velhuco.
“Some of the biggest contracts we have in the telco space are for ‘lawful’ data collection, to be processed and disclosed to the relevant authorities when they request it,” said Velhuco.
“We also use an MCP server to help with BSS automation and expansion, and the IoT space is surprisingly big too. The biggest contract there is for e-scooter telemetry and battery charging optimization,” said Velhuco.
There are no public telco customers, but Velhuco did note that the biggest MNO in the US is one, and the second-largest in the UK is another. Joining those dots is not difficult, of course.
The business model is entirely subscription software, he explained, typically deployed in the customer’s own environment, though some customers prefer a hosted version run from Ocient’s cloud.
“Data acquisition is usually focused on pulling together all the network telemetry and customer data records, and then running analytics on that,” said Velhuco. “The AI layer here is a way to link to local or cloud data, and tie into public cloud tools.”
“Everyone is collecting data and storing it, but no one can do all the data,” warned Velhuco. “Scale is the challenge there, and why we specialize in terabits-per-second and petabytes of data volume.”
“In terms of mood, operators are in different stages. Some are more advanced and have data, but they want to manage the cost of processing it in the public cloud. The least advanced are still trying to understand how to acquire and process this data, and usually crash out when they see the cost of these cloud-based systems,” said Velhuco.
“Everybody is talking about AI, and there are some that have been trying, but the problem is building and scaling a system costs money – it’s why so many are migrating their data back to on-prem storage from the cloud, choosing to run just the AI piece in the cloud. Some don’t understand the problem, and so rely on things like the TM Forum to drive progress,” said Velhuco.
Ocient can store around 12 petabytes of data across 8-10 servers, occupying about one server cabinet of space. This provides around 50 Tbps of data acquisition capacity, and is deployed using commercial off-the-shelf (COTS) servers. Velhuco noted that installations can be both much larger and much smaller – with some running as VMs.
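As a sanity check on those figures, a quick back-of-envelope calculation (decimal units, assuming a sustained line rate that real deployments would not see) shows what a cabinet of that spec implies:

```python
# Back-of-envelope check on the quoted Ocient cabinet figures:
# 12 PB across 8-10 servers, ~50 Tbps acquisition capacity.
# Decimal units; assumes sustained line-rate ingest (an idealisation).

PETABYTE = 10**15   # bytes
TERABIT = 10**12    # bits

capacity_bits = 12 * PETABYTE * 8   # 12 PB expressed in bits
ingest_bps = 50 * TERABIT           # 50 Tbps acquisition rate

fill_seconds = capacity_bits / ingest_bps
per_server_pb = 12 / 10             # at the 10-server end of the range

print(f"Cabinet fills in ~{fill_seconds / 60:.0f} minutes at line rate")
print(f"~{per_server_pb:.1f} PB per server")
```

At face value, that is roughly 1.2 PB per server and a cabinet that could ingest its entire capacity in about 32 minutes of flat-out acquisition – which underlines why the system is built for burst-scale collection rather than steady-state fill.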