Nvidia and Lenovo join forces to create "AI Cloud Gigafactories"
The two companies are betting on rising demand for local AI deployment, both in edge devices and in hyperscale environments for enterprise clients
Published on Jan. 09, 2026

By joining forces for this “AI Cloud Gigafactory” initiative, Lenovo and Nvidia are making an ambitious move: an attempt to industrialise the physical deployment of artificial intelligence.
The chief executives of both companies took the stage at CES in Las Vegas on January 6, 2026, to announce the collaboration.
“As AI transforms every industry, companies in every country will build or rent AI factories to produce intelligence,” said Jensen Huang, founder and CEO of Nvidia, at the event, according to a Lenovo press release.
“Together, Nvidia and Lenovo are delivering full-stack computing platforms that power agentic AI systems—from the cloud and on‑premises data centers to the edge and robotic systems.”
The aim of the partnership is to provide scalable AI factory designs that enable advanced AI environments to be deployed as quickly as possible.
“In the AI era, value is no longer measured by compute alone, but also by how fast it delivers results,” said Yuanqing Yang, chairman and CEO of Lenovo.
By matching Lenovo’s computing hardware with Nvidia’s GPUs and software, both companies promise to build what they call an “inferencing‑optimised infrastructure”. They are betting that enterprises need AI solutions that can turn massive amounts of data into insights the moment that data is created.
It remains to be seen how many cloud service providers and large enterprises will opt for such “off‑the‑rack” AI cloud products to scale up their AI infrastructure, but given the market positions of both companies, the move is ambitious yet realistic.
Lenovo also used its CES presentation to stress its commitment to a “hybrid AI” philosophy, meaning the fusion of personal, enterprise, and public intelligence.
A core component of this concept is a new AI platform called Qira that runs across PCs, tablets, phones, wearables, and other devices from Lenovo and Motorola.
Qira, positioned as a “personal super‑intelligent agent,” is meant to enable devices that can deliver intelligence at the edge, only connecting to the cloud and larger AI models when absolutely necessary.
The devices need no separate apps: Qira maintains context and recognizes user preferences as users move from Lenovo laptops to smart glasses or Motorola phones.
This architecture makes it possible to keep sensitive or contextual data on personal devices or in enterprise systems, where it is processed by smaller AI models. These systems are designed to call on larger public models only when needed, Lenovo said.
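Lenovo has not published how this selective escalation actually works. As a rough illustration only, the hybrid pattern described above can be sketched as a confidence-based fallback: a small on-device model answers when it is confident, and the query is escalated to a larger cloud model otherwise. Every name, function, and threshold below is hypothetical.

```python
# Illustrative sketch of a hybrid on-device/cloud routing pattern.
# None of this reflects Qira's real implementation; all identifiers are invented.

CONFIDENCE_THRESHOLD = 0.7  # assumed cut-off for escalating to a larger cloud model


def local_model(query: str) -> tuple[str, float]:
    """Stand-in for a small on-device model: returns an answer and a confidence."""
    known = {"battery level": ("85%", 0.95)}
    return known.get(query, ("", 0.0))


def cloud_model(query: str) -> str:
    """Stand-in for a larger public model, called only on escalation."""
    return f"[cloud answer for: {query}]"


def route_query(query: str) -> tuple[str, str]:
    """Answer on-device when confident; escalate to the cloud otherwise."""
    answer, confidence = local_model(query)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer, "on-device"  # data stays on the device
    return cloud_model(query), "cloud"


if __name__ == "__main__":
    print(route_query("battery level"))              # handled locally
    print(route_query("summarise this document"))    # escalated to the cloud
```

The design choice this sketch highlights is that the routing decision, not the model size, is what keeps sensitive data local: the cloud path is only taken when the on-device model cannot answer confidently.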
This is reminiscent of Apple's "on-device plus private cloud compute" approach. Lenovo, like Apple, argues that its hybrid AI design helps keep data on‑device and therefore private whenever possible.
Lenovo goes one step further than Apple by stressing its three-layer narrative (personal, enterprise, and public) and by targeting both PCs and data-centre infrastructure.
The “AI Cloud Gigafactories” are designed to mix on‑premises and cloud resources for fast inferencing at scale. The idea is to unite data centers, the cloud, and edge devices in a single AI orchestration layer.
Lenovo will have to be very transparent in future about where it keeps data and how it processes it, analysts noted.
Lenovo is clearly making a play to leverage its vast product portfolio and growing suite of AI solutions for both consumers and enterprise clients. One example presented at CES was the “Lenovo ThinkSystem SR675i,” which, according to the company, is a “powerhouse‑performance AI inferencing server built to run full LLMs anywhere, with massive scalability, for the largest workloads and accelerated simulation in manufacturing, critical healthcare, and financial services environments.”
