The Rise of Local AI: A Game-Changer for PC Gaming and Hardware

01/06/2026

Amid concerns over memory scarcity and the pervasive discussion surrounding AI, a promising trend is emerging in the PC hardware sector: a growing emphasis on local artificial intelligence. This shift, championed by industry leaders such as AMD and Nvidia, points to a future where AI processing happens primarily on user devices rather than relying heavily on cloud-based services. For the PC gaming community, that could mean more robust on-device AI features and renewed demand for consumer memory, a more optimistic outlook than the current shortage suggests.

AMD's commitment to local AI was on full display at its CES 2026 keynote, where CEO Dr. Lisa Su dedicated considerable attention to the concept. Although much of the presentation centered on the company's new mobile processors, specifically the Ryzen AI 400-series (an evolution of the Strix Point 300-series with clock speed improvements), the underlying message was clear: hardware manufacturers want their devices in wide use, and local AI capabilities are a significant selling point in the current AI-driven landscape. This aligns with the long-running vision of an 'AI PC', a term Microsoft pushed heavily around the launch of its Copilot+ PCs in mid-2024. While early AI PCs showed limited capabilities, the industry has since witnessed explosive growth in cloud AI, drawing in hundreds of billions of dollars. Dr. Su projects global computing power to grow roughly 100-fold over the next few years, surpassing 10 yottaflops within five years, a staggering 10,000 times the compute capacity available in 2022.
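
To put those figures in perspective, here is a quick back-of-envelope sketch, assuming the standard SI definition of a yottaflop (10^24 floating-point operations per second). The baselines computed below are merely implied by the quoted multipliers, not figures AMD stated:

```python
# Back-of-envelope check of the keynote's compute-scaling figures.
# Assumes 1 yottaflop = 10^24 FLOPS; the baselines are implied, not stated.
YOTTA = 1e24

projected = 10 * YOTTA             # "over 10 yottaflops within five years"
implied_2022 = projected / 10_000  # the 10,000x claim relative to 2022
implied_today = projected / 100    # the 100-fold expansion still to come

print(f"Implied 2022 baseline: {implied_2022:.0e} FLOPS (~1 zettaflop)")
print(f"Implied current level: {implied_today:.0e} FLOPS (~100 zettaflops)")
```

In other words, the two multipliers are mutually consistent: a roughly 100-fold jump would already have happened since 2022, and another 100-fold jump over the next five years would complete the 10,000x claim.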

Further emphasizing the importance of on-device AI, Dr. Su also featured Ramin Hasani, cofounder and CEO of Liquid AI, during her keynote. Liquid AI specializes in AI models optimized for efficient processing on a wide range of devices. Their LFM2.5 model, with a mere 1.2 billion parameters, reportedly outperforms much larger models such as DeepSeek and Gemini 2.5 Pro when run locally. Hasani highlighted Liquid's objective to significantly reduce AI's computational overhead without compromising quality, enabling powerful, frontier-level AI capabilities directly on user hardware.
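
For readers curious to try on-device inference themselves, a minimal sketch with the Hugging Face transformers library (with accelerate installed for automatic device placement) looks something like the following. The model identifier points at Liquid AI's earlier, openly released LFM2 1.2B checkpoint; whatever identifier LFM2.5 ships under is an assumption here, so check Liquid AI's Hugging Face page for current names:

```python
# Minimal local-inference sketch using Hugging Face transformers.
# "LiquidAI/LFM2-1.2B" is Liquid AI's earlier open 1.2B checkpoint; the
# LFM2.5 identifier mentioned in the keynote may differ, so verify first.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="LiquidAI/LFM2-1.2B",
    device_map="auto",  # run on a local GPU if present, otherwise CPU
)

output = generator(
    "Why does on-device inference matter for PC gamers?",
    max_new_tokens=100,
)
print(output[0]["generated_text"])
```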

Nvidia, while less explicit than AMD about local AI, has also hinted at the same trend. A slide from its GeForce On Community Update indicated a convergence in which 'PC models [are] closing the gap with cloud.' Although specific metrics were absent, the visual suggested that local AI models are increasingly approaching the performance of their cloud-based counterparts. The implications are significant: it points to a reduced need for cloud AI subscriptions and inference services, even though AI training will likely remain the domain of large-scale data centers. For Nvidia, this presents a dual advantage: its GPUs are central to cloud computing infrastructure, and its consumer graphics cards are ideally positioned to accelerate local AI tasks.

The increasing focus on local AI holds considerable promise for individual consumers. Running AI models directly on personal devices demands substantial memory, whether as system RAM (as on platforms like Strix Halo) or as abundant VRAM on graphics cards (such as the RTX 5090). Consequently, this shift could push memory suppliers to prioritize consumer-grade hardware, potentially alleviating current shortages. The prospect is exciting but warrants cautious optimism, as the actual implementation and market impact remain to be seen.
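
To make that memory requirement concrete, a common rule of thumb is that a model's weights occupy roughly its parameter count times the bytes per parameter, plus overhead for activations and the KV cache. The 20% overhead factor below is an illustrative assumption, not a measured figure:

```python
# Rule-of-thumb memory footprint for local inference:
# weights = parameters x bytes per parameter, plus ~20% overhead
# (an illustrative assumption) for activations and the KV cache.
def model_memory_gb(params_billion: float, bits_per_param: int,
                    overhead: float = 1.2) -> float:
    weight_bytes = params_billion * 1e9 * bits_per_param / 8
    return weight_bytes * overhead / 1e9

for params, bits in [(1.2, 16), (8, 4), (70, 4)]:
    print(f"{params:>4}B params @ {bits:>2}-bit: "
          f"~{model_memory_gb(params, bits):.1f} GB")
```

By that estimate, a 1.2-billion-parameter model in 16-bit precision fits comfortably in a few gigabytes, while a 4-bit 70-billion-parameter model needs roughly 40 GB, which is exactly why high-VRAM graphics cards and large unified-memory platforms matter for this trend.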