AMD explains its AI PC strategy
Over the past few years, the concept of “AI PCs” has gone from sounding like a desperate attempt to revive the computer industry to something that could actually change the way we live with our PCs. To recap, an AI PC is any system with a CPU that’s equipped with a neural processing unit (NPU), which is specially designed for AI workloads. NPUs have been around for years in mobile hardware, but AMD was the first company to bring them to x86 PCs with the Ryzen Pro 7040 chips.
Now with its Ryzen AI 300 chips, AMD is making its biggest push yet for AI PCs — something that could pay off in the future as we see more AI-driven features like Microsoft’s Recall. (Which, it’s worth noting, has been dogged by privacy concerns and subsequently delayed.) To get a better sense of how AMD is approaching the AI PC era, I chatted with Rakesh Anigundi, the Ryzen AI product lead, and Jason Banta, CVP and GM of Client OEM. You can listen to the full interview on the Engadget Podcast.
My most pressing question: How does AMD plan to get developers on board with building AI-powered features? NPUs aren’t exactly a selling point if nobody is making apps that use them, after all. Anigundi said he was well aware that developers broadly “just want things to work,” so the company built a strategy around three pillars: a robust software stack, performant hardware and support for open-source solutions.
“We are of the philosophy that we don’t want to invent standards, but follow the standards,” Anigundi said. “That’s why we are really double-clicking on ONNX, which is a cross-platform framework, to extract the maximum performance out of our system. This is very closely aligned with how we are working with Microsoft, enabling their next generation of experiences, and also OEMs. And on the other side, where there’s a lot of innovation happening with the smaller ISVs [independent software vendors], this strategy works out very well as well.”
He pointed to AMD’s recently launched Amuse 2.0 beta as one way the company is showing off the AI capabilities of its hardware. It’s a simple program for generating AI images, and it runs entirely on your NPU-equipped device, with no need to reach out to OpenAI’s DALL-E or Google’s Gemini in the cloud.
AMD’s Banta reiterated the need for a great tool set and software stack, but he pointed out that the company also works closely with partners like Microsoft on prototype hardware to ensure the quality of the customer experience. “[Consumers] can have all the hardware, they can have all the tools, they can have all the foundational models, but making that end customer experience great requires a lot of direct one-to-one time between us and those ISV partners.”
In this case, Banta is also referring to AMD’s relationship with Microsoft when it comes to building Copilot+ experiences for its systems. While we’ve seen a handful of AI features on the first batch of Qualcomm Snapdragon-powered Copilot+ machines, like the new Surface Pro and Surface Laptop, they’re not available yet on Copilot+ systems running x86 chips from AMD and Intel.
“We’re making that experience perfect,” Banta said. At this point, you can consider Ryzen AI 300 machines to be “Copilot+ ready,” but not yet fully Copilot+ capable. (As I mentioned in my Surface Pro review, Microsoft’s current AI features are fairly basic, and that likely won’t change until Recall is officially released.)
As for those rumors around AMD developing an Arm-based CPU, the company’s executives, naturally, didn’t reveal much. “Arm is a close partner of AMD’s,” Banta said. “We work together on a number of solutions across our roadmaps… As far as [the] overall CPU roadmap, I can’t really talk about what’s coming around the corner.” But given that the same rumor points to NVIDIA also developing its own Arm chip, and considering the astounding performance we’ve seen from Apple and Qualcomm’s latest mobile chips, it wouldn’t be too surprising to see AMD go down the same Arm-paved road.