Cadence Design Systems has launched its next-generation AI IP and software tools in a bid to address the demand for on-device and edge AI processing.
Cadence's Neo Neural Processing Units (NPUs) offer a range of AI performance in a low-energy footprint, according to the company, delivering up to 80 TOPS in a single core.
The company says the Neo NPUs support both classic and new generative AI models and can offload AI/ML execution from host processors, including application processors, general-purpose microcontrollers and DSPs, via an AMBA AXI interconnect.
“While most of the recent attention on AI has been cloud-focused, there is an incredible range of new possibilities that both classic and generative AI can enable on the edge and within devices,” says Bob O’Donnell, president and chief analyst at TECHnalysis Research.
“From consumer to mobile and automotive to enterprise, we’re embarking on a new era of naturally intuitive intelligent devices. For these to come to fruition, both chip designers and device makers need a flexible, scalable combination of hardware and software solutions that allow them to bring the magic of AI to a wide range of power requirements and compute performance, all while leveraging familiar tools. New chip architectures that are optimized to accelerate ML models and software tools with seamless links to popular AI development frameworks are going to be incredibly important parts of this process.”
The company also introduced the NeuroWeave Software Development Kit (SDK), a “one-tool” AI software solution for no-code AI development across Cadence AI and Tensilica IP products.
With a configurable architecture, the Neo NPUs suit both ultra-power-sensitive devices and high-performance systems, enabling SoC architects to integrate an AI inferencing solution across a variety of products, the company claims. These include intelligent sensors, IoT and mobile devices, cameras, hearables/wearables, PCs, AR/VR headsets and advanced driver-assistance systems (ADAS).
According to David Glasco, vice president of research and development for Tensilica IP at Cadence: “For two decades and with more than 60 billion processors shipped, industry-leading SoC customers have relied on Cadence processor IP for their edge and on-device SoCs. Our Neo NPUs capitalize on this expertise, delivering a leap forward in AI processing and performance.”
“In today’s rapidly evolving landscape, it’s critical that our customers are able to design and deliver AI solutions based on their unique requirements and KPIs without concern about whether future neural networks are supported. Toward this end, we’ve made significant investments in our new AI hardware platform and software toolchain to enable AI at every performance, power and cost point and to drive the rapid deployment of AI-enabled systems.”
The Neo NPUs and the NeuroWeave SDK support Cadence’s system design strategy by enabling pervasive intelligence through SoC design.