Deploying AI Models at the Edge

IBM creates an AI chip to run and train deep learning models

IBM has announced the Artificial Intelligence Unit (AIU), a chip that scales up the AI accelerator built into its Telum processor, to streamline the development of enterprise-quality, industrial-scale deep learning models. The Artificial Intelligence Unit is an application-specific integrated circuit (ASIC) that can run any deep learning task, such as processing spoken language or recognizing words and images on a screen.

IBM devoted five years to designing a system-on-chip that could be customized for developing modern AI models. IBM’s Artificial Intelligence Unit (AIU) is processing hardware designed to handle the matrix and vector multiplications at the heart of AI calculations. The plug-and-play Artificial Intelligence Unit runs and trains deep learning models faster and more efficiently than a CPU.

Why did IBM announce the Artificial Intelligence Unit?

IBM has stated that current CPUs and graphics processing units (GPUs) cannot keep up with the development of general-purpose deep learning models. Most AI applications require computing optimized for the matrix and vector multiplication operations that dominate deep learning workloads. Even though AI models have become more accurate and efficient over time, the hardware used to train them still falls short in comparison.
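To see why matrix and vector multiplication dominates these workloads: a single dense neural-network layer's forward pass is essentially one matrix-vector multiplication plus a bias add. The pure-Python sketch below is illustrative only; the sizes and values are made up, and real accelerators perform this in specialized hardware.

```python
def matvec(matrix, vector):
    """Multiply a matrix (a list of rows) by a vector."""
    return [sum(w * x for w, x in zip(row, vector)) for row in matrix]

# Toy dense layer: 2 outputs, 3 inputs (illustrative values).
weights = [[0.2, -0.5, 1.0],
           [0.7,  0.1, -0.3]]
bias = [0.1, -0.2]
x = [1.0, 2.0, 3.0]          # one input vector

# Forward pass: matrix-vector multiply, then add the bias.
y = [h + b for h, b in zip(matvec(weights, x), bias)]
print(y)
```

Every layer of a deep network repeats this pattern at far larger sizes, which is why hardware tuned for exactly this operation outpaces a general-purpose CPU.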

IBM notes that while some AI models run on servers in the cloud and others on edge devices, substantial investment has recently gone into new AI hardware platforms.

In 2019, IBM established its AI Hardware Center to improve AI hardware efficiency 2.5-fold each year. By 2029, the company aims to train and run AI models one thousand times faster than it could when the center opened.

IBM had the following to say in a blog post about the topic:

“Deploying AI to classify cats and dogs in photos is a fun academic exercise. But it won’t solve the pressing problems we face today. For AI to tackle the complexities of the real world — things like predicting the next Hurricane Ian, or whether we’re heading into a recession — we need enterprise-quality, industrial-scale hardware.”

Understanding the new features

IBM states that the CPU’s flexibility and high precision make it ideal for general-purpose software applications. That said, CPUs can struggle to train and run deep learning applications, which require many AI operations to be performed concurrently.

AI-customized hardware does not require the high precision of a CPU: it is not calculating trajectories for landing a spacecraft on the moon or estimating the number of hairs on a cat.

As previously mentioned, IBM researchers found two paths to increasing AI hardware efficiency: low-precision AI models and streamlined AI workflows. IBM’s approximate computing can reduce 32-bit floating-point arithmetic to bit formats that hold a quarter as much information, without, IBM says, affecting the AI model’s accuracy. The second path streamlines workflows on the chip itself: the IBM AIU sends data directly from one compute engine to the next.
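The low-precision idea can be sketched in a few lines. This is an illustrative toy, not IBM's actual approximate-computing scheme: it squeezes floating-point weights into a signed 8-bit integer range (a quarter of the bits of 32-bit floats) and maps them back, showing how little is lost in the round trip.

```python
def quantize_int8(values):
    """Map floats into the signed 8-bit range [-127, 127] with one shared scale."""
    scale = max(abs(v) for v in values) / 127.0
    return [round(v / scale) for v in values], scale

def dequantize(quantized, scale):
    """Recover approximate float values from the 8-bit integers."""
    return [q * scale for q in quantized]

# Toy model weights (illustrative values).
weights = [0.5, -1.2, 0.03, 2.0]
quantized, scale = quantize_int8(weights)
restored = dequantize(quantized, scale)

max_error = max(abs(a - b) for a, b in zip(weights, restored))
print(max_error)  # bounded by half a quantization step (scale / 2)
```

The worst-case error per weight is half a quantization step, which is why inference accuracy can survive such aggressive precision cuts.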

The IBM AIU features 32 processor cores and resembles the AI core embedded in the existing Telum chip. The Artificial Intelligence Unit has 23 billion transistors and is built on a 5 nm process, compared with the Telum chip’s 22 billion transistors on 7 nm technology. IBM has yet to publish detailed specifications for the Artificial Intelligence Unit, but says it hopes to announce its release soon.
