Efinix unveils a TinyML platform to speed edge AI workloads on FPGAs

Efinix, known for its FPGAs and RISC-V systems-on-chip, has released a TinyML platform. The company says the solution will speed up the adoption of edge computing by running compute-heavy AI workloads on its power-efficient FPGAs.

Efinix says it designed the platform to accelerate AI models on its Sapphire RISC-V system-on-chip. The Efinix TinyML platform is based on the open-source TensorFlow Lite for Microcontrollers C++ library, running on the Sapphire RISC-V SoC together with the Efinix TinyML accelerator.
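
For context, an inference call with the TensorFlow Lite for Microcontrollers C++ library typically follows the pattern sketched below. This is a generic, minimal sketch rather than Efinix code: the model symbol, arena size and operator list are placeholders, and depending on the library version the interpreter constructor may take additional arguments.

    #include <cstdint>
    #include <cstring>
    #include "tensorflow/lite/micro/micro_interpreter.h"
    #include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
    #include "tensorflow/lite/schema/schema_generated.h"

    // Quantized .tflite model compiled into the firmware image (placeholder name).
    extern const unsigned char g_model_data[];

    constexpr int kArenaSize = 64 * 1024;    // scratch memory; size is model-dependent
    static uint8_t tensor_arena[kArenaSize];

    int8_t run_inference(const int8_t* input, size_t input_len) {
      const tflite::Model* model = tflite::GetModel(g_model_data);

      // Register only the operators the model actually uses, to keep the binary small.
      static tflite::MicroMutableOpResolver<3> resolver;
      resolver.AddConv2D();
      resolver.AddMaxPool2D();
      resolver.AddFullyConnected();

      static tflite::MicroInterpreter interpreter(model, resolver,
                                                  tensor_arena, kArenaSize);
      if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

      // Copy quantized input into the input tensor, run, and read the result.
      std::memcpy(interpreter.input(0)->data.int8, input, input_len);
      if (interpreter.Invoke() != kTfLiteOk) return -1;
      return interpreter.output(0)->data.int8[0];
    }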

TensorFlow Lite models are quantized versions of standard TensorFlow models, and the accompanying library of kernel functions lets them run on microcontrollers at the edge. The Efinix TinyML platform combines TensorFlow Lite models with the Sapphire core's custom-instruction capability to accelerate them in FPGA hardware. The company says this achieves high performance while maintaining low power consumption and a small footprint.
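
On VexRiscv-based cores, custom instructions of this kind are typically exposed through one of the RISC-V "custom" opcode spaces, which C++ code can reach with an inline-assembly .insn directive. The sketch below is purely illustrative: the opcode and function fields are placeholders, not the encoding Efinix's TinyML accelerator actually uses.

    #include <cstdint>

    // Hypothetical custom instruction computing a dot product of four packed
    // int8 lanes in a single issue. Opcode 0x0B is the RISC-V custom-0 space;
    // the funct3/funct7 values are placeholders.
    static inline int32_t dot4_i8(int32_t packed_a, int32_t packed_b) {
      int32_t result;
      // .insn r <opcode>, <funct3>, <funct7>, rd, rs1, rs2
      asm volatile(".insn r 0x0B, 0x0, 0x00, %0, %1, %2"
                   : "=r"(result)
                   : "r"(packed_a), "r"(packed_b));
      return result;
    }

A convolution inner loop could then call dot4_i8 on four int8 operands at a time instead of issuing four separate multiply-accumulates, which is the kind of speedup a custom-instruction accelerator targets.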

“Our TinyML Platform harnesses the potential of our high performance, embedded RISC-V core combined with the efficiency of the Efinix FPGA architecture and delivers them intuitively to the designer, speeding time to market and lowering the barrier to AI adoption at the edge,” said Mark Oliver, Efinix's VP of marketing.

The following are some advantages of this TinyML platform, according to the company:

  • A flexible AI solution built on the configurable Sapphire RISC-V system-on-chip.
  • The Efinix TinyML accelerator.
  • An optional user-defined accelerator.
  • A hardware accelerator socket for various applications.

The platform supports any AI inference that the TensorFlow Lite Micro library supports. It also offers multiple acceleration options with different performance and design trade-offs to speed up overall AI inference deployment, the company says.

Beyond these acceleration strategies, Efinix offers a pre-defined hardware accelerator socket connected to direct memory access (DMA) controllers and an SoC slave interface for data transfer and CPU control. Designers can use the socket for pre- and post-processing of AI workloads, and can add an optional user-defined accelerator to speed up other compute-intensive operators.
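
As a rough illustration of how firmware might drive such a socket, the sketch below programs a memory-mapped, DMA-fed accelerator block and polls for completion. The base address, register layout and bit assignments are all hypothetical, not Efinix's actual register map.

    #include <cstdint>

    // Hypothetical register map for an accelerator sitting in the socket.
    constexpr uintptr_t ACCEL_BASE = 0xF9000000;  // placeholder base address

    struct AccelRegs {
      volatile uint32_t src_addr;   // DMA source buffer address
      volatile uint32_t dst_addr;   // DMA destination buffer address
      volatile uint32_t length;     // transfer length in bytes
      volatile uint32_t control;    // bit 0 = start (assumed)
      volatile uint32_t status;     // bit 0 = done (assumed)
    };

    static AccelRegs* const accel = reinterpret_cast<AccelRegs*>(ACCEL_BASE);

    // Offload a pre-processing pass (e.g. pixel-format conversion): program
    // the DMA addresses, kick off the block, and busy-wait until it finishes.
    void run_preprocess(uint32_t src, uint32_t dst, uint32_t bytes) {
      accel->src_addr = src;
      accel->dst_addr = dst;
      accel->length   = bytes;
      accel->control  = 1;                  // start the accelerator
      while ((accel->status & 1) == 0) {}   // poll for completion
    }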

Efinix's Sapphire system-on-chip is based on the VexRiscv core, which implements the RISC-V instruction set architecture. The Sapphire system-on-chip is a user-configurable, high-performance hardware platform with an optional memory controller.

Users can choose which peripherals their application requires by configuring the system-on-chip in the Efinity IP Manager. The VexRiscv processor has a six-stage pipeline: fetch, injector, decode, execute, memory and writeback.

“We are seeing an increasing trend to drive AI workloads to the far edge where they have immediate access to raw data in an environment where it is still contextually relevant,” Oliver added. 
