AI Inference with Intel® FPGA AI Suite
Kevin Drake
Intel Corporation
Stevenson Hall 1300
12:00 PM - 12:50 PM
Intel® FPGAs enable real-time, low-latency, and low-power deep learning inference, combined with the following advantages: I/O flexibility, reconfigurability, ease of integration into custom platforms, and long device lifetime.
The Intel® FPGA AI Suite was developed with the vision of making artificial intelligence (AI) inference on Intel® FPGAs easy to use. The suite enables FPGA designers, machine learning engineers, and software developers to create optimized FPGA AI platforms efficiently.
Utilities in the Intel FPGA AI Suite speed up FPGA development for AI inference using familiar and popular industry frameworks such as TensorFlow* or PyTorch* together with the OpenVINO™ toolkit, while also leveraging robust and proven FPGA development flows with the Intel® Quartus® Prime software.
The Intel® FPGA AI Suite tool flow works with the OpenVINO™ toolkit, an open-source project for optimizing inference across a variety of hardware architectures. The toolkit takes deep learning models from all the major frameworks (such as TensorFlow, PyTorch, and Keras*) and optimizes them for inference on targets including CPUs, CPU+GPU combinations, and FPGAs.