4.1. Using the FPGA AI Suite Docker* Image
The FPGA AI Suite Docker* image is intended to run with a Docker* client on a Microsoft* Windows* or Linux* system. Docker* clients on macOS* operating systems are not supported. This containerized version of FPGA AI Suite provides quick and easy access to the various tools in FPGA AI Suite. However, the container has the following limitations:
- You cannot program an FPGA bitstream from within the container.
- The container cannot act as the host machine for the FPGA device.
You can complete all tasks that are prerequisites to running inference, such as building FPGA bitstreams, within the FPGA AI Suite Docker* image. However, if you want to use an FPGA AI Suite design example within the container, you might need to install additional dependencies for that design example.
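As a rough sketch, an interactive session in the container might be started as follows. The image name (`fpga-ai-suite:latest`), container name, and mount path are assumptions for illustration; substitute the actual image name and tag that you loaded or pulled.

```shell
# Hypothetical example: start an interactive FPGA AI Suite container session.
# The image name and paths below are placeholders, not the official names.
docker run -it \
    --name fpga_ai_suite \
    -v "$HOME/workspace":/workspace \
    fpga-ai-suite:latest \
    /bin/bash
```

The `-v` bind mount shares a host directory with the container so that models, compiled bitstream artifacts, and results persist after the container exits. Because the container cannot program the FPGA or host the device, copy any generated bitstream out through the mounted directory and program it from the host system.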
The quick-start tutorial demonstrates how easily a deep learning model can be deployed to the edge. The tutorial presents the Memory-to-Memory (M2M) Variant Design execution model, which provides a dla_benchmark interface to the inference engine, similar to the PCIe design examples. It also introduces the Streaming-to-Memory (S2M) Variant Design execution model. The S2M model uses the FPGA SoC ARM CPU as the streaming data source, streaming data to a layout transform on the FPGA device. The S2M design example illustrates a recommended system architecture for any streaming source.