AN 1011: TinyML Applications in Altera FPGAs Using LiteRT for Microcontrollers
ID: 848984
Date: 9/29/2025
Public
1. Overview
2. Preparing LiteRT Inference Model
3. Generating Nios® V Processor System
4. Generating Arm Processor System
5. Programming and Running
6. Nios® V Processor with TinyML Design Example
7. Appendix
8. Document Revision History for the AN 1011: TinyML Applications in Altera FPGAs Using LiteRT for Microcontrollers
2. Preparing LiteRT Inference Model
The LiteRT development workflow involves identifying a Machine Learning (ML) problem, choosing a model that solves that problem, and implementing the model on embedded devices. LiteRT is designed to run machine learning models on embedded devices with only a few kilobytes of memory. It doesn't require operating system support, any standard C or C++ libraries, or dynamic memory allocation.
The following example illustrates how to prepare a LiteRT model for digit classification. It outlines the steps needed to prepare the model in a TensorFlow Python environment before converting it into a LiteRT model.
Import the following Python libraries at the start of the Python script:
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
import random
Note: Altera uses the following software versions for the example in this application note:
- Python 3.11.12
- tensorflow 2.18.0
- numpy 2.0.2
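The preparation steps above can be sketched end to end as follows. This is a minimal illustration, assuming the standard Keras MNIST digit dataset and a deliberately small network; the layer sizes, file name, and training settings are illustrative choices, not the exact model used in this application note.

```python
import numpy as np
import tensorflow as tf

# Load the MNIST digit dataset and normalize pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype(np.float32) / 255.0
x_test = x_test.astype(np.float32) / 255.0

# A small model suited to memory-constrained embedded targets
# (layer sizes are illustrative).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Brief training for illustration; real use would train longer.
model.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test))

# Convert the trained Keras model into the LiteRT flatbuffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Save the model; this file is later embedded in the target application.
with open("digit_model.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting `.tflite` flatbuffer is what the later sections deploy to the processor system; for microcontroller-class targets, post-training quantization is typically applied at the conversion step to shrink the model further.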