AN 1011: TinyML Applications in Altera FPGAs Using LiteRT for Microcontrollers
ID: 848984
Date: 9/29/2025
Public
1. Overview
2. Preparing LiteRT Inference Model
3. Generating Nios® V Processor System
4. Generating Arm Processor System
5. Programming and Running
6. Nios® V Processor with TinyML Design Example
7. Appendix
8. Document Revision History for the AN 1011: TinyML Applications in Altera FPGAs Using LiteRT for Microcontrollers
2.4.1. Converting into LiteRT Model
A good starting point is to convert the TensorFlow model to a LiteRT model without quantization, which produces a 32-bit floating-point model.
# Convert the model from TensorFlow to a LiteRT model
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
Alternatively, you can use full integer-only quantization to reduce the model size and increase processing speed. However, this may impact the model's accuracy.
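As a sketch of the full integer-only path, the converter can be given a representative dataset to calibrate activation ranges and then restricted to int8 operations. The tiny stand-in model and the `representative_data_gen` generator below are illustrative assumptions, not part of the application note; substitute your own trained model and calibration samples.

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in Keras model so the sketch is self-contained
# (replace with your trained model).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])

def representative_data_gen():
    # Yield a few samples matching the model's input shape; the
    # converter uses them to calibrate quantization ranges.
    for _ in range(10):
        yield [np.random.rand(1, 4).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Restrict to built-in int8 ops so the whole graph is integer-only.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_quant_model = converter.convert()
```

The resulting flatbuffer (`tflite_quant_model`) uses int8 weights and activations, which is what makes it suitable for memory-constrained targets such as a Nios® V processor system.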