AN 1011: TinyML Applications in Altera FPGAs Using LiteRT for Microcontrollers
2.3.5. Saving and Loading Model
Once you have trained a model to a satisfactory accuracy, Altera recommends that you save the model for future use. Training a model is time-consuming, and the final accuracy can vary between training sessions. By saving the trained model, you can reload it whenever you need it instead of retraining.
import tensorflow as tf

# Save the trained model to a Keras archive file
model.save('lenet.keras')

# Load the saved model for later use
model = tf.keras.models.load_model('lenet.keras')
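After loading, you can optionally confirm that the restored model behaves the same as the one you saved. The following is a minimal sketch that assumes the test arrays x_test and y_test from the earlier data preparation steps are still in scope and that the model was compiled with an accuracy metric.

# Evaluate the reloaded model on the held-out test set
# (x_test and y_test are assumed from the earlier training steps)
loss, accuracy = model.evaluate(x_test, y_test, verbose=0)
print(f'Test accuracy of reloaded model: {accuracy:.4f}')

If the reported accuracy matches the value observed at the end of training, the model was saved and restored correctly.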