AN 1011: TinyML Applications in Altera FPGAs Using LiteRT for Microcontrollers
ID: 848984
Date: 9/29/2025
Public
1. Overview
2. Preparing LiteRT Inference Model
3. Generating Nios® V Processor System
4. Generating Arm Processor System
5. Programming and Running
6. Nios® V Processor with TinyML Design Example
7. Appendix
8. Document Revision History for the AN 1011: TinyML Applications in Altera FPGAs Using LiteRT for Microcontrollers
2.4.4. Loading a LiteRT Interpreter
Set up the LiteRT interpreter to test the newly converted LiteRT model.
# Load the LiteRT model in the TFLite Interpreter
interpreter = tf.lite.Interpreter(model_path="lenet.tflite")

# Get input and output tensor details.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Resize the model's tensors to take all 10,000 test inputs at once instead of just 1
interpreter.resize_tensor_input(input_details[0]["index"], (x_test.shape[0], rows, cols, 1))
interpreter.resize_tensor_input(output_details[0]["index"], (y_test.shape[0], 10))
interpreter.allocate_tensors()

# Refresh input and output tensor details after resizing.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
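The resize-then-invoke pattern above can be exercised end to end. The following is a minimal, self-contained sketch: instead of the lenet.tflite file produced in the earlier steps, it builds and converts a trivial stand-in Keras model in memory (the model architecture, the batch size of 100, and the random input data are illustrative assumptions, not part of the application note), then runs one batched inference through the same `tf.lite.Interpreter` API.

```python
import numpy as np
import tensorflow as tf

# Toy stand-in model; the app note uses the converted LeNet model (lenet.tflite).
rows, cols = 28, 28
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(rows, cols, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Load the converted model and fetch tensor details.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Resize the input tensor for batched inference (100 is a stand-in for
# x_test.shape[0]), then reallocate tensors.
batch = 100
interpreter.resize_tensor_input(input_details[0]["index"], (batch, rows, cols, 1))
interpreter.allocate_tensors()

# Refresh tensor details after resizing, then run one batched inference.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

x = np.random.rand(batch, rows, cols, 1).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], x)
interpreter.invoke()
predictions = interpreter.get_tensor(output_details[0]["index"])
print(predictions.shape)  # one 10-class score vector per input
```

Resizing the input tensor and calling `allocate_tensors()` again is what lets a single `invoke()` score the whole test set; the refreshed tensor details are needed because the cached shapes become stale after the resize.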