AN 1011: TinyML Applications in Altera FPGAs Using LiteRT for Microcontrollers
ID: 848984
Date: 9/29/2025
2.4.5. Evaluating the LiteRT Model
Because the LiteRT model is generated without quantization, its accuracy is expected to be preserved: it matches the 0.9872 achieved by the original TensorFlow model.
If you instead apply post-training quantization, it is important to check how much accuracy is lost. If the loss is significant, consider quantization-aware training; a post-training quantization sketch follows the figure below.
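The evaluation code below assumes that a LiteRT interpreter has already been created and its input and output details queried. The following is a minimal setup sketch, assuming the converted model was saved in an earlier step as model.tflite (a hypothetical filename, not one specified in this application note):

import numpy as np
import tensorflow as tf

# Load the converted model; "model.tflite" is an assumed filename.
interpreter = tf.lite.Interpreter(model_path="model.tflite")

# Resize the input tensor so the whole test set can be fed as one batch,
# then allocate tensors and re-query the I/O details used below.
input_details = interpreter.get_input_details()
interpreter.resize_tensor_input(input_details[0]["index"], x_test.shape)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()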
# Set the test input and run
interpreter.set_tensor(input_details[0]["index"], x_test)
interpreter.invoke()

# Get the result and check its accuracy
output_data = interpreter.get_tensor(output_details[0]["index"])
a = [np.argmax(y) for y in output_data]  # predicted class labels
b = [np.argmax(y) for y in y_test]       # true class labels
accuracy = (np.array(a) == np.array(b)).mean()
print("TFLite Accuracy:", accuracy)
Figure 7. Classification Accuracy of LiteRT Model on Test Dataset
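For reference, the following sketch shows one way to generate a full-integer quantized LiteRT model with post-training quantization. The Keras model object (model), the calibration set drawn from x_test, and the output filename model_quant.tflite are illustrative assumptions, not names taken from this application note:

import numpy as np
import tensorflow as tf

# Yield a few calibration samples so the converter can estimate
# activation ranges for integer quantization.
def representative_dataset():
    for sample in x_test[:100]:
        yield [np.expand_dims(sample, axis=0).astype(np.float32)]

# `model` is assumed to be the trained Keras model from earlier steps.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_quant_model = converter.convert()
with open("model_quant.tflite", "wb") as f:
    f.write(tflite_quant_model)

Re-running the evaluation above on the quantized model quantifies the accuracy loss. If the loss is unacceptable, quantization-aware training (for example, with the tensorflow_model_optimization package) can typically recover much of it.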