AN 1011: TinyML Applications in Altera FPGAs Using LiteRT for Microcontrollers

ID 848984
Date 9/29/2025
Public

2.4.5. Evaluating the LiteRT Model

Because the LiteRT model is converted without quantization, its accuracy is expected to be preserved, matching the 0.9872 achieved by the original TensorFlow model.

When post-training quantization is applied, it is important to measure how much accuracy is lost. If the loss is significant, consider using quantization-aware training instead.
# Set the test input and run inference
interpreter.set_tensor(input_details[0]["index"], x_test)
interpreter.invoke()

# Get the result and compute its accuracy
output_data = interpreter.get_tensor(output_details[0]["index"])

# Compare predicted class indices against the ground-truth labels
predicted = np.argmax(output_data, axis=1)
actual = np.argmax(y_test, axis=1)

accuracy = (predicted == actual).mean()
print("TFLite Accuracy:", accuracy)
Figure 7. Classification Accuracy of LiteRT Model on Test Dataset
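If a fully integer-quantized model is evaluated instead, the float test inputs must first be mapped into the integer domain using the input tensor's scale and zero point (available from input_details[0]["quantization"] on the loaded interpreter), and the integer outputs mapped back before computing accuracy. The following NumPy sketch illustrates that transform; the scale and zero-point values shown are illustrative, not taken from this model:

```python
import numpy as np

def quantize_input(x, scale, zero_point, dtype=np.int8):
    """Map float inputs to the integer domain expected by a fully
    quantized LiteRT model: q = round(x / scale) + zero_point."""
    q = np.round(x / scale) + zero_point
    info = np.iinfo(dtype)
    return np.clip(q, info.min, info.max).astype(dtype)

def dequantize_output(q, scale, zero_point):
    """Map integer outputs back to floats: x = (q - zero_point) * scale."""
    return (q.astype(np.float32) - zero_point) * scale

# Illustrative parameters; in practice they come from
# input_details[0]["quantization"] after allocate_tensors().
x = np.array([0.0, 0.5, 1.0], dtype=np.float32)
q = quantize_input(x, scale=1 / 255.0, zero_point=-128)
```

With the inputs quantized this way, the rest of the evaluation loop is unchanged: invoke the interpreter, dequantize the output tensor, and compare argmax predictions against the test labels.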