Generating a Workload Validation Script
A variety of techniques are available to confirm platform performance, including, but not limited to, any combination of the following:
- Running the real-time and non-real-time workloads on the platform and inspecting the workload logs.
- Inspecting performance counters, such as CPU utilization, memory utilization, and I/O bandwidth, to confirm that the platform isn't oversubscribed, undersubscribed, or skewed in load distribution.
- For user-interface intensive workloads, such as those requiring manual input and plotting graphs, interactively confirming a lack of delays or glitches in the input or output operations.
Workloads can have cycle times that vary from microseconds to milliseconds, and are typically executed millions or even billions of times in a preproduction environment. These executions confirm that the platform performance meets expectations and that any exceptions are within acceptable tolerances. Since real-time workloads are expected to execute continuously without an end point, developers typically use scripts to automate performance validation. The scripts run the workloads and mine the collected logs and/or performance counters. Without automation, inspecting the output of millions of workload executions to confirm platform performance would be an impractical manual task.
The workload validation script is expected to execute all real-time and non-real-time workloads on the platform in parallel, for as many iterations as needed (possibly millions or billions), to confirm that the platform performance meets expectations and that any exceptions are within acceptable tolerances. Automation to run test workloads may already exist for your system, in which case you may only need to create a script that reads those results. The script would inspect any needed logs, performance counters, and other metrics that are already collected, and report to the data streams optimizer whether the tuning was successful.
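As an illustration, the log-mining part of such a script might look like the following sketch. The log format (one cycle-time measurement in microseconds per line), the deadline, and the tolerance are all assumptions made for this example, not part of the data streams optimizer interface:

```python
# Minimal sketch of a log-mining workload validation check.
# Assumes each workload appends one cycle-time measurement (in
# microseconds) per line to its log file; the deadline and the
# allowed exceedance ratio below are illustrative values.

CYCLE_TIME_BUDGET_US = 500.0  # assumed real-time deadline
ALLOWED_EXCEEDANCE = 0.0001   # at most 0.01% of cycles may miss it

def validate_log(path: str) -> bool:
    """Return True if the log's cycle times stay within tolerance."""
    cycle_times = []
    with open(path) as log:
        for line in log:
            try:
                cycle_times.append(float(line.strip()))
            except ValueError:
                continue  # skip non-numeric lines (timestamps, banners)
    if not cycle_times:
        return False  # no measurements means nothing was validated
    misses = sum(1 for t in cycle_times if t > CYCLE_TIME_BUDGET_US)
    return misses / len(cycle_times) <= ALLOWED_EXCEEDANCE
```

A wrapper would call `validate_log` for every workload log and exit with code 0 on success or nonzero on failure, which is one common way to signal the result back to the caller.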
You might instead confirm performance manually, based on interactive experiences such as graphics performance, and decide from your observations whether the tuned platform meets your expectations. For interactive evaluations, your script could, for instance:
- Inspect a text file that you create upon completing the interactive inspection.
- Use another method to provide manual input of the pass or fail result based on your criteria.
- Try a different tuning setup, even if the results meet your expectations, and repeat the validation process.
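The first of these options could be sketched as follows, assuming you write the word "pass" or "fail" into a result file once the interactive inspection is done; the file name is a hypothetical choice for this example:

```python
# Sketch: read a pass/fail verdict that a human recorded in a text
# file after interactive inspection. "result.txt" is a hypothetical
# file name, not a data streams optimizer convention.

def read_interactive_result(path: str = "result.txt") -> bool:
    """Return True only if the file contains the word 'pass'."""
    try:
        with open(path) as f:
            verdict = f.read().strip().lower()
    except FileNotFoundError:
        return False  # no verdict recorded yet: treat as failure
    return verdict == "pass"
```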
If the tuning was unsuccessful, the tool iteratively attempts more aggressive tunings of the platform until a successful tuning is identified and confirmed, if one is available.
Use these workload validation script design tips when reviewing your current automation, or when testing the platform performance with a new script:
- The script should execute all real-time and non-real-time workloads to represent a production scenario. The profile of these workloads should represent expected modulations in production, such as increased/decreased consumption of resources and interference patterns.
- The script should collect all measures needed to determine the success of tuning. These measures may include performance counters and logs from applications.
- Where interactive input is needed to determine the success of tuning, such as GUI application performance, the script may introduce user-interface prompts and pauses to collect user inputs.
- The script should parse through all collected measurements, judge the tuning success, and communicate success or failure to the data streams optimizer.
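Taken together, the final judgment step described above might be sketched as follows. The workload names, latency budgets, and utilization ceiling are illustrative assumptions, and the collected measures are presumed to have already been reduced to simple numbers:

```python
# Sketch of the final judgment step: combine parsed measures into a
# single pass/fail decision. All names and thresholds below are
# illustrative assumptions for this example.

MAX_LATENCY_US = {              # assumed per-workload latency budgets
    "rt_control_loop": 250.0,
    "logging_task": 5000.0,
}
MAX_CPU_UTILIZATION = 0.85      # guard against oversubscription

def judge_tuning(latencies_us: dict, cpu_utilization: float) -> bool:
    """Return True only if every workload met its latency budget and
    the platform was not oversubscribed."""
    for workload, budget in MAX_LATENCY_US.items():
        observed = latencies_us.get(workload)
        if observed is None or observed > budget:
            return False  # missing or over-budget measurement
    return cpu_utilization <= MAX_CPU_UTILIZATION
```

The script would translate this boolean into whatever signal the data streams optimizer's interface prescribes, for example a process exit code.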
For a scenario in which complete system performance profiling with various tuning settings applied is desired, the workload validation script should collect measurements for every tuning applied and return failure to the data streams optimizer no matter the actual outcome. The measurements can subsequently be analyzed to determine which tuning attempt resulted in optimal performance. Correlating that tuning attempt with the data streams optimizer logs yields the exact requirements to provide in the input requirements file to obtain the desired optimal performance, which you can then apply and verify.
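A profiling-oriented script along these lines could archive each attempt's measurements and then unconditionally report failure, forcing the optimizer to move on to the next tuning. The file and directory names below are placeholders:

```python
# Sketch of the profiling variant: save a timestamped copy of each
# tuning attempt's measurements, then always report failure so the
# data streams optimizer proceeds to the next tuning. Paths are
# placeholders for this example.
import os
import shutil
import time

def archive_and_fail(measurement_file: str, archive_dir: str) -> int:
    """Copy measurements aside and return a nonzero (failure) code."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = os.path.join(archive_dir, f"measurements-{stamp}.log")
    shutil.copy(measurement_file, dest)
    return 1  # unconditional failure keeps the optimizer iterating
```

After the run, the archived files can be compared offline and correlated with the optimizer logs to find the best-performing attempt.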
You may choose not to create, or may be unable to create, a workload validation script, and instead manually observe the target platform performance to confirm tuning success. In that scenario, the workload validation script may simply consist of a "pause" command, which allows you to perform the needed observations and then return a success or failure signal to the data streams optimizer through the prescribed interface.
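In Python, such a pause-style script might look like the sketch below; the yes/no prompt is an illustrative way to collect your verdict, not a prescribed interface:

```python
# Sketch of a pause-style validation script: wait for the human to
# finish observing the platform, then turn the answer into a result
# code. The prompt wording is an illustrative assumption.

def prompt_for_verdict(ask=input) -> int:
    """Return 0 (success) for a 'y...' answer, 1 (failure) otherwise."""
    answer = ask("Did the platform meet performance expectations? [y/n] ")
    return 0 if answer.strip().lower().startswith("y") else 1
```

A real script would pass this return value to the data streams optimizer through the prescribed interface, for example as the process exit code.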