Mohammad Bidmeshki, INT31
Christian Wachsmann, INT31
Validating the correctness of hardware designs is essential to delivering functional and reliable products. This validation occurs throughout the product development lifecycle, with particular emphasis on the pre-silicon phase, where RTL (Register Transfer Level) designs—typically written in Verilog or VHDL hardware description languages—are analyzed, and the post-silicon phase, which involves testing on actual hardware. Identifying and resolving implementation and integration bugs early in development leads to more efficient and effective fixes. Ideally, all bugs should be detected and addressed during the pre-silicon stage, before hardware fabrication begins.
Pre-silicon verification of RTL designs often relies on simulation-based testing, which is both time-consuming and computationally intensive. Simulations frequently fail after extended runtimes due to simple integration issues, such as misconfigured design parameters, incorrect strap input values, or faulty signal connections. These bugs are typically visible in the static RTL design and can be identified more efficiently through static RTL analysis than simulation or other dynamic validation methods.
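To make this concrete, the short Python sketch below illustrates the kind of check involved: it scans an instantiation for an expected parameter override and strap tie-off. The module, parameter, and signal names (sec_ctrl, DEBUG_EN, fuse_lock) are invented for illustration, and production tools analyze the elaborated design rather than raw source text.

```python
# A minimal sketch, assuming a hypothetical module "sec_ctrl" whose DEBUG_EN
# parameter must be overridden to 0 and whose "fuse_lock" strap must be tied
# to 1'b1. Both requirements are violated in the sample text below.
import re

RTL = """
sec_ctrl #(.DEBUG_EN(1)) u_sec_ctrl (
    .clk      (clk),
    .fuse_lock(1'b0)
);
"""

def check_sec_ctrl(rtl: str) -> list[str]:
    """Return human-readable findings for the sec_ctrl instantiation."""
    findings = []
    param = re.search(r"sec_ctrl\s*#\(\s*\.DEBUG_EN\((\d+)\)", rtl)
    if param and param.group(1) != "0":
        findings.append(f"DEBUG_EN overridden to {param.group(1)}, expected 0")
    strap = re.search(r"\.fuse_lock\s*\(\s*(1'b[01])\s*\)", rtl)
    if strap and strap.group(1) != "1'b1":
        findings.append(f"fuse_lock tied to {strap.group(1)}, expected 1'b1")
    return findings

for finding in check_sec_ctrl(RTL):
    print("FAIL:", finding)
```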
What is Static RTL Analysis?
Static RTL analysis is a pre-silicon validation technique that inspects RTL code for structural and semantic issues without simulating or emulating it. Static RTL analysis aims to replicate the transformative impact of static analysis on software code—enabling early detection of design issues, improving quality, and reducing costly downstream fixes. However, static RTL analysis differs significantly from static software code analysis. While software analysis typically deals with sequential, high-level logic and well-defined control flows, RTL analysis must handle highly parallel, clock-driven hardware descriptions with intricate timing and concurrency considerations. This makes static RTL analysis inherently more complex, requiring specialized tools and methodologies to interpret hardware semantics accurately.
Moreover, it is important to distinguish between static RTL analysis and RTL linting. RTL linting is a specific subset of static analysis that primarily enforces coding guidelines and style conventions and flags basic syntactic or structural issues. In contrast, static RTL analysis takes a deeper approach. It performs semantic checks, examines design intent, and identifies functional and structural problems such as unreachable logic, dead code, and potential synthesis mismatches. In essence, linting serves as the first line of defense for maintaining code hygiene, while static RTL analysis offers a more comprehensive method for verifying design correctness and robustness. This early feedback loop reduces downstream debugging effort, enhances design quality, and accelerates time-to-market by catching issues before simulation or synthesis begins.
What can be Validated with Static RTL Analysis?
Static RTL analysis enables comprehensive validation of a wide range of design characteristics, including design parameters, hardwired signals, module interconnections, and the conditions and delays associated with signal paths. These foundational capabilities underpin more advanced and expressive validation scenarios. Unlike dynamic simulation, which explores only a limited subset of execution paths based on specific test stimuli, static RTL analysis exhaustively examines all possible structural and logical configurations of a design—without requiring simulation vectors.
Beyond basic structural checks, static RTL analysis can validate logic structures with potential security implications, such as finite state machines (FSMs). It can identify FSM components like state registers and transition logic, detect unreachable or redundant states and transitions, and verify structural completeness, including reset behavior, default transitions, and transition coverage.
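As an illustration, the following sketch performs one such check, reachability of FSM states from the reset state, over transition edges assumed to have already been extracted from the RTL; the states and edges are invented for this example.

```python
# A minimal sketch: breadth-first search from the reset state over transition
# edges assumed to have been extracted from the RTL; any state never visited
# is structurally unreachable. States and edges are invented for illustration.
from collections import deque

STATES = {"IDLE", "LOAD", "EXEC", "DONE", "DEBUG"}
RESET_STATE = "IDLE"
TRANSITIONS = {  # current state -> set of possible next states
    "IDLE": {"LOAD"},
    "LOAD": {"EXEC"},
    "EXEC": {"DONE", "IDLE"},
    "DONE": {"IDLE"},
    "DEBUG": {"IDLE"},  # nothing transitions *into* DEBUG
}

def unreachable_states(reset: str, edges: dict, all_states: set) -> set:
    """Report states that can never be reached from the reset state."""
    seen, queue = {reset}, deque([reset])
    while queue:
        for nxt in edges.get(queue.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return all_states - seen

print(unreachable_states(RESET_STATE, TRANSITIONS, STATES))  # {'DEBUG'}
```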
Furthermore, static RTL analysis can effectively detect hardware-related CWEs (Common Weakness Enumerations) that manifest as structural or semantic flaws in RTL code. However, its static nature makes it less effective at identifying CWEs that depend on dynamic context or runtime behavior, such as side-channel vulnerabilities or speculative execution flaws. CWEs well-suited for detection through static analysis include, but are not limited to, the following (a sketch of one such check appears after the list):
- CWE-1276: Hardware Child Block Incorrectly Connected to Parent System
- CWE-1192: Improper Identifier for IP Block used in System-On-Chip (SOC)
- CWE-276: Incorrect Default Permissions
- CWE-1191: On-Chip Debug and Test Interface with Improper Access Control
- CWE-1271: Uninitialized Value on Reset for Registers Holding Security Settings
- CWE-1317: Improper Access Control in Fabric Bridge
- CWE-1254: Incorrect Comparison Logic Granularity
- CWE-1267: Policy Uses Obsolete Encoding
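As a minimal sketch of how one entry on this list might be checked statically, the example below flags security-setting registers that receive no assignment in the reset branch of their always block, in the spirit of CWE-1271. The RTL snippet, the rst_n reset convention, and the register names are all invented, and a production tool would operate on a parsed design rather than regular expressions over source text.

```python
# A hedged sketch in the spirit of CWE-1271: flag security-setting registers
# with no assignment in the reset branch. RTL snippet and names are invented.
import re

RTL = """
always @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
        lock_dbg  <= 1'b1;
    end else begin
        lock_dbg  <= cfg_dbg;
        lock_fuse <= cfg_fuse;
    end
end
"""

# Registers assumed (for this example) to hold security settings.
SECURITY_REGS = {"lock_dbg", "lock_fuse"}

def regs_without_reset(rtl: str) -> set[str]:
    """Return security registers not assigned in the reset branch."""
    branch = re.search(r"if \(!rst_n\) begin(.*?)end", rtl, re.S)
    body = branch.group(1) if branch else ""
    reset_regs = set(re.findall(r"(\w+)\s*<=", body))
    return SECURITY_REGS - reset_regs

print(regs_without_reset(RTL))  # {'lock_fuse'}: no defined value on reset
```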
Various commercial pre-silicon validation tools incorporate static RTL analysis capabilities, including linting, structural checks, and connectivity verification. These tools typically support checks for generic RTL design issues related to block and signal reachability, array index mismatches, FSM states and transitions, combinational loops, and dead code—providing a robust foundation for early design validation and security assurance.
Does Intel use Static RTL Analysis?
Since 2015, Intel has been developing and using Cobra, an internally built static RTL analysis tool primarily aimed at validating the integration of debug security hardware components and default access control policies for on-die fabric targets. The first version of Cobra focused on basic rule-checking and presented RTL-derived design information in a human-readable format for manual inspection. Cobra 2 expanded these capabilities by enabling signal tracing, supporting a broader range of debug hardware modules, and allowing user-defined validation rules through a plugin framework. These earlier versions generated spreadsheet outputs to assist in reviewing integration requirements for large-scale RTL designs, such as Intel’s server and client SoCs. Reviewing these spreadsheets, however, remained a largely manual process that scaled poorly with design size, leaving clear room for improvement.
We developed Cobra 3, a next-generation static RTL analysis tool for pre-silicon validation, to address these limitations. Cobra 3 incorporates lessons from previous versions and insights from analyzing multiple Intel SoC designs. It provides a framework for statically validating SoC- and RTL module-specific integration requirements in large-scale RTL environments. Cobra 3 goes beyond the capabilities of commercial pre-silicon validation tools, which are often limited to RTL linting and static checks for generic RTL design issues. By leveraging regular expressions to identify relevant RTL modules and signal names, Cobra 3 enables validation tasks to be ported across projects with minimal or no modification. Designed for flexibility and extensibility, it supports a wide range of validation scenarios and integrates fully into continuous integration and continuous delivery (CI/CD) workflows.
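The snippet below illustrates the regular-expression idea in isolation: a single pattern resolves the corresponding debug-control instance in two projects that follow different naming conventions. The hierarchy paths and the pattern are hypothetical and do not reflect Cobra 3 internals.

```python
# An illustrative sketch: one regular expression resolves the "same" debug
# control instance across two projects with different naming conventions.
# Paths and pattern are hypothetical, not Cobra 3 internals.
import re

PATTERN = re.compile(r"(?:soc|chip)_\w*dbg_ctrl(?:_inst)?\d*")

project_a = ["soc_top.soc_dbg_ctrl_inst0", "soc_top.mem_ctrl"]
project_b = ["chip.partition2.chip_sec_dbg_ctrl1", "chip.partition2.pcie_phy"]

for hierarchy in (project_a, project_b):
    # The same validation task can target both projects unmodified.
    print([path for path in hierarchy if PATTERN.search(path)])
```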
How Does Cobra 3 Work?
Cobra 3 comprises two primary components: the Extraction Module and the Analysis Module. The Extraction Module performs design exploration and signal connection tracing to extract relevant design information, as specified in its Extractor Configuration file. The Analysis Module provides core functionality to parse and access this extracted data, and the Requirement Checker builds on these core functions to validate integration requirements. Cobra 3 outputs a pass/fail result that can be integrated into CI/CD workflows and generates a detailed report highlighting validation outcomes, including which tests failed and the reasons for failure.
Cobra 3 provides templates for the Extractor Configuration and the Requirement Checker to streamline adoption. The Requirement Checker template includes essential features such as reading extracted RTL design data, generating log files, and producing validation reports. It also contains stubs for implementing custom requirement checks. The Extractor Configuration file defines labels for various pieces of design information. These labels help the Requirement Checker identify and validate the relevant data from the Extraction Module’s output. As a result, updates to the Extractor Configuration typically do not require changes to the Requirement Checker, and vice versa, promoting modularity and ease of maintenance.
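The sketch below illustrates this label mechanism with invented names and data shapes; Cobra 3’s actual file formats and interfaces are internal and may differ. The configuration assigns labels to the design data to extract, and a requirement check looks up extracted records by label only, which is what lets the two sides evolve independently.

```python
# A hypothetical illustration of the label mechanism; file formats, label
# names, and APIs are invented and may differ from Cobra 3's internals.

# Extractor Configuration: labels name the design data to be extracted.
EXTRACTOR_CONFIG = {
    "labels": {
        "trigger_inputs": {"signal_regex": r"\w*trig_in\d*"},
        "policy_straps": {"signal_regex": r"\w*policy_strap\w*"},
    }
}

# Stand-in for the Extraction Module's output: label -> extracted records.
extracted = {
    "trigger_inputs": [{"path": "soc.p0.trig_in0", "driver": "dbg_aggr.out0"}],
    "policy_straps": [{"path": "soc.p0.policy_strap", "driver": "1'b0"}],
}

def check_policy_straps(data: dict) -> bool:
    """Requirement: every policy strap must be tied to constant 1'b1.

    The checker addresses data only through the "policy_straps" label, so
    changes to how that data is extracted do not affect this function.
    """
    return all(rec["driver"] == "1'b1" for rec in data["policy_straps"])

print("PASS" if check_policy_straps(extracted) else "FAIL")  # FAIL: tied low
```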
How to Use Cobra 3?
A typical Cobra 3 workflow begins by identifying the relevant validation requirements and the necessary design information, including RTL modules, signals, and parameters. Next, users create an Extractor Configuration file and prepare the RTL model to ensure that all required data is present and available for extraction. This RTL model is then analyzed using Cobra 3’s Extraction Module, either through a command-line interface for automation or interactively during test case development. Finally, users implement a Requirement Checker to evaluate the extracted data and verify compliance with the specified requirements. Generative artificial intelligence (GenAI) tools can assist both in identifying validation requirements and design information and in implementing the Extractor Configuration file and the Requirement Checker.
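The skeleton below sketches how such a workflow could be wired into a CI gate. It is deliberately tool-agnostic: the function names and data shapes are placeholders rather than Cobra 3’s real API.

```python
# A tool-agnostic skeleton of the workflow above, usable as a CI gate.
# Function names and data shapes are placeholders, not Cobra 3's real API.
import sys

def run_extraction(config_path: str) -> dict:
    """Placeholder: run the extraction step and load its labeled output."""
    return {"trigger_inputs": []}  # pretend the extraction found nothing

def run_checks(extracted: dict) -> list[str]:
    """Placeholder requirement checks over the extracted design data."""
    failures = []
    if not extracted.get("trigger_inputs"):
        failures.append("no trigger inputs extracted; check RTL preparation")
    return failures

if __name__ == "__main__":
    failures = run_checks(run_extraction("extractor_config.cfg"))
    for failure in failures:
        print("FAIL:", failure)
    sys.exit(1 if failures else 0)  # nonzero exit fails the CI stage
```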
Once an Extractor Configuration and Requirement Checker are developed for a specific validation task, they are often highly reusable across different RTL designs with little to no modification. Over time, as Cobra 3 is adopted across various projects to validate a wide range of requirements, we envision building a shared repository of Extractor Configurations and Requirement Checkers. This internal database would allow validation teams at Intel to leverage existing work, significantly lowering the barrier to adoption. In most cases, adapting a validation task to a new project would only require minimal adjustments—such as fine-tuning the Extractor Configuration or applying RTL design-specific settings like black boxing.
How Effective is Cobra 3?
We demonstrated the applicability, effectiveness, and efficiency of Cobra 3 through a series of pilot projects targeting SoC-level validation tasks on a test chip RTL design exceeding 300 million gates.
In one pilot, Cobra 3 automatically validated integration requirements related to trigger input connections and delays across more than 500 RTL module instances distributed throughout the test chip. This task, which previously required several weeks of manual RTL review, was completed in approximately four minutes. The same validation task was successfully reused on another SoC design with only minor adjustments to the Extractor Configuration.
In another pilot, Cobra 3 validated Intel® Debug Protection Technology (Intel® DPT) integration requirements. Specifically, it analyzed signal connections between the State Aggregation and Privilege Generation Logic and over 400 target RTL modules that enforce access control to debug features and platform assets according to the Security Policy and Debug Capability Enabling Settings. These modules were distributed across all design partitions of the test chip, and the validation—covering constraints on source and destination signals, connection conditions, and signal delays—was completed in under 30 minutes.
A third pilot showcased Cobra 3’s scalability by validating access control policies for more than 93,000 test and debug register instances within a few hours. Traditional validation methods, such as simulation-based testing, are typically limited to small sample sets and cannot effectively scale to this volume.
Across all pilots, Cobra 3 successfully identified multiple integration bugs, underscoring its value as a powerful and scalable static RTL validation tool.
What’s Next?
Static RTL analysis has proven to be a powerful complement to traditional simulation-based validation, particularly for identifying structural and integration issues early in the design cycle. Tools like Cobra 3 and commercial static analysis solutions have shown that detecting integration and security flaws long before silicon is possible and practical for large-scale RTL designs. Yet, significant untapped potential remains.
As hardware designs grow increasingly complex and security requirements become more demanding, the need for scalable, early-stage validation techniques becomes critical. Many hardware CWEs remain difficult to detect automatically, and further innovation is needed to generalize static analysis for security and functional verification across diverse SoC architectures. There are still many unexplored use cases—ranging from validating security requirements to verifying functional correctness—where static analysis could reduce reliance on manual reviews and late-stage testing. By complementing simulation and formal verification, static RTL analysis can also accelerate validation timelines.
Looking ahead, we see opportunities to shift static analysis even further left—toward real-time feedback during RTL development, much like how modern integrated development environments (IDEs) assist software engineers. Embedding static checks directly into the designer’s workflow could foster a culture of correctness and security by design, ultimately improving quality and speeding up development.
Realizing this vision will require more than just advanced tools—it will require a collaborative community. Building upon existing solutions and fostering collaboration and partnerships across industry and academia can lower the barrier to adoption and drive innovation. With broader community involvement, the vision of more intelligent, automated, and accessible RTL validation can become a reality—bringing us closer to secure and reliable hardware from the ground up.
About the Authors
Christian Wachsmann is an Offensive Security Researcher in Intel's INT31 team, specializing in hardware security—from automated validation and static RTL analysis to secure debug architectures. Previously a Platform Security Architect, he led the end-to-end security design for cellular modem chipsets and spearheaded efforts to secure Intel’s Edge, 5G, Bluetooth, and Wi-Fi platforms. He holds a doctorate from TU Darmstadt, where his research in the System Security Lab focused on hardware-rooted security and cryptographic protocols leveraging primitives such as Physical Unclonable Functions (PUFs).
Mohammad Mahdi Bidmeshki is an Offensive Security Researcher in Intel's INT31 team. His current research focuses on static and formal RTL analysis for security verification. Previously, he was a postdoctoral research associate at The University of Texas at Dallas, where his work contributed to various areas in hardware security, including proof-carrying hardware.