Authors:
Arun Kanuparthi, INT31
Hareesh Khattri, INT31
This post shares insights from two INT31 research initiatives aimed at sharing knowledge about common security weaknesses in hardware designs. The first focuses on expanding the Common Weakness Enumeration (CWE) to cover hardware designs; the second is Hack@DAC, a hardware hacking competition that encourages researchers from industry and academia to find and mitigate vulnerabilities in realistic open-source System-on-Chip (SoC) designs. Together, these projects showcase our approach to addressing the critical need for more research in hardware security assurance by driving improvements in tools and methods using open-source examples and benchmarks of security vulnerabilities in hardware designs. We invite hardware security researchers to join and contribute to either of these efforts.
Introduction & Motivation
An important aspect of security research is the analysis of published vulnerabilities to understand root causes and develop mitigations and detection techniques that can prevent similar issues in future designs. For software security, this is a well-established practice, supported by the large database of published Common Vulnerabilities and Exposures (CVEs). MITRE maintains the Common Weakness Enumeration (CWE) for software, and similarly, the Open Worldwide Application Security Project (OWASP) publishes a list of top security risks, which is widely used for research and prioritization in web application security.
Our work on Intel products focuses on security issues in hardware and lower-layer firmware. In 2011, we began conducting systematic root cause analysis of identified security issues across our projects. Each year, we analyze issues for common weakness areas to drive research priorities, architecture and design mitigations, security assurance tools, and process updates. We have noticed two key challenges:
- Limited external focus on hardware security issues: The external research community was focused on software security. Hardware issues we identified as significant were not receiving research attention externally. Even hardware security-focused conferences concentrated on areas like hardware trojan design and detection techniques, while we saw the need to prioritize research on detecting unintentional security weaknesses that exist in hardware designs.
- Lack of open-source hardware references: There was a shortage of open-source hardware designs that could serve as reference platforms for security issues in hardware description language (HDL) code and designs, analogous to what exists for software languages. In the software space, there is a large corpus of open-source projects, libraries of vulnerable code examples, and capture-the-flag (CTF) modules used for training and for improving detection tools and techniques. For hardware designs, while published vulnerabilities existed, specific code examples were not openly available. Hardware hacking CTFs and training programs focused on targeting post-silicon hardware devices, attacking their exposed interfaces with physical or side-channel attacks.
To address the first challenge, we collaborated with MITRE to expand the scope of weakness categorization to include hardware design. To tackle the second challenge, we collaborated with Texas A&M University and TU Darmstadt (Germany) to create Hack@DAC – a hardware hacking competition based on open-source hardware. We describe our approach in the following two sections.
Expanding the Common Weakness Enumeration for Hardware
CWE is a community-developed catalogue of security weaknesses first published by MITRE in 2006. The objective of the CWE scheme and database is to identify systemic patterns of weaknesses, enabling security researchers, product developers, and tool vendors to prioritize and focus on addressing general security weaknesses, rather than one-off defects and vulnerabilities. For this purpose, each new CVE vulnerability report includes a related CWE field, enabling researchers to trace each CVE back to its root-cause CWE.
CWEs were only available for software weaknesses until 2020. We partnered with MITRE to introduce a hardware design view and contributed new hardware-specific CWEs. The initial version of the hardware CWE design view was derived from common weaknesses that Intel research had developed internally based on root cause analysis of known hardware issues. To date, we have authored over 75 hardware CWE entries and continue to drive improvements through active participation in the Hardware CWE Special Interest Group (SIG) forum. Today, the hardware CWE SIG comprises security researchers from semiconductor design houses, design automation companies, and academia.
Since the introduction of hardware CWEs in 2020, we have made the following contributions to CWE content:
- Adding and updating hardware CWE entries through new submissions based on gaps we identify when mapping new CVEs to CWEs.
- Adding demonstrative examples for hardware CWEs, either by using the published Hack@DAC models or by adding specific code modules. For instance:
- The demonstrative code example for CWE-1299 (Missing Protection Mechanism for Alternate Hardware Interface) is a generic Verilog module of buggy code.
- The demonstrative code example for CWE-1262 (Improper Access Control for Register Interface), shown in Figure 1, refers to vulnerable code inserted into the Hack@DAC 2019 model of the CVA6 RISC-V-based SoC.
- The demonstrative code example for CWE-1232 (Improper Lock Behavior After Power State Transition) refers to vulnerable code inserted into the Hack@DAC 2021 buggy OpenPiton model.
Figure 1 shows one example, where the power-on reset values assigned to software/firmware-programmable fields are not secure, and a power reset can be used to put the system into an insecure default state. The SoC design was updated to add a register-locking feature that boot firmware can use to lock specific fields, ensuring that later-stage firmware or applications cannot program the locked fields. The inserted vulnerability in this feature allows software to mount an indirect attack by programming the reset controller to clear the register locks.
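To make the pattern concrete, below is a minimal Verilog sketch of this class of weakness (in the spirit of CWE-1262 and CWE-1232). It is not the actual Hack@DAC RTL; the module and signal names are illustrative assumptions. The lock bit is intended to be set once by boot firmware, but both the lock and the protected field return to insecure defaults whenever the reset line is pulsed, so software that can reach the reset controller can indirectly unlock the field.

```verilog
// Minimal sketch (not the actual Hack@DAC RTL) of a register-lock feature with
// an insecure reset behavior. Module and signal names are illustrative only.
module locked_config_reg (
    input  wire        clk,
    input  wire        rst_n,      // reset driven by a software-reachable reset controller
    input  wire        wr_en,      // register write strobe from the bus fabric
    input  wire        lock_wr,    // current write targets the lock bit
    input  wire [31:0] wdata,
    output reg  [31:0] config_q,   // protected configuration field
    output reg         lock_q      // set once by boot firmware, intended to stay set
);
    always @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
            // WEAKNESS: the protected field and its lock come out of reset in an
            // insecure default state. If software can program the reset controller
            // to pulse rst_n, the lock is cleared and config_q is writable again.
            config_q <= 32'h0;
            lock_q   <= 1'b0;
        end else begin
            if (wr_en && lock_wr)
                lock_q <= lock_q | wdata[0];   // sticky, set-only lock bit
            if (wr_en && !lock_wr && !lock_q)
                config_q <= wdata;             // writes are blocked once locked
        end
    end
endmodule
```

A hardened version would tie the lock's reset only to a true power-on reset that software cannot trigger, or have hardware restore secure defaults and re-assert the lock after any warm reset.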
Most Important Hardware Weaknesses (MIHW)
In August 2025, MITRE released an updated list of Most Important Hardware Weaknesses. This list uses a data-driven approach, improving upon the expert-polling analysis conducted for hardware CWEs in 2021. As part of this effort, we analyzed CVEs published since 2020, along with research papers and security advisories from hardware device vendors, to categorize the root causes of these published vulnerabilities. We restricted our analysis to issues that are clearly root-caused in hardware design components. For vulnerabilities without a CVE, we performed root-cause analysis and then mapped them to existing categories in hardware CWEs. This data was then provided to the HW-CWE SIG team of domain experts for consideration in their selection of the most important hardware weaknesses.
Impact of Hardware CWE
Hardware CWE has seen significant adoption across the security research community. We note the following as the areas with the most impact.
- We see strong adoption of hardware CWEs by incident response teams at chip design companies when reporting CVEs. However, there is still room for improvement, as many CVEs do not clearly specify the root-cause weakness, or they are mapped to incorrect CWEs. Additionally, descriptions of security issues in hardware components are often given in generic terms such as access control or denial of service, to avoid disclosing the full details of proprietary hardware designs.
- The ever-evolving threat landscape and the constantly improving corpus of hardware weakness types provide exposure to new kinds of weaknesses. This facilitates the improvement of in-house security assurance best practices and allows security architecture teams to plan for survivability scenarios in case an escape is found in silicon. With an improved security mindset, it becomes easier for functional verification teams to adopt security assurance practices.
- Several new tools for identifying classes of weaknesses can be developed (see the New Tools and Methodologies section below for a list of open-source tools proposed by academia). We also see commercial tool vendors publishing guides on how effective their security verification solutions are at detecting several hardware CWEs.
Hack@DAC – An Open‑Box Hardware Hacking Competition
Most hardware CTF competitions resemble “closed box” penetration tests, where participants attack finished devices via physical interfaces such as JTAG or side-channel probes. These contests are valuable, but they do not build understanding of the security issues that arise during hardware design. We wanted to create an open-source SoC project that could be used to share code examples and pre-silicon verification models containing known security issues, providing a vehicle for sharing details of hardware security vulnerabilities.
In 2018, we started the Hack@DAC competition in collaboration with research teams from Technical University of Darmstadt, Texas A&M University, and the Synopsys Cloud team. This competition has been part of the Design Automation Conference (DAC) each year and has also been co-located with USENIX Security symposium and Cryptographic Hardware and Embedded Systems Workshop (CHES) for several years. Figure 2 shows the general flow and steps involved in organizing Hack@DAC.
Creating the Target Buggy Design
Organizing Hack@DAC involves selecting an open-source SoC as the competition target and then updating the design to inject security vulnerabilities.
Selection of CTF Target Base Model
We survey various open-source hardware designs and select one that implements a full SoC subsystem comprising CPU, peripherals, interconnect fabric, etc. Key factors that influence our choice of design include stability of the design (fully functional) as well as availability of support for hardware simulation (with open-source simulators) and FPGA prototyping.
These are the main base designs that have been used in various editions of Hack@DAC:
- PULPino: an open-source single-core microcontroller system with a 32-bit RISC-V core from the CVA6 project. This design supported an AXI bus interface for adding other IPs.
- PULPissimo: similar to the PULPino design and using the same single RISC-V core, but with added support for hardware co-processors and DMA engines. These were leveraged to add other peripherals and introduce security issues in DMA engines and cryptography accelerator IPs.
- OpenPiton: a project from Princeton's parallel-processing research group. This design enabled a two-core configuration for Hack@DAC, with two RISC-V cores running in parallel at different privilege levels. Peripheral IPs designed for the PULPino project were reused in the OpenPiton models.
- OpenTitan Earl Grey: an RV32IMCB RISC-V "Ibex" core-based design specifically developed as a hardware root-of-trust module. It has support for RISC-V ISA compliant privilege mode execution and Enhanced Physical Memory Protection (ePMP) features. We leveraged these and additional peripheral IPs of OpenTitan to add security issues. The OpenTitan based design has been used for our competitions since 2021.
In addition to the base SoC designs, we have used other specific IPs from open-source core projects, mainly for cryptography engines and co-processor blocks. This process is depicted as Step 1 in Figure 2.
Adding Security Objectives & Features
Once the open-source design is chosen, we develop a threat model with the set of adversaries in scope and define what resources and interfaces are exposed to each adversary. The design is hardened by adding security features typically found in commercial chips, such as debug port protection and access control protection for various assets. This entails enabling supported security features in the design, integrating other open-source IPs (such as cryptographic hardware accelerators), or adding custom implementations.
For example, the projects we used as base SoCs did not support debug authentication features for JTAG or other debug modes, so this functionality was added to each of the models. Since code-execution privilege levels defined by the RISC-V ISA were already supported in the Ibex RISC-V core, we reused those features.
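As a rough illustration of what debug-port protection can look like at the RTL level, here is a hypothetical Verilog sketch of a debug-authentication gate. The unlock scheme, module name, and signal names are assumptions for illustration and are not the RTL added to the competition models.

```verilog
// Hypothetical sketch of debug-port gating in the spirit of the debug
// authentication added to the Hack@DAC models; names and the simple
// key-compare unlock scheme are illustrative only.
module debug_gate (
    input  wire        clk,
    input  wire        rst_n,
    input  wire [63:0] fuse_dbg_key,    // provisioned secret (e.g., read from fuses)
    input  wire [63:0] jtag_unlock_in,  // unlock value shifted in over the debug port
    input  wire        unlock_valid,    // strobe when a complete unlock value has arrived
    input  wire        dbg_req,         // raw debug request from the TAP controller
    output wire        dbg_grant        // request forwarded only after authentication
);
    reg unlocked_q;

    always @(posedge clk or negedge rst_n) begin
        if (!rst_n)
            unlocked_q <= 1'b0;          // debug is locked out of reset
        else if (unlock_valid && (jtag_unlock_in == fuse_dbg_key))
            unlocked_q <= 1'b1;          // sticky until the next reset
    end

    // Debug accesses are blocked until the unlock value matches the fused key.
    assign dbg_grant = dbg_req & unlocked_q;
endmodule
```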
The software stack and boot flow of the SoC design have also been customized to use the added security features and to create an unprivileged user-level software application layer. The participating CTF teams can modify the software stack to load malicious code to confirm identified vulnerabilities or demonstrate full exploits. Step 2 in Figure 2 illustrates this process.
Inserting Vulnerabilities
We use categories from the CWE Hardware Design View to insert security vulnerabilities that map to different CWEs within the design. For each CWE, we select IPs and features in the design that can exhibit that weakness and modify the code to inject security vulnerabilities. These vulnerabilities are inspired by real-world security vulnerabilities from CVEs and security advisories (Step 3 in Figure 2). The inserted vulnerabilities also include variants and multiple instances added in different parts of the design; these variants are added to test whether all instances of an issue can be found using automation tools. This constitutes the buggy design that is provided to the participants during the competition (Step 4). Once the competition concludes, the models with the full list of inserted vulnerabilities are posted to the Hack-Events GitHub project.
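To show what an injected vulnerability variant might look like, here is a hypothetical Verilog fragment in the spirit of CWE-1299 (Missing Protection Mechanism for Alternate Hardware Interface): the primary bus path enforces a privilege check, but an alternate write path added to the design bypasses it. The module, signal names, and the specific check are assumptions; this is not the competition RTL.

```verilog
// Hypothetical example of an injected bug variant in the style of CWE-1299:
// the main register interface enforces an access-control check, but an
// alternate (e.g., DMA or debug) write path skips it. Names are illustrative.
module key_reg_if (
    input  wire        clk,
    input  wire        rst_n,
    input  wire        bus_wr,          // write from the primary bus fabric
    input  wire        bus_privileged,  // access-control qualifier on the primary path
    input  wire        alt_wr,          // write from an alternate interface
    input  wire [31:0] wdata,
    output reg  [31:0] key_q            // protected asset (e.g., a key register)
);
    always @(posedge clk or negedge rst_n) begin
        if (!rst_n)
            key_q <= 32'h0;
        else if (bus_wr && bus_privileged)
            key_q <= wdata;              // intended, access-checked path
        else if (alt_wr)
            key_q <= wdata;              // INSERTED BUG: no privilege check on this path
    end
endmodule
```

Variants of such a bug can be replicated across several peripherals so that only thorough tooling or review finds every instance.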
Competition Stages
Registered participants receive the buggy design (Step 5) and then:
- Phase 1 is remote: participants have two months to familiarize themselves with the design and verification environment, detect vulnerabilities using existing tools, create new tools to detect classes of bugs, or develop exploits. Participants submit bugs in a format that mimics responsible vulnerability disclosure, including CVSS scores, a description of the security impact, etc. (Step 6). We serve as judges and evaluate the bug submissions for validity of the issue, correctness of the security impact, practicality of the proposed mitigation, methodology used to detect the bug, etc. (Step 7). At design automation conferences, more points are awarded for new tools; at security conferences, more points are awarded for exploits. The best-performing teams from Phase 1 are invited to the finals at the conference (Step 8).
- For Phase 2, we collaborate with Synopsys to host an updated buggy design on their cloud platform, which comes with the latest design automation and security verification tools (Step 9). This gives the participants an opportunity to operate security assurance tools used at semiconductor design companies. The finalists also have the option to work with their custom-developed tools in their own environment (such as on an FPGA board). Phase 2 runs for 48 hours, and Steps 6 through 8 are repeated here. The top-performing teams are crowned champions (Step 10). The best-performing teams also get to present their research and may be approached about potential opportunities (Step 11).
New Tools and Methodologies
The Hack@DAC framework has been extensively leveraged to develop new tools and methodologies to automatically generate security test cases using LLMs, detect several classes of vulnerabilities, and patch vulnerabilities. Traditional security verification approaches such as simulation and formal verification, as well as novel approaches for security analysis such as static analysis, symbolic/concolic testing, hardware fuzzing, and information flow tracking, have been proposed. Below is a selected list of some recent research by academia using the Hack@DAC framework. All these approaches work on RTL, potentially detecting security vulnerabilities before they escape to silicon. A full list of papers under each category can be found in the Black Hat Asia Briefings recap.
- Security Test Case Generation and Bug Patching using LLMs
- Formal Verification
- Static Analysis
- Symbolic/ Concolic Testing
- Hardware Information Flow Tracking
- Hardware Fuzzing
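As a small illustration of the formal-verification category above, the following SystemVerilog assertion sketch captures one property a formal tool could try to prove on the lock-register example shown earlier: once the lock bit is set, the protected field must not change until the next reset. The property module, binding, and signal names are assumptions for illustration and are not taken from any of the cited papers.

```systemverilog
// Sketch of a security property that a formal tool could check against a buggy
// design: once lock_q is asserted, config_q must hold its value until reset.
// Written against the illustrative locked_config_reg module shown earlier.
module locked_config_reg_props (
    input wire        clk,
    input wire        rst_n,
    input wire        lock_q,
    input wire [31:0] config_q
);
    // If the lock was set on the previous cycle and is still set, the protected
    // configuration field must be unchanged this cycle (within a reset epoch).
    property p_lock_holds_config;
        @(posedge clk) disable iff (!rst_n)
            lock_q && $past(lock_q) |-> (config_q == $past(config_q));
    endproperty

    assert_lock_holds_config: assert property (p_lock_holds_config);
endmodule

// Example of binding the checker into the design under test (names assumed):
// bind locked_config_reg locked_config_reg_props u_props (.*);
```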
Key Takeaways from Organizing Hack@DAC
When we started Hack@DAC, participants often came in with in-depth expertise in specific domains of hardware security and focused only on those areas. For example, teams that specialized in side-channel analysis and fault injection attacks would concentrate on those aspects of the target design. Initially, many teams concentrated on the modules that implemented cryptography algorithms or on the RISC-V core processors.
Since the target designs have inserted vulnerabilities in different areas, participating teams that included members who could cover different parts of SoC design have generally performed better than those focusing on specific areas.
University teams from New York University, University of Texas at Dallas, University of Florida, Vrije University Amsterdam, Purdue University, and more participated in the competition in multiple years, and over time developed strategies to evaluate full SoCs. Participants have shared their experience at Hack@DAC in this video.
We also observed progress in the use of better automation and tools for hardware security assurance over the years. For the first few years of the competition, teams used manual code reviews combined with simulation tests to identify issues. Since 2021, we have seen teams using formal verification and fuzzing tools. In the last two years, most teams have used AI tools for all of their submitted issues. Initially, teams used public LLMs like ChatGPT to analyze the design and code. This year, most of the participating research teams used custom-trained LLMs as part of a general AI-assisted flow for security assurance of any given SoC design.
Impact and Call to Action
Expanding the hardware CWE and organizing Hack@DAC are part of our broader commitment to open research into the security of hardware designs. Through Hack@DAC, we have worked to release SoC designs with known security vulnerabilities that provide concrete examples of hardware security weaknesses described in CWE entries and inspired by confirmed vulnerabilities reported in products and in research papers.
Key results from this work so far include the publication and extensive use of the MITRE hardware CWE, and the use of the Hack@DAC models and code examples by tool vendors and researchers to test and prove new innovations in hardware security assurance.
Security is a team sport. By openly sharing weaknesses and providing a forum to practice finding them, we can raise the bar for hardware security across the ecosystem. We invite any interested teams and researchers to join either of these projects. Teams can submit requests to join the Hardware CWE SIG forum at cwe@mitre.org, and for Hack@Event, track the announcements for upcoming competitions at DAC or other conferences at https://hackthesilicon.com/.
Acknowledgements
Hack@DAC is a collaborative effort between Intel, Texas A&M University, TU Darmstadt, and Synopsys. For the most recent edition of the competition, we would like to acknowledge the contributions of Jason Fung (Intel), Prof. Jeyavijayan ‘JV’ Rajendran, Rahul Kande, Chen Chen, Stephen Muttathil (Texas A&M University), Prof. Ahmad-Reza Sadeghi, Mohamadreza Rostami (TU Darmstadt), Shylaja Sen, Yann Antonioli, Pamela McDaniel, and Archana Varanasi (Synopsys). We would also like to acknowledge alumni from SETH Lab at Texas A&M University and System Security Lab (TU Darmstadt) who directly contributed to past competitions. We would like to thank the numerous participants from around the world who participated in the competitions since 2018.
Share Your Feedback
We want to hear from you. Send comments, questions, and feedback to the INT31 team.
About the Authors
Arun Kanuparthi is a Principal Engineer and Offensive Security Researcher in Intel’s INT31 team, where he leads the offensive security research efforts on multiple Intel products. His research interests include hardware and system security, vulnerability root causing and mitigations, hardware security assurance, and formal verification for security. Arun obtained his PhD in Electrical Engineering from New York University.
Hareesh Khattri is a Principal Engineer and security researcher in Intel’s INT31 team. He has worked in hardware and platform security validation and research across multiple Intel technologies since 2006. His current focus is on Intel Xeon server designs and server platform security. Hareesh holds an M.S. in Electrical and Computer Engineering from North Dakota State University.
References
- MITRE, “CWE Hardware Design View”, the Common Weakness Enumeration view for hardware designs, maintained by the HW CWE Special Interest Group.
- Hack the silicon CTFs.
- Real Intent, “Sentry – Hardware Security Sign Off”.
- Cycuity, “Radix Coverage for Hardware Common Weakness Enumeration (CWE) Guide”.
- Kanuparthi, A., Khattri, H., Fung, J., Sadeghi, A.-R., & Rajendran, J. (2024, April). “The Hack@DAC Story: Learnings from Organizing the World's Largest Hardware Security Competition.” Black Hat Asia Briefings.