Difference between revisions of "System Hardware Assurance"

From SEBoK
Jump to navigation Jump to search
m (Text replacement - "<center>'''SEBoK v. 2.6, released 20 May 2022'''</center>" to "<center>'''SEBoK v. 2.7, released 31 October 2022'''</center>")
 
(51 intermediate revisions by 4 users not shown)
Line 1: Line 1:
 
----
 
----
  
'''''Lead Author: ''' Elizabeth McDaniel''
+
'''''Authors: ''' Michael Bear, Donald Davidson, Shawn Fetterolf, Darin Leonhardt, Daniel Radack, Karen Johnson, Elizabeth A. McDaniel   '''Contributors:'''  Michael Berry, Brian Cohen, Diganta Das, Houman Homayoun, Thomas McDermott''
  
 
----
 
----
 
This article describes the discipline of hardware assurance, especially as it relates to systems engineering. It is part of the [[Systems Engineering and Quality Attributes | SE and Quality Attributes]] Knowledge Area.
 
This article describes the discipline of hardware assurance, especially as it relates to systems engineering. It is part of the [[Systems Engineering and Quality Attributes | SE and Quality Attributes]] Knowledge Area.
  
===Overview===
+
==Overview==
  
Hardware assurance is a set of system security engineering activities undertaken to improve the confidence that electronics function as intended and only as intended throughout their life cycle, and to manage identified risks. The term hardware refers to the microelectronic components, sometimes called integrated circuits, and other electronic components. These hardware components, the products of complex processes involving multiple stages of design, manufacturing, and post-manufacturing, must properly functioning under all circumstances. Here the term "assurance" refers to the set of activities undertaken to improve the confidence that the hardware functions as intended, and only as intended.  Hardware components alone, and when integrated into subcomponents, subsystems, and systems, may have weaknesses and vulnerabilities that offer avenues for exploitation. Hardware risks can be differentiated as weaknesses - flaws, bugs, or errors in design, architecture, code, or implementation, and vulnerabilities that are exploitable weaknesses in the context of use.   
+
{{Term|system hardware assurance (glossary)|System hardware assurance}} is a set of system security engineering activities (see [[System Security]] for more information) undertaken to quantify and increase the confidence that electronics function as intended and only as intended throughout their life cycle, and to manage identified risks. The term ''hardware'' refers to electronic components, sometimes called integrated circuits or chips. As products of multi-stage processes involving design, manufacturing and post-manufacturing, packaging, and test, they must function properly under a wide range of circumstances. Hardware components alone and integrated into subcomponents, subsystems, and systems have weaknesses and vulnerabilities enabling exploitation. Weaknesses are flaws, bugs, or errors in design, architecture, code, or implementation. Vulnerabilities are weaknesses that are exploitable in the context of use (Martin 2014).   
  
Hardware assurance approaches are designed to lessen these risks. Consequences of risk that are not mitigated may include adversary exploitation and subversion of system functionality, counterfeit production of components, and loss of technology advantage for the military and private sector organizations.  
+
Hardware assurance is conducted to minimize risks related to hardware that can enable adversarial exploitation and subversion of functionality, counterfeit production, and loss of technological advantage.  Challenges include increasing levels of sophistication and complexity of hardware architectures, integrated circuits, operating systems, and application software, combined with supply chain risks, emergence of new attack surfaces, and reliance on global sources for some components and technologies.  
Perhaps the most comprehensive recent standard textbooks on Graphical Information Systems (GIS) are Longley et al. (2015) and Kresse and Danko (2012). Beyond these two textbooks, there are many others on GIS and respective spatial data capture procedures (surveying, photogrammetry, and remote sensing) and management applications. Tomlinson (2019) and Peters (2012) as well as the online successor to this text book, [http://www.wiki.gis.com/wiki/index.php/System_Design_Strategies System Design Strategies], provide valuable insights into aspects of how to set up a GIS system. While Tomlinson (2019) looks more at the management perspective and processes of implementing a GIS, Peters (2012) focuses more on the technical aspects.
 
  
Domain-specific GIS applications are also documented in numerous textbooks. Domain areas include agriculture and forestry, insurance economics and risk analysis, simulation and environmental impact analysis, hydrology, archaeology, ecology, crime investigation and forensics, disaster management and first responders, marketing, municipalities and cadaster, land administration and urban planning, utility sectors, telecommunications, smart cities and military applications. The latter include Command and Control (C2) systems, or are even extended in Command, Control, Communications, Computers Intelligence, Surveillance and Reconnaissance (C4ISR) systems. Generally speaking, wherever data, i.e. information to events that occur, is displayed, processed and/or analyzed in a geospatial context, there is GIS technology involved, whether the user interface is based on a web interface, a desktop client, or a mobile device such as a smartphone or tablet computer. To provide visualization of this data it is necessary to have a spatial context for orientation, geospatial data like digital topographic maps, or geospatial imagery, digital elevation model, etc. Beyond such classical geospatial data, other types of data are also often used, such as meteorological and other environmental data.
+
After identifying concerns and applicable mitigations, hardware assurance offers a range of possible activities and processes. At the component level, hardware assurance focuses on the hardware itself and the supply chain used to design and manufacture it; at the subcomponent, subsystems, and system levels, hardware assurance incorporates the software and firmware integrated with the component.  
  
Interoperability is of major concern in geospatial technology. The [http://www.opengeospatial.org Open Geospatial Consortium] (OGC) is probably the most relevant organization that deals with GIS and sensor systems interoperability. The OGC has published a dedicated set of [http://www.opengeospatial.org/standards interface specification standards] on their topics.
+
Engineering efforts to enhance trust in hardware have increased in response to complex hardware architectures, the increasing sophistication of adversarial attacks on hardware, and globalization of supply chains. These factors raise serious concerns about the security, confidentiality, integrity, and availability as well as the provenance and authenticity of hardware. The “root of trust” (NIST 2020) of a system is typically contained in the processes, steps, and layers of hardware components and across the systems engineering development cycle. System hardware assurance focuses on hardware components and their interconnections with software and firmware to reduce risks to proper function or other compromises of the hardware throughout the complete life cycle of components and systems. Advances in hardware assurance tools and techniques will strengthen designs, and enhance assurance during manufacturing, packaging, test, and deployment and operational use.
  
===Positioning, Navigation and Timing===
+
==Life Cycle Concerns of Hardware Components==  
  
The previous description of geospatial technology focused mostly on stationary objects, i.e. non-moving geospatial data. This section is mainly concerned with objects moving in space and with derived applications such as navigation, monitoring, and tracking such objects. The basic operations needed are positioning and navigation. Certainly, the majority of people using smartphones are also using various location based services (LBS) that are provided in conjunction with GIS databases and services, such as Google Maps, a well-known online GIS application. As a cons
+
Hardware assurance should be applied at various stages of a component’s life cycle from hardware architecture and design, through manufacturing and testing, and finally throughout its inclusion in a larger system. The need for hardware assurance then continues throughout its operational life including sustainment and disposal.
  
 +
As semiconductor technology advances the complexity of electronic components, it increases the need to “bake-in” assurance. Risks created during architecture, design, and manufacturing are challenging to address during the operational phase. Risks associated with interconnections between and among chips are also a concern. Therefore, improving a hardware assurance posture must occur as early as possible in the life cycle, thereby reducing the cost and schedule impacts associated with “fixing” components later in the life cycle of the system.
  
 +
A conceptual overview of the typical hardware life cycle (Figure 1) illustrates the phases of the life cycle of components, as well as the subsystems and systems in which they operate. In each phase multiple parties and processes contribute a large set of variables and corresponding attack surfaces. As a result, the potential exists for compromise of the hardware as well as the subcomponents and systems in which they operate; therefore, matching mitigations should be applied at the time the risks are identified.
  
The objective of hardware assurance is to prevent loss, damage, and other compromises to the intended functionality of the components themselves and when they become integral to subsystems and systems. Depending on identified concerns and desired mitigations, hardware assurance offers a wide range of combinations of possible activities and processes. At the component level, hardware assurance focuses on the hardware itself, when and where it was designed and manufactured, and its supply chain to delivery. Hardware assurance also focuses on hardware components when they are integrated and connected to subcomponents, components, subsystems, and systems because when components are connected to other components executing software and firmware and operating as part of systems the assurance of their security is essential. 
+
[[File:Component_Lifecycle_(rev_a)_-_MJB_ver2.jpg|thumb|center|800px|'''Figure 1. Component Life Cycle.''' (SEBoK Original)]]
  
The drivers of hardware assurance are growing concerns over the confidentiality, integrity, and availability of individual integrated circuits and their interconnections into circuits. These concerns have grown with the increasing sophistication and complexity of hardware architectures, integrated circuits, operating systems and application software, with considerations of supply chain risks, emergence of new attack surfaces, and reliance on globalized sources for some components and technologies. The root of trust of a system is typically contained in the hardware component with consequences associated with hardware attacks very high in the system in which they are embedded’s processes, steps, and layer. and across the systems engineering development cycle.  The consequences of hardware vulnerabilities have direct analogs to the consequences of cyber and software-based vulnerabilities that are well reported. Fundamentally, hardware assurance focuses to the hardware component itself as well as the component’s interconnections with software and firmware, and other components in order to have full assurance at the system level.
+
Both the value of the hardware component and the associated cost of mitigating risks increase at each stage of the life cycle. Therefore, it is important to identify and mitigate vulnerabilities as early as possible. It takes longer to find and fix defects later, thereby increasing the complexity of replacing hardware with “corrected” designs that create system integration issues. In addition to cost savings, early correction and mitigation avoid delays in creating an operational system. It is essential to re-assess risks associated with hardware components throughout the life cycle periodically, especially as operational conditions change.  
  
Hardware assurance activities can be applied to components during their lifecycle to reduce the likelihood of risks to proper function or other potential compromises of the hardware.  It is likely that new tools and techniques for hardware assurance will be developed in the future to strengthen designs, and new methods will enhance assurance during complex manufacturing, packaging and test, and deployment.  The more critical the component in the context of its use and the more critical the system in which it is used dictate the necessary degree of hardware assurance to address the possible operational risks. It is essential to assess risks associated with hardware components in all phases of the component and system’s life cycle periodically.
+
Hardware assurance during system sustainment is a novel challenge given legacy hardware and designs with their associated supply chains. In long-lived high-reliability systems, hardware assurance issues are compounded by obsolescence and diminished sourcing of components, thereby increasing concerns related to counterfeits and acquisitions from the gray market.  
  
 +
==Function as Intended and Only as Intended==
 +
Exhaustive testing can check system functions against specifications and expectations; however, checking for unintended functions is problematic. Consumers have a reasonable expectation that a purchased product will perform as advertised and function properly (safely and securely, under specified conditions) – but consumers rarely consider if additional functions are built into the product. For example, a laptop with a web-conferencing capability comes with a webcam that will function properly when enabled, but what if the webcam also functions when turned off, thereby violating expectations of privacy? Given that a state-of-the-art semiconductor component might have billions of transistors, “hidden” functions might be exploitable by adversaries. The statement “function as intended and only intended” communicates the need to check for unintended functions.
  
Life Cycle Concerns of Hardware Components
+
Hardware specifications and information in the design phase are needed to validate that components function properly to support systems or missions. If an engineer creates specifications that support assurance that flow down the system development process, the concept of “function as intended” can be validated for the system and mission through accepted verification and validation processes. “Function only as intended” is also a consequence of capturing the requirements and specifications to assure the product is designed and developed without extra functionality. For example, a Field Programmable Gate Array (FPGA) contains programmable logic that is highly configurable; however, the programmable circuitry might be susceptible to exploitation.  
Aspects of hardware assurance may be applied at various stages of a component’s life cycle (See Figure 1) that extends from hardware concept development and design processes, through manufacturing and associated processes, testing and distribution channels, and finally throughout the use in the larger electronic system. The need for hardware assurance continues throughout its operational life including sustainment and disposal.
 
The complexity of electronic components increases as semiconductor technology advances; therefore the need to “bake-in” assurance increases.  Risks can be created during design, and it may not be possible to mitigate all of them externally during operation.  Risks can also increase as components are incorporated into systems where interconnections among chips may provide new avenues of concern. Improving hardware assurance posture as early as possible in the life cycle also reduces cost and schedule impacts to “fix” components later in the life cycle.
 
A generalized and conceptualized overview of the typical hardware life cycle (Figure 1) illustrates the phases of the life cycle of components, as well as the subsystems and systems in which they operate.  In each phase multiple parties and processes are involved, thereby contributing to a very large set of variables and corresponding attack surfaces. At every stage the potential exists for compromise of the hardware as well as subcomponents and systems in which they operate; therefore, matching mitigations must be identified and applied.
 
Overview of Hardware Life Cycle
 
  
The value of the hardware component increases at each stage of the life cycle, so it is important to identify and mitigate weaknesses to address assurance as early as possible. In addition to cost savings, early correction and mitigation avoids delays in creating an operational system. It typically takes longer to find and fix defects later, and this can greatly add in complexity to replace hardware with “corrected” designs.  
+
Given the specifications of a hardware component, specialized tools and processes can be used to determine with a high degree of confidence whether the component’s performance meets specifications. Research efforts are underway to develop robust methods to validate that a component does not have capabilities that threaten assurance or that are not specified in the original design. Although tools and processes can test for known weaknesses, operational vulnerabilities, and deviations from expected performance, all states of possible anomalous behavior cannot currently be determined or predicted.  
  
Hardware assurance during sustainment is also a novel challenge given legacy hardware and designs with their associated supply chains and acquisition.  In long-lived high reliability systems, hardware assurance issues are compounded by obsolescence and diminished sourcing. The risks of counterfeits and acquisitions from the gray market are among the concerns.
+
Data and information can be used to validate the component’s function and should be collected from multiple sources including designers, developers, and members of the user community. Designers and developers can provide deep understanding of the component’s intended function and provide tests used to verify its functional performance before fielding. The merging of component design and development information with extensive field data, including third-party evaluation, contributes to assurance that the component is performing specified functions and that no unintended functionality is observed. 
 
Function as Intended and Only as Intended
 
Exhaustive testing can be used to check system functions against specifications and expectations, but checking for unintended functions is problematic. Consumers of products have a reasonable expectation that a purchased product will perform as advertised/indicated and function properly(safely and securely, under specified conditions), but they rarely consider if any additional functions are built into the product.  For example, a laptop with a face-to-face web-conferencing capability comes with a webcam that will function properly when enabled.  But what if it functions when supposedly turned off; thereby violating expectations of privacy?  Given that a state-of-art semiconductor die might have billions of transistors, it is theoretically possible that “hidden” functions might be exploited by adversaries. The statement “function as intended and only intended” communicates the concept.
 
  
Hardware assurance typically involve stages and layers of activities in a manner similar to cybersecurity. Hardware specifications and information in the design phase are needed so components can be later validated to perform function properly for systems or missions.  If an engineer creates specifications that support assurance that flow down the system development process, the concept of function as intended can be validated for the system and mission through accepted verification and validation processes. Function only as intended is a consequence of capturing the requirements/specifications so the product is designed and developed without extra functionality.  For example, an FPGA contains much programmable functionality to perform in a highly flexible manner; however, the programmable circuitry might be susceptible to exploitation by knowledgeable persons.
+
==Risks to Hardware==
Given specifications of a hardware component, select tools and processes can be used to determine that the component’s performance meets specifications, with a high degree of confidence.  Research efforts are underway to develop robust methods to validate that a component does not have capabilities that threaten assurance and are not specified in the original design. While select tools and processes can test for known weaknesses, operational vulnerabilities and deviations from expected performance characteristics/behavior, all states of possible anomalous behavior cannot be determined or predicted. It is possible to determine the presence of known weaknesses/vulnerabilities and to document recommended levels of monitoring, diagnostics and mitigations with some degree/percentage of coverage or confidence level that there are likely no unintended functions in the hardware component.
+
Modern systems depend on complex microelectronics, but advances in hardware without attention to associated risks can expose critical systems, their information, and the people who rely on them. “Hardware is evolving rapidly, thus creating fundamentally new attack surfaces, many of which will never be entirely secured”. (Oberg 2020)  Therefore, it is imperative that risk be modeled through a dynamic risk profile and be mitigated in depth across the entire profile. Hardware assurance requires extensible mitigations and strategies that can and do evolve as threats do. Hardware assurance methods seek to quantify and improve confidence that weaknesses that can become vulnerabilities that create risks are mitigated.
Three entities can test the hardware component and provide data for such assurance consideration: the designer, developer, and provider community can provide a suitably complete description of design, fabrication data and meta-data, verification-and-validation, and can acquire data in design and manufacture.  When the provider collects such data s/he can create an assurance case for the acquirer: the acquirer, consumer, and user community can conduct acceptance testing, both in static and dynamic situations in a real or simulated operational environments (much like software-testing and operational testing of systems); and the provider and/or acquirer can solicit 3rd party test/evaluation to collect independent data that there no unintended functionality was found in the component.  Ideally more than one source of data informs this assurance/confidence level.
 
  
Risks to Hardware
+
Most hardware components are commercially designed, manufactured, and inserted into larger assemblies by multi-national companies with global supply chains. Understanding the provenance and participants in complex global supply chains is fundamental to assessing risks associated with the components.  
Modern systems depend on complex microelectronics but advances in hardware without attention to associated risks can expose critical systems, their information, and the people who rely on them. “Hardware is evolving rapidly, thus creating fundamentally new attack surfaces, many of which will never be entirely secured.”  In this way hardware assurance mirrors cybersecurity because both require mitigations and strategies that evolve as threats do so. Hardware assurance methods seek to raise confidence in the hardware to mitigate known or expected weaknesses or vulnerabilities.
 
Most hardware components are commercially designed, manufactured, and then inserted into larger assemblies by multi-national companies with global supply chains. Understanding the provenance and participants in complex global supply chains of components is fundamental to assessing risks associated with the components
 
Operational risks that derive from unintentional or intentional features are differentiated based on the source of the feature.  Three basic operational risk areas relate to goods, products, or items: failure to follow meet quality standards, maliciously tainted goods, and counterfeit hardware.  Counterfeits are usually offered as legitimate products, but they are not. They may be refurbished items, mock items made to appear as the originals, re-marked products, or the product of overproduction/substandard production items that the legitimate producer did not intend to go on the market.  The impact of counterfeits include …. 
 
Failure to follow quality standards, that include safety and security standards, especially in design, can result in unintentional features or flaws being inadvertently introduced through mistakes, omissions, or lack of understanding about features that might be manipulated by future users for their nefarious purposes. Features introduced intentionally into hardware for specific purposes make them susceptible to espionage or control of the hardware at some point in its life cycle.
 
Improve the Confidence
 
One of the key technical challenges associated with hardware assurance is the development of quantifiable metrics and measurements for concepts such as trust and assurance.  While quantification is challenging because of the complex interplay between human designers, manufacturing and supply chains, and adversarial intent, it is important so that hardware risks can be identified and managed within program budgets and timeframes.  Quantification enables a determination of the required level of hardware assurance, and whether it is successfully achieved throughout the hardware’s lifecycle.
 
  
Quantification of hardware assurance begins with a system-level assessment and ranking of hardware by risks and consequences.  Criteria for conducting the hardware risk assessment can be based on factors such as criticality of the hardware to system operation or consequence of technology loss by reverse engineering and intellectual property theft.
+
Operational risks that derive from unintentional or intentional features are differentiated based on the source of the feature. Three basic operational risk areas related to goods, products, or items are: failure to meet quality standards, maliciously tainted goods, and counterfeit hardware. Counterfeits are usually offered as legitimate products, but they are not. They may be refurbished or mock items made to appear as originals, re-marked products, the result of overproduction, or substandard production parts rejected by the legitimate producer. Counterfeit risks and substandard quality offer avenues for malware insertion and potential impacts to overall system performance and availability.  
  
Current methods for quantifying hardware risk, trust, and assurance emerged from quality and reliability engineering, which rely on methods like Failure Mode and Effects Analysis (FMEA). FMEA, semi-quantitative in nature, relies on a combination of probabilistic data for hardware failure and input from subject matter experts. Adapting FMEA to quantify hardware assurance is hampered when assigning probabilities to human behaviors motivated by economic incentives, malicious intent, etc.  Opinions of experts vary when assigning numeric values and weighting factors used in generating risk matrices and scores; consensus processes can help but are not always perfect.
+
Failure to follow quality standards including safety and security standards, especially in design, can result in unintentional features or flaws being inadvertently introduced. These can occur through mistakes, omissions, or lack of understanding how features might be manipulated by future users for nefarious purposes. Features introduced intentionally for specific purposes can also make the hardware susceptible to espionage or control of the hardware at some point in its life cycle.
  
 +
== Quantify and Improve Confidence   ==
 +
The quantification of hardware assurance is a key technical challenge because of the complex interplay among designer, manufacturer and supply chains, and adversarial intent, as well as the challenge of defining “security” with respect to hardware function. Quantification is necessary to identify and manage hardware risks within program budgets and timeframes. It enables a determination of the required level of hardware assurance and whether quantification is achievable throughout the hardware’s life cycle.
  
Manage Risks
+
Current methods for quantifying hardware assurance are adapted from the fields of quality and reliability engineering, which use methods like Failure Mode and Effects Analysis (FMEA). (SAE 2021) FMEA is semi-quantitative and combines probabilistic hardware failure data and input from experts. Adapting FMEA to quantify hardware assurance is hampered when it relies on assigning probabilities to human behavior that may be motivated by money, malicious intent, etc. Expert opinion often varies when quantifying and weighting factors used in generating risk matrices and scores. In response, recent efforts are attempting to develop quantitative methods that reduce subjectivity.
The selection of specific components for use in subsystems and systems should be the outcome of performance-risk-cost-benefit trade-off assessments in their intended context of use. The goal of risk management and mitigation planning is to select mitigations with the best overall operational risk reduction and the lowest cost impact. During the life cycle of a system - architecture, design, code, or implementation - various types of problems can pose risks to the operational functionality of the hardware components provided.  These include weaknesses (or defects) that are inadvertent (unintentional), counterfeits that are accidental (unintentional) or intentional, e.g., for financial motivations and/or malicious components designed to change functionality(intentional).
 
The purpose of managing risk in the context of hardware assurance is to decrease the risk of weaknesses that can be exploited and increased in the attack surface, while increasing confidence that an implementation resists exploitation. Ideally, risk management eliminates risk and maximizes assurance to an acceptable level. Often, risks are considered in the context of likelihood of consequences and the costs and effectiveness of mitigations.
 
However, new operationally impactful risks are recognized continuously over the hardware life cycle and supply chains of components. Further, hardware weaknesses are often exploited through software or firmware. As such, to maximize assurance and minimize operationally impactful risks, mitigation in depth across all constituent components must be considered.
 
An example of a mitigation to a hardware weakness is the use of programmable logic. Through programmable logic, when a new attack surface is identified, a new configuration for the programmed logic can be loaded to protect the hardware through configurability and adaptability. In this case, the programming functions must be assured such that they cannot be exploited for unintended purposes.  In this case, a dynamic risk profile highlights the need for flexibility in hardware configuration to provide extensible mitigation. Specifically, a dynamic risk profile highlights the need to reduce the susceptibility of hardware to obsolescence-related risks and weaknesses over its life cycle. Similarly, such an extensible mitigation provides the means to mitigate defects discovered post-fabrication.
 
Just as with software patches and updates, new attack surfaces on hardware may become exposed through the mitigation being applied, but they will likely take a long time to discover.  In the example above, the programmable logic is updated to provide a new configuration to protect the hardware. In this context, access to hardware reconfiguration must be limited to authorized parties to prevent an unauthorized update that introduces weaknesses on purpose. While programmable logic may have mitigated a specific attack surface or type of weakness, additional mitigations are needed to minimize risk more completely. This is mitigation-in-depth – multiple mitigations building upon one another.
 
Throughout the entire supply chain, critical pieces of information can be inadvertently exposed. The exposure of such information directly enables the creation and exploitation of new attack surfaces. Therefore, the supply chain infrastructure must also be aware of weaknesses and work to protect the creation, use, and maintenance of hardware components the dynamic risk profile offers a framework to balance mitigations in the context of risk and cost throughout the complete hardware and system life cycles.
 
  
Current Research
+
Game theoretic analysis (game theory) is the creation of mathematical models of conflict and cooperation between intelligent and rational decision-makers. (Myerson 1991) Models include dynamic'','' as opposed to static, interactions between attackers and defenders that can quantify the risks associated with potential interactions among adversaries, hardware developers, and manufacturing processes. (Eames and Johnson 2017) Creation of the models forces one to define attack scenarios explicitly and to input detailed knowledge of hardware development and manufacturing processes. Outputs of the model may include a ranking of the most likely attacks to occur based on cost-benefit constraints on the attackers and defenders. (Graf 2017) The results can empower decision-makers to make quantitative trade-off decisions about hardware assurance.
  
Current efforts seek to move from compliance-based systems to risk-based systems to support mitigation-in-depth in situations when compromises are needed to address the increasing complexity of hardware components, intellectual property of hardware interconnected with software and firmware, and approaches.  Promising approaches include game theory analysis, use of confidence intervals for detecting counterfeit defects, and distributed ledger technology to hardware manufacturing data to create an immutable record for component provenance and traceability. Efforts are underway to articulate new standards for hardware assurance and methods to leverage quantifiable data to make inform critical system engineering trades.  
+
Another quantification method that results in a confidence interval for detecting counterfeit/suspect microelectronics is presented in the SAE AS6171 standard. (SAE 2016) Confidence is based on knowing the types of defects associated with counterfeits, and the effectiveness of different tests to detect those defects. Along the same lines, a standard for hardware assurance might be developed to quantify the confidence interval by testing against a catalogue of known vulnerabilities, such as those documented in the MITRE Common Vulnerabilities and Exposures (CVE) list. (MITRE 2020)
  
 +
Distributed ledger technology (DLT) is an example of an emerging technology that could enable a standardized approach for quantifying hardware assurance attributes such as data integrity, immutability, and traceability. DLT can be used in conjunction with manufacturing data (such as dimensional measurement, parametric testing, process monitoring, and defect mapping) to improve tamper resistance using component provenance and traceability data. DLT also enables new scenarios of cross-organizational data fusion, opening the door to new classes of hardware integrity checks.  
  
References 250 words
+
==Manage Risks==
 +
The selection of specific components for use in subsystems and systems should be the outcome of performance-risk and cost-benefit trade-off assessments in their intended context of use. The goal of risk management and mitigation planning is to select mitigations with the best overall operational risk reduction and the lowest cost impact. The required level of hardware assurance varies with the criticality of a component's use and the system in which it is used.  
 +
 
 +
During a typical development life cycle of a system architecture, design, code, and implementation – various types of problems can pose risks to the operational functionality of the hardware components provided. These risks include weaknesses or defects that are inadvertent (unintentional), as well as counterfeits that may be either inadvertent or intentionally injected into the supply chain for financial motivations or malicious components designed to change functionality.
 +
 
 +
Managing risk in the context of hardware assurance seeks to decrease the risk of weaknesses that create attack surfaces that can be exploited, while improving confidence that an implementation resists exploitation. Ideally, risk management reduces risk and maximizes assurance to an acceptable level. Often, risks are considered in the context of likelihood of consequences and the costs and effectiveness of mitigations. However, new operationally impactful risks are recognized continuously over the hardware life cycle and supply chains of components. At the same time hardware weaknesses are often exploited through software or firmware. Therefore, to maximize assurance and minimize operationally impactful risks mitigation-in-depth across all constituent components must be considered. This highlights the need for a dynamic risk profile.
 +
 
 +
An example of a post-manufacturing mitigation involves a new hardware risk identified during field operation. A dynamic risk profile can be used to characterize the issue and identify possible resources to address the suspect component function. This profile can also be used to track and address risks throughout its life, including obsolescence-related risk. One means of mitigating this kind of hardware life cycle risk is the use of existing programmable logic.
 +
 
 +
Just as with software patches and updates, new attack surfaces on hardware may become exposed through the mitigation being applied, and they will likely take a long time to discover. In the example above, the programmable logic is updated to provide a new configuration to protect the hardware. In this context, access to hardware reconfiguration must be limited to authorized parties to prevent an unauthorized update that introduces weaknesses on purpose or by accident. While programmable logic may have mitigated a specific attack surface or type of weakness, additional mitigations are needed to minimize risk more completely. This is mitigation-in-depth – multiple mitigations building upon one another.
 +
 
 +
Throughout the entire supply chain, critical pieces of information can be inadvertently exposed. The exposure of such information directly enables the creation and exploitation of new attack surfaces. Therefore, the supply chain infrastructure must also be assessed for weaknesses, and the development, use, and maintenance of hardware components assured.  The dynamic risk profile offers a framework to balance mitigations in the context of risk and cost throughout the complete hardware and system life cycles.
  
 
==References==
 
==References==
  
 
===Works Cited===
 
===Works Cited===
Add
+
Eames, B.K. and M.H. Johnson. 2017. “Trust Analysis in FPGA-based Systems.” Proceeding of GOMACTech 2017, March 20-23, 2017, Reno, NV.
 +
 
 +
Graf, J. 2017. “OpTrust: Software for Determining Optimal Test Coverage and Strategies for Trust.” Proceedings of GOMACTech 2017, March 20-23, 2017, Reno, NV.
 +
 
 +
Martin, R.A. 2014. “Non-Malicious Taint: Bad Hygiene is as Dangerous to the Mission as Malicious Intent.” CrossTalk Magazine''.'' 27(2).
 +
 
 +
MITRE. 2020. “Common Vulnerabilities and Exposures.” Accessed March 31, 2021. Last Updated December 11, 2020. Available: https://cve.mitre.org/cve/
 +
 
 +
Myerson, R.R. 1991. ''Game Theory: Analysis of Conflict''.  Cambridge, MA: Harvard University Press.
 +
 
 +
NIST. 2020. Roots of Trust. Accessed March 31, 2021. Last Updated June 22, 2020. Available: https://csrc.nist.gov/projects/hardware-roots-of-trust
 +
 
 +
Oberg, J. 2020. Reducing Hardware Security Risk. Accessed March 31, 2021. Last Updated July 1, 2020. Available: https://semiengineering.com/reducing-hardware-security-risk/
 +
 
 +
SAE. 2016. SAE AS6171, ''Test Methods Standard: General Requirements, Suspect/Counterfeit, Electrical, Electronic, and Electromechanical Parts.'' SAE International. Accessed March 31, 2021. Available: https://www.sae.org/standards/content/as6171/
 +
 
 +
SAE. 2021. SAE J1739_202101, ''Potential Failure Mode and Effects Analysis (FMEA) Including Design FMEA'', ''Supplemental FMEA-MSR, and Process FMEA.'' SAE International. Accessed March 31, 2021. Last Updated January 13, 2021. Available: https://www.sae.org/standards/content/j1739_202101/
  
 
===Primary References===
 
===Primary References===
Add
+
Bhunia, S. and M. Tehranipoor. 2018. ''[[Hardware Security: A Hands-on Learning Approach]]''. Amsterdam, Netherlands: Elsevier Science.
 +
 
 +
ENISA. 2017. ''[[Hardware Threat Landscape and Good Practice Guide. Final Version 1.0]]''. European Union Agency for Cybersecurity. Accessed March 31, 2021. Available: https://www.enisa.europa.eu/publications/hardware-threat-landscape
 +
 
 +
TAME Steering Committee. 2019. ''[[Trusted and Assured Microelectronics Forum Working Group Reports]]''. Accessed March 31, 2021. Last Updated December 2019. Available: https://dforte.ece.ufl.edu/wp-content/uploads/sites/65/2020/08/TAME-Report-FINAL.pdf
  
 
===Additional References===
 
===Additional References===
Add
+
DARPA.  A DARPA Approach to Trusted Microelectronics. Accessed March 31, 2021. Available: [https://www.darpa.mil//about-us/darpa-approach-to-trusted-microelectronics https://www.darpa.mil/attachments/Background_FINAL3.pdf]
 +
 
 +
Fazzari, S. and R. Narumi. 2019. ''New & Old Challenges for Trusted and Assured Microelectronics''. Accessed March 31, 2021. Available: https://apps.dtic.mil/dtic/tr/fulltext/u2/1076110.pdf
 +
 
 +
IEEE. 2008-2020. IEEE International Symposium on Hardware Oriented Security and Trust (HOST). Annual symposium held since 2008 providing wealth of articles on hardware assurance.
 +
 
 +
Martin, R. 2019. "Hardware Assurance and Weakness Collaboration and Sharing (HAWCS)." Proceedings of the 2019 Software and Supply Chain Assurance Forum, September 17-18, 2019 in McLean, VA. Accessed March 31, 2021. Available: https://csrc.nist.gov/CSRC/media/Projects/cyber-supply-chain-risk-management/documents/SSCA/Fall_2019/WedPM2.2_Robert_Martin.pdf
 +
 
 +
NDIA. 2017. Trusted Microelectronics Joint Working Group: Team 3 White Paper: Trustable Microelectronics Standard Products. Accessed March 31, 2021. Available: https://www.ndia.org/-/media/sites/ndia/divisions/working-groups/tmjwg-documents/ndia-tm-jwg-team-3-white-paper-finalv3.ashx
 +
 
 +
Regenscheid, A. 2019. NIST SP 800-193, ''Platform Firmware Resiliency Guidelines''. Accessed March 31, 2021. Available: https://csrc.nist.gov/publications/detail/sp/800-193/final
  
 +
Ross, R., V. Pillitteri, R. Graubart, D. Bodeau, R. McQuaid. 2019. NIST SP 800-160 Vol. 2, ''Developing Cyber Resilient Systems – A Systems Security Engineering Approach''. Accessed March 31, 2021. Available: https://csrc.nist.gov/News/2019/sp-800-160-vol2-developing-cyber-resilient-systems
 
----
 
----
<center>[[Environmental Engineering|< Previous Article]] | [[Systems Engineering and Quality Attributes|Parent Article]] | [[Systems Engineering Implementation Examples|Next Article (Part 7) >]]</center>
+
<center>[[System Affordability|< Previous Article]] | [[Systems Engineering and Quality Attributes|Parent Article]] | [[System Reliability, Availability, and Maintainability|Next Article >]]</center>
  
<center>'''SEBoK v. 2.3, released 30 October 2020'''</center>
+
<center>'''SEBoK v. 2.7, released 31 October 2022'''</center>
  
 
[[Category: Part 6]]
 
[[Category: Part 6]]
 
[[Category:Topic]]
 
[[Category:Topic]]
 
[[Category:Systems Engineering and Quality Attributes]]
 
[[Category:Systems Engineering and Quality Attributes]]

Latest revision as of 08:33, 10 October 2022


Authors: Michael Bear, Donald Davidson, Shawn Fetterolf, Darin Leonhardt, Daniel Radack, Karen Johnson, Elizabeth A. McDaniel Contributors: Michael Berry, Brian Cohen, Diganta Das, Houman Homayoun, Thomas McDermott


This article describes the discipline of hardware assurance, especially as it relates to systems engineering. It is part of the SE and Quality Attributes Knowledge Area.

Overview

System hardware assuranceSystem hardware assurance is a set of system security engineering activities (see System Security for more information) undertaken to quantify and increase the confidence that electronics function as intended and only as intended throughout their life cycle, and to manage identified risks. The term hardware refers to electronic components, sometimes called integrated circuits or chips. As products of multi-stage processes involving design, manufacturing and post-manufacturing, packaging, and test, they must function properly under a wide range of circumstances. Hardware components – alone and integrated into subcomponents, subsystems, and systems – have weaknesses and vulnerabilities enabling exploitation. Weaknesses are flaws, bugs, or errors in design, architecture, code, or implementation. Vulnerabilities are weaknesses that are exploitable in the context of use (Martin 2014).

Hardware assurance is conducted to minimize risks related to hardware that can enable adversarial exploitation and subversion of functionality, counterfeit production, and loss of technological advantage.  Challenges include increasing levels of sophistication and complexity of hardware architectures, integrated circuits, operating systems, and application software, combined with supply chain risks, emergence of new attack surfaces, and reliance on global sources for some components and technologies.

After identifying concerns and applicable mitigations, hardware assurance offers a range of possible activities and processes. At the component level, hardware assurance focuses on the hardware itself and the supply chain used to design and manufacture it; at the subcomponent, subsystems, and system levels, hardware assurance incorporates the software and firmware integrated with the component.

Engineering efforts to enhance trust in hardware have increased in response to complex hardware architectures, the increasing sophistication of adversarial attacks on hardware, and globalization of supply chains. These factors raise serious concerns about the security, confidentiality, integrity, and availability as well as the provenance and authenticity of hardware. The “root of trust” (NIST 2020) of a system is typically contained in the processes, steps, and layers of hardware components and across the systems engineering development cycle. System hardware assurance focuses on hardware components and their interconnections with software and firmware to reduce risks to proper function or other compromises of the hardware throughout the complete life cycle of components and systems. Advances in hardware assurance tools and techniques will strengthen designs, and enhance assurance during manufacturing, packaging, test, and deployment and operational use.

Life Cycle Concerns of Hardware Components

Hardware assurance should be applied at various stages of a component’s life cycle from hardware architecture and design, through manufacturing and testing, and finally throughout its inclusion in a larger system. The need for hardware assurance then continues throughout its operational life including sustainment and disposal.

As semiconductor technology advances the complexity of electronic components, it increases the need to “bake-in” assurance. Risks created during architecture, design, and manufacturing are challenging to address during the operational phase. Risks associated with interconnections between and among chips are also a concern. Therefore, improving a hardware assurance posture must occur as early as possible in the life cycle, thereby reducing the cost and schedule impacts associated with “fixing” components later in the life cycle of the system.

A conceptual overview of the typical hardware life cycle (Figure 1) illustrates the phases of the life cycle of components, as well as the subsystems and systems in which they operate. In each phase multiple parties and processes contribute a large set of variables and corresponding attack surfaces. As a result, the potential exists for compromise of the hardware as well as the subcomponents and systems in which they operate; therefore, matching mitigations should be applied at the time the risks are identified.

Figure 1. Component Life Cycle. (SEBoK Original)

Both the value of the hardware component and the associated cost of mitigating risks increase at each stage of the life cycle. Therefore, it is important to identify and mitigate vulnerabilities as early as possible. It takes longer to find and fix defects later, thereby increasing the complexity of replacing hardware with “corrected” designs that create system integration issues. In addition to cost savings, early correction and mitigation avoid delays in creating an operational system. It is essential to re-assess risks associated with hardware components throughout the life cycle periodically, especially as operational conditions change.

Hardware assurance during system sustainment is a novel challenge given legacy hardware and designs with their associated supply chains. In long-lived high-reliability systems, hardware assurance issues are compounded by obsolescence and diminished sourcing of components, thereby increasing concerns related to counterfeits and acquisitions from the gray market.  

Function as Intended and Only as Intended

Exhaustive testing can check system functions against specifications and expectations; however, checking for unintended functions is problematic. Consumers have a reasonable expectation that a purchased product will perform as advertised and function properly (safely and securely, under specified conditions) – but consumers rarely consider if additional functions are built into the product. For example, a laptop with a web-conferencing capability comes with a webcam that will function properly when enabled, but what if the webcam also functions when turned off, thereby violating expectations of privacy? Given that a state-of-the-art semiconductor component might have billions of transistors, “hidden” functions might be exploitable by adversaries. The statement “function as intended and only intended” communicates the need to check for unintended functions.

Hardware specifications and information in the design phase are needed to validate that components function properly to support systems or missions. If an engineer creates specifications that support assurance that flow down the system development process, the concept of “function as intended” can be validated for the system and mission through accepted verification and validation processes. “Function only as intended” is also a consequence of capturing the requirements and specifications to assure the product is designed and developed without extra functionality. For example, a Field Programmable Gate Array (FPGA) contains programmable logic that is highly configurable; however, the programmable circuitry might be susceptible to exploitation.

Given the specifications of a hardware component, specialized tools and processes can be used to determine with a high degree of confidence whether the component’s performance meets specifications. Research efforts are underway to develop robust methods to validate that a component does not have capabilities that threaten assurance or that are not specified in the original design. Although tools and processes can test for known weaknesses, operational vulnerabilities, and deviations from expected performance, all states of possible anomalous behavior cannot currently be determined or predicted.

Data and information can be used to validate the component’s function and should be collected from multiple sources including designers, developers, and members of the user community. Designers and developers can provide deep understanding of the component’s intended function and provide tests used to verify its functional performance before fielding. The merging of component design and development information with extensive field data, including third-party evaluation, contributes to assurance that the component is performing specified functions and that no unintended functionality is observed. 

Risks to Hardware

Modern systems depend on complex microelectronics, but advances in hardware without attention to associated risks can expose critical systems, their information, and the people who rely on them. “Hardware is evolving rapidly, thus creating fundamentally new attack surfaces, many of which will never be entirely secured”. (Oberg 2020)  Therefore, it is imperative that risk be modeled through a dynamic risk profile and be mitigated in depth across the entire profile. Hardware assurance requires extensible mitigations and strategies that can and do evolve as threats do. Hardware assurance methods seek to quantify and improve confidence that weaknesses that can become vulnerabilities that create risks are mitigated.

Most hardware components are commercially designed, manufactured, and inserted into larger assemblies by multi-national companies with global supply chains. Understanding the provenance and participants in complex global supply chains is fundamental to assessing risks associated with the components.

Operational risks that derive from unintentional or intentional features are differentiated based on the source of the feature. Three basic operational risk areas related to goods, products, or items are: failure to meet quality standards, maliciously tainted goods, and counterfeit hardware. Counterfeits are usually offered as legitimate products, but they are not. They may be refurbished or mock items made to appear as originals, re-marked products, the result of overproduction, or substandard production parts rejected by the legitimate producer. Counterfeit risks and substandard quality offer avenues for malware insertion and potential impacts to overall system performance and availability.

Failure to follow quality standards including safety and security standards, especially in design, can result in unintentional features or flaws being inadvertently introduced. These can occur through mistakes, omissions, or lack of understanding how features might be manipulated by future users for nefarious purposes. Features introduced intentionally for specific purposes can also make the hardware susceptible to espionage or control of the hardware at some point in its life cycle.

Quantify and Improve Confidence  

The quantification of hardware assurance is a key technical challenge because of the complex interplay among designer, manufacturer and supply chains, and adversarial intent, as well as the challenge of defining “security” with respect to hardware function. Quantification is necessary to identify and manage hardware risks within program budgets and timeframes. It enables a determination of the required level of hardware assurance and whether quantification is achievable throughout the hardware’s life cycle.

Current methods for quantifying hardware assurance are adapted from the fields of quality and reliability engineering, which use methods like Failure Mode and Effects Analysis (FMEA). (SAE 2021) FMEA is semi-quantitative and combines probabilistic hardware failure data and input from experts. Adapting FMEA to quantify hardware assurance is hampered when it relies on assigning probabilities to human behavior that may be motivated by money, malicious intent, etc. Expert opinion often varies when quantifying and weighting factors used in generating risk matrices and scores. In response, recent efforts are attempting to develop quantitative methods that reduce subjectivity.

Game theoretic analysis (game theory) is the creation of mathematical models of conflict and cooperation between intelligent and rational decision-makers. (Myerson 1991) Models include dynamic, as opposed to static, interactions between attackers and defenders that can quantify the risks associated with potential interactions among adversaries, hardware developers, and manufacturing processes. (Eames and Johnson 2017) Creation of the models forces one to define attack scenarios explicitly and to input detailed knowledge of hardware development and manufacturing processes. Outputs of the model may include a ranking of the most likely attacks to occur based on cost-benefit constraints on the attackers and defenders. (Graf 2017) The results can empower decision-makers to make quantitative trade-off decisions about hardware assurance.

Another quantification method that results in a confidence interval for detecting counterfeit/suspect microelectronics is presented in the SAE AS6171 standard. (SAE 2016) Confidence is based on knowing the types of defects associated with counterfeits, and the effectiveness of different tests to detect those defects. Along the same lines, a standard for hardware assurance might be developed to quantify the confidence interval by testing against a catalogue of known vulnerabilities, such as those documented in the MITRE Common Vulnerabilities and Exposures (CVE) list. (MITRE 2020)

Distributed ledger technology (DLT) is an example of an emerging technology that could enable a standardized approach for quantifying hardware assurance attributes such as data integrity, immutability, and traceability. DLT can be used in conjunction with manufacturing data (such as dimensional measurement, parametric testing, process monitoring, and defect mapping) to improve tamper resistance using component provenance and traceability data. DLT also enables new scenarios of cross-organizational data fusion, opening the door to new classes of hardware integrity checks.  

Manage Risks

The selection of specific components for use in subsystems and systems should be the outcome of performance-risk and cost-benefit trade-off assessments in their intended context of use. The goal of risk management and mitigation planning is to select mitigations with the best overall operational risk reduction and the lowest cost impact. The required level of hardware assurance varies with the criticality of a component's use and the system in which it is used.  

During a typical development life cycle of a system – architecture, design, code, and implementation – various types of problems can pose risks to the operational functionality of the hardware components provided. These risks include weaknesses or defects that are inadvertent (unintentional), as well as counterfeits that may be either inadvertent or intentionally injected into the supply chain for financial motivations or malicious components designed to change functionality.

Managing risk in the context of hardware assurance seeks to decrease the risk of weaknesses that create attack surfaces that can be exploited, while improving confidence that an implementation resists exploitation. Ideally, risk management reduces risk and maximizes assurance to an acceptable level. Often, risks are considered in the context of likelihood of consequences and the costs and effectiveness of mitigations. However, new operationally impactful risks are recognized continuously over the hardware life cycle and supply chains of components. At the same time hardware weaknesses are often exploited through software or firmware. Therefore, to maximize assurance and minimize operationally impactful risks mitigation-in-depth across all constituent components must be considered. This highlights the need for a dynamic risk profile.

An example of a post-manufacturing mitigation involves a new hardware risk identified during field operation. A dynamic risk profile can be used to characterize the issue and identify possible resources to address the suspect component function. This profile can also be used to track and address risks throughout its life, including obsolescence-related risk. One means of mitigating this kind of hardware life cycle risk is the use of existing programmable logic.

Just as with software patches and updates, new attack surfaces on hardware may become exposed through the mitigation being applied, and they will likely take a long time to discover. In the example above, the programmable logic is updated to provide a new configuration to protect the hardware. In this context, access to hardware reconfiguration must be limited to authorized parties to prevent an unauthorized update that introduces weaknesses on purpose or by accident. While programmable logic may have mitigated a specific attack surface or type of weakness, additional mitigations are needed to minimize risk more completely. This is mitigation-in-depth – multiple mitigations building upon one another.

Throughout the entire supply chain, critical pieces of information can be inadvertently exposed. The exposure of such information directly enables the creation and exploitation of new attack surfaces. Therefore, the supply chain infrastructure must also be assessed for weaknesses, and the development, use, and maintenance of hardware components assured.  The dynamic risk profile offers a framework to balance mitigations in the context of risk and cost throughout the complete hardware and system life cycles.

References

Works Cited

Eames, B.K. and M.H. Johnson. 2017. “Trust Analysis in FPGA-based Systems.” Proceeding of GOMACTech 2017, March 20-23, 2017, Reno, NV.

Graf, J. 2017. “OpTrust: Software for Determining Optimal Test Coverage and Strategies for Trust.” Proceedings of GOMACTech 2017, March 20-23, 2017, Reno, NV.

Martin, R.A. 2014. “Non-Malicious Taint: Bad Hygiene is as Dangerous to the Mission as Malicious Intent.” CrossTalk Magazine. 27(2).

MITRE. 2020. “Common Vulnerabilities and Exposures.” Accessed March 31, 2021. Last Updated December 11, 2020. Available: https://cve.mitre.org/cve/

Myerson, R.R. 1991. Game Theory: Analysis of Conflict.  Cambridge, MA: Harvard University Press.

NIST. 2020. Roots of Trust. Accessed March 31, 2021. Last Updated June 22, 2020. Available: https://csrc.nist.gov/projects/hardware-roots-of-trust

Oberg, J. 2020. Reducing Hardware Security Risk. Accessed March 31, 2021. Last Updated July 1, 2020. Available: https://semiengineering.com/reducing-hardware-security-risk/

SAE. 2016. SAE AS6171, Test Methods Standard: General Requirements, Suspect/Counterfeit, Electrical, Electronic, and Electromechanical Parts. SAE International. Accessed March 31, 2021. Available: https://www.sae.org/standards/content/as6171/

SAE. 2021. SAE J1739_202101, Potential Failure Mode and Effects Analysis (FMEA) Including Design FMEA, Supplemental FMEA-MSR, and Process FMEA. SAE International. Accessed March 31, 2021. Last Updated January 13, 2021. Available: https://www.sae.org/standards/content/j1739_202101/

Primary References

Bhunia, S. and M. Tehranipoor. 2018. Hardware Security: A Hands-on Learning Approach. Amsterdam, Netherlands: Elsevier Science.

ENISA. 2017. Hardware Threat Landscape and Good Practice Guide. Final Version 1.0. European Union Agency for Cybersecurity. Accessed March 31, 2021. Available: https://www.enisa.europa.eu/publications/hardware-threat-landscape

TAME Steering Committee. 2019. Trusted and Assured Microelectronics Forum Working Group Reports. Accessed March 31, 2021. Last Updated December 2019. Available: https://dforte.ece.ufl.edu/wp-content/uploads/sites/65/2020/08/TAME-Report-FINAL.pdf

Additional References

DARPA. A DARPA Approach to Trusted Microelectronics. Accessed March 31, 2021. Available: https://www.darpa.mil/attachments/Background_FINAL3.pdf

Fazzari, S. and R. Narumi. 2019. New & Old Challenges for Trusted and Assured Microelectronics. Accessed March 31, 2021. Available: https://apps.dtic.mil/dtic/tr/fulltext/u2/1076110.pdf

IEEE. 2008-2020. IEEE International Symposium on Hardware Oriented Security and Trust (HOST). Annual symposium held since 2008 providing wealth of articles on hardware assurance.

Martin, R. 2019. "Hardware Assurance and Weakness Collaboration and Sharing (HAWCS)." Proceedings of the 2019 Software and Supply Chain Assurance Forum, September 17-18, 2019 in McLean, VA. Accessed March 31, 2021. Available: https://csrc.nist.gov/CSRC/media/Projects/cyber-supply-chain-risk-management/documents/SSCA/Fall_2019/WedPM2.2_Robert_Martin.pdf

NDIA. 2017. Trusted Microelectronics Joint Working Group: Team 3 White Paper: Trustable Microelectronics Standard Products. Accessed March 31, 2021. Available: https://www.ndia.org/-/media/sites/ndia/divisions/working-groups/tmjwg-documents/ndia-tm-jwg-team-3-white-paper-finalv3.ashx

Regenscheid, A. 2019. NIST SP 800-193, Platform Firmware Resiliency Guidelines. Accessed March 31, 2021. Available: https://csrc.nist.gov/publications/detail/sp/800-193/final

Ross, R., V. Pillitteri, R. Graubart, D. Bodeau, R. McQuaid. 2019. NIST SP 800-160 Vol. 2, Developing Cyber Resilient Systems – A Systems Security Engineering Approach. Accessed March 31, 2021. Available: https://csrc.nist.gov/News/2019/sp-800-160-vol2-developing-cyber-resilient-systems


< Previous Article | Parent Article | Next Article >
SEBoK v. 2.7, released 31 October 2022