Difference between pages "Introduction to System Fundamentals" and "Measurement"

From SEBoK
----
The purpose of {{Term|Risk Management (glossary)|risk management}} is to reduce potential {{Term|Risk (glossary)|risks}} to an acceptable level before they occur, throughout the life of the product or project.  Risk management is a continuous, forward-looking process that is applied to anticipate and avert risks that may adversely impact the project, and can be considered both a {{Term|Project Management (glossary)|project management}} and a {{Term|Systems Engineering (glossary)|systems engineering}} process. A balance must be achieved on each project in terms of overall risk management ownership, implementation, and day-to-day responsibility between these two top-level processes.
'''''by Janet Singer, Duane Hybertson, and [[User:radcock|Rick Adcock]]'''''

----
For the SEBoK, risk management falls under the umbrella of [[Systems Engineering Management]], though the wider body of risk literature is explored below.
This article forms part of the [[Systems Fundamentals]] knowledge area (KA). It provides various perspectives on {{Term|System (glossary)|systems}}, including definitions, {{Term|Scope (glossary)|scope}}, and {{Term|Context (glossary)|context}}.  

==Risk Management Process Overview==
Risk is a measure of the potential inability to achieve overall program objectives within defined cost, schedule, and technical constraints. It has the following two components (DAU 2003a):
# the probability (or likelihood) of failing to achieve a particular outcome
# the consequences (or impact) of failing to achieve that outcome
In the domain of catastrophic risk analysis, risk has three components: (1) threat, (2) vulnerability, and (3) consequence (Willis et al. 2005).
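As a sketch, the two-component view of risk can be expressed as a simple exposure calculation; the function name and dollar figures below are illustrative, not taken from the cited sources:

```python
def risk_exposure(probability: float, consequence: float) -> float:
    """Risk exposure as the product of the two components above.

    probability: likelihood of failing to achieve the outcome (0..1).
    consequence: impact of that failure, in whatever unit the project
    uses (e.g., dollars of cost growth).
    """
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    return probability * consequence

# A 30% chance of a $200k overrun carries an expected exposure of about $60k.
exposure = risk_exposure(0.30, 200_000)
```

Note that this scalar product is only meaningful when probability and consequence are measured on cardinal scales; the qualitative ordinal scales discussed later must not be multiplied this way.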

Risk management involves defining a risk management strategy, identifying and analyzing risks, handling selected risks, and monitoring progress in reducing risks to an acceptable level (SEI 2010; DoD 2015; DAU 2003a; DAU 2003b; PMI 2013). ({{Term|Opportunity (glossary)|Opportunity}} and opportunity management are briefly discussed below.)
The SE risk management process includes the following activities:
* risk planning
* risk identification
* risk analysis
* risk handling
* risk monitoring
ISO/IEC/IEEE 16085 provides a detailed set of risk management activities and tasks which can be utilized in a risk management process aligned with ISO 31000:2009, Risk management — Principles and Guidelines, and ISO Guide 73:2009, Risk management — Vocabulary. The ISO 9001:2008 standard provides risk-based preventive action requirements in subclause 8.5.3.
  
This article provides a guide to some of the basic {{Term|Concept (glossary)|concepts}} of systems developed by {{Term|Systems Science (glossary)|systems science}} and discusses how these relate to the definitions to be found in {{Term|Systems Engineering (glossary)|systems engineering}} (SE) literature. The concept of an {{Term|Engineered System (glossary)|engineered system}} is introduced as the system context of critical relevance to SE.
The Risk Management Process section of the INCOSE Systems Engineering Handbook: A Guide for Systems Life Cycle Processes and Activities, 4th Edition, provides a comprehensive overview of risk management which is intended to be consistent with the Risk Management Process section of ISO 15288.  
  
----
===Risk Planning===
Risk planning establishes and maintains a strategy for identifying, analyzing, handling, and monitoring risks within the project. Both the process and the implementation of the strategy are documented in a risk management plan (RMP).
==Overview==
 
  
In the System Fundamentals KA we will define some terms and ideas which are foundational to the understanding and practice of Systems Engineering (SE).  In particular, a number of views of system are explored; these are summarized below and described in more detail with links to relevant references in the rest of this article.
The risk management process and its implementation should be tailored to each project and updated as appropriate throughout the life of the project. The RMP should be communicated by appropriate means to the project team and key stakeholders.
*A simple definition of '''System''' is '''any set of related parts for which there is sufficient coherence between the parts to make viewing them as a whole useful'''.  If we consider more complex situations, in which the parts of a system can also be viewed as systems, we can identify useful common systems concepts to aid our understanding.  This allows the creation of systems theories, models and approaches useful to anyone trying to understand, create or use collections of related things, independent of what the system is made of or the application domain considering it.
 
*Many of these common systems ideas relate to complex networks or hierarchies of related system elements.  A '''System Context''' is '''a set of system interrelationships associated with a particular system of interest (SoI) within a real world environment'''.  One or more views of a context allow us to focus on the SoI without losing sight of its broader, holistic relationships and influences.  Context can be used for many kinds of system, but is particularly useful for scoping problems and enabling the creation of solutions which combine people and technology and operate in the natural world. These are referred to as socio-technical system contexts.
 
*Systems Engineering is one of the disciplines interested in socio-technical systems across their whole life.  This includes where problems come from and how they are defined, how we identify and select candidate solutions, how to balance technology and human elements in the wider solution context, how to manage the complex organizational systems needed to develop new solutions, and how developed solutions are used, sustained and disposed of.  To support this we define an '''{{Term|Engineered System (glossary)|Engineered System}} as a socio-technical system which is the focus of a Systems Engineering life cycle.'''
 
*While SE is focused on the delivery of an engineered system of interest, '''SE should consider the full Engineered System Context so that the necessary understanding can be reached and the right systems engineering decisions can be made across each Life Cycle.'''
 
  
==A General View of Systems==
The risk management strategy includes, as necessary, the risk management processes of all supply chain suppliers and describes how risks from all suppliers will be raised to the next level(s) for incorporation in the project risk process.
  
The idea of a system whole can be found in both Western and Eastern philosophy. Many philosophers have considered notions of {{Term|Holism (glossary)|holism}}: the idea that people, things or ideas must be considered in relation to the things around them to be fully understood (M’Pherson 1974).
The context of the Risk Management process should include a description of stakeholders’ perspectives, risk categories, and a description (perhaps by reference) of the technical and managerial objectives, assumptions and constraints. The risk categories include the relevant technical areas of the system and facilitate identification of risks across the life cycle of the system. As noted in ISO 31000, the aim of this step is to generate a comprehensive list of risks based on those events that might create, enhance, prevent, degrade, accelerate or delay the achievement of objectives.
  
One influential systems science definition of a system comes from {{Term|General System Theory (glossary)|general system theory}} (GST):  
The RMP should contain key risk management information; Conrow (2003) identifies the following as key components of an RMP:
* a project summary
* project acquisition and contracting strategies
* key definitions
* a list of key documents
* process steps
* inputs, tools and techniques, and outputs per process step
* linkages between risk management and other project processes
* key ground rules and assumptions
* risk categories
* buyer and seller roles and responsibilities
* organizational and personnel roles and responsibilities
  
<blockquote> "A System is a set of elements in interaction." (Bertalanffy 1968)</blockquote>
Generally, the level of detail in an RMP is risk-driven, with simple plans for low-risk projects and detailed plans for high-risk projects.
The parts of a system may be conceptual organizations of ideas in symbolic form or real objects. GST considers '''abstract systems''' to contain only conceptual elements and '''concrete systems''' to contain at least two elements that are real objects, e.g. people, information, {{Term|Software (glossary)|software}} and physical artifacts, etc.
 
  
Similar ideas of wholeness can be found in systems engineering literature. For example:
===Risk Identification===
<blockquote> ''We believe that the essence of a system is 'togetherness', the drawing together of various parts and the relationships they form in order to produce a new whole…'' (Boardman and Sauser 2008). </blockquote>
Risk identification is the process of examining the project products, processes, and requirements to identify and document candidate risks. Risk identification should be performed continuously at the individual level, as well as through formally structured events at both regular intervals and following major program changes (e.g., project initiation, re-baselining, change in acquisition phase, etc.).
  
The {{Term|Cohesion (glossary)|cohesive}} interactions between a set of parts suggest a {{Term|System Boundary (glossary)|system boundary}} and define what membership of the system means. For {{Term|Closed System (glossary)|'''closed systems'''}}, all aspects of the system exist within this boundary. This idea is useful for abstract systems and for some theoretical system descriptions.
Conrow (2009) states that systems engineers should use one or more top-level approaches (e.g., work breakdown structure (WBS), key processes evaluation, key requirements evaluation, etc.) and one or more lower-level approaches (e.g., affinity, brainstorming, checklists and taxonomies, examining critical path activities, expert judgment, Ishikawa diagrams, etc.) in risk identification. For example, lower-level checklists and taxonomies exist for software risk identification (Conrow and Shishido 1997, p. 84; Boehm 1989, 115-125; Carr et al. 1993, p. A-2) and operational risk identification (Gallagher et al. 2005, p. 4), and have been used on a wide variety of programs. Top-level and lower-level approaches are both essential, but there is no single accepted method; all approaches should be examined and used as appropriate.
  
The boundary of an {{Term|Open System (glossary)|'''open system'''}} defines the elements and relationships which can be considered part of the system, and describes how these elements interact across the boundary with related elements in the {{Term|Environment (glossary)|environment}}.  The relationships among the elements of an open system can be understood as a combination of the system's {{Term|Structure (glossary)|structure}} and {{Term|Behavior (glossary)|behavior}}. The structure of a system describes a set of system elements and the allowable relationships between them. System behavior refers to the effects or outcomes produced when an instance of the system interacts with its {{Term|Environment (glossary)|environment}}. An allowable configuration of the relationships among elements is referred to as a system {{Term|State (glossary)|state}}.  A stable system is one which returns to its original, or another stable, state following a disturbance in the environment. ''System wholes often exhibit {{Term|Emergence (glossary)|emergence}}, behavior which is meaningful only when attributed to the whole, not to its parts'' (Checkland 1999).
Candidate risk documentation should include the following items where possible, as identified by Conrow (2003, p. 198):
* risk title
* structured risk description
* applicable risk categories
* potential root causes
* relevant historical information
* responsible individual and manager
It is important to use structured risk descriptions, such as an ''if-then'' format: ''if'' (a trigger event occurs), ''then'' (an outcome or effect occurs). Another useful construct is a ''condition'' (that exists) that leads to a potential ''consequence'' (outcome) (Gluch 1994). These approaches help the analyst better think through the potential nature of the risk.
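A minimal sketch of a candidate risk record capturing the documentation items above, with the structured if-then description rendered from its parts (the field names and example risk are illustrative, not mandated by Conrow):

```python
from dataclasses import dataclass, field

@dataclass
class CandidateRisk:
    """Candidate risk record holding the documentation items listed above."""
    title: str
    if_event: str            # trigger: "if" this event occurs...
    then_outcome: str        # ..."then" this outcome or effect occurs
    categories: list[str] = field(default_factory=list)
    root_causes: list[str] = field(default_factory=list)
    history: str = ""
    responsible: str = ""

    def description(self) -> str:
        """Render the structured if-then risk description."""
        return f"If {self.if_event}, then {self.then_outcome}."

risk = CandidateRisk(
    title="Battery qualification slip",
    if_event="battery qualification testing slips past June",
    then_outcome="system integration start is delayed by at least one month",
    categories=["schedule"],
)
```

Forcing the description through two separate fields keeps the trigger and the outcome distinct, which is the point of the if-then construct.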
  
The identification of a system and its boundary is ultimately the choice of the observer. This may be through observation and classification of sets of elements as systems, through an abstract conceptualization of one or more possible boundaries and relationships in a given situation, or a mixture of this concrete and conceptual thinking.  This underlines the fact that any particular identification of a system is a human construct used to help make better sense of a set of things, and to share that understanding with others if needed.
Risk analysis and risk handling activities should only be performed on approved risks to ensure the best use of scarce resources and maintain focus on the correct risks.
  
Many natural, social and man-made things can be better understood by viewing them as open systems. One of the reasons we find the idea of systems useful is that it is possible to identify shared concepts which apply to many system views.  These recurring concepts, or isomorphies, can give useful insights into many situations, independently of the kinds of elements a particular system is made up of. The ideas of structure, behavior, emergence and state are examples of such concepts. The identification of these shared system ideas is the basis for {{Term|Systems Thinking (glossary)|Systems Thinking}} and for their use in developing theories and approaches in a wide range of fields of study, the {{Term|Systems Science (glossary)|system sciences}}.

===Risk Analysis===
Risk analysis is the process of systematically evaluating each identified, approved risk to estimate the probability of occurrence (likelihood) and consequence of occurrence (impact), and then converting the results to a corresponding risk level or rating.
  
Systems Engineering (SE), and a number of other [[Related Disciplines]] use systems concepts, patterns and models in the creation of useful outcomes or things.  The concept of a {{Term|Network (glossary)|network}} of open systems created, sustained and used to achieve a {{Term|Purpose (glossary)|purpose}} within one or more environments is a powerful {{Term|Model (glossary)|model}} that can be used to understand many complex real world situations and provide a basis for effective {{Term|Problem (glossary)|problem solving}} within them.
There is no single ''best'' analysis approach for a given risk category. Risk scales and a corresponding risk matrix, simulations, and probabilistic risk assessments are often used for technical risks; decision trees, simulations, and payoff matrices for cost risk; and simulations for schedule risk. Risk analysis approaches are sometimes grouped into qualitative and quantitative methods. A structured, repeatable methodology should be used in order to increase analysis accuracy and reduce uncertainty over time.
  
==System Context==
The most common qualitative method uses ordinal probability and consequence scales coupled with a risk matrix (also known as a risk cube or mapping matrix) to convert the resulting values to a risk level. Typically, one or more probability of occurrence scales, coupled with three consequence of occurrence scales (cost, performance, schedule), are used. Mathematical operations should not be performed on ordinal scale values, as this leads to erroneous results (Conrow 2003, pp. 187-364).
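A sketch of such a mapping matrix follows; the 5x5 layout and the level boundaries are illustrative assumptions, since each program defines its own scales:

```python
# Illustrative 5x5 risk matrix: rows are ordinal probability levels (1..5),
# columns are ordinal consequence levels (1..5); entries are risk levels
# Low / Medium / High. The layout is a common pattern, not a mandated standard.
RISK_MATRIX = [
    ["L", "L", "L", "M", "M"],
    ["L", "L", "M", "M", "H"],
    ["L", "M", "M", "H", "H"],
    ["M", "M", "H", "H", "H"],
    ["M", "H", "H", "H", "H"],
]

def risk_level(probability: int, consequence: int) -> str:
    """Convert ordinal probability/consequence levels (1..5) to a risk level.

    The ordinal values are only used to look up a cell, never added or
    multiplied, consistent with the caution on ordinal scales above.
    """
    if not (1 <= probability <= 5 and 1 <= consequence <= 5):
        raise ValueError("levels must be ordinal values 1..5")
    return RISK_MATRIX[probability - 1][consequence - 1]

level = risk_level(4, 5)  # "H" in this illustrative matrix
```

When three consequence scales are used, the convention is typically to take the worst (maximum) consequence level before entering the matrix.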
  
Bertalanffy (1968) divided open systems into nine real world types ranging from static structures and control mechanisms to socio-cultural systems. Other similar classification systems are discussed in the article [[Types of Systems]].
Once the risk level for each risk is determined, the risks need to be prioritized. Prioritization is typically performed by risk level (e.g., low, medium, high), risk score (the pair of max(probability), max(consequence) values), and other considerations such as time-frame, frequency of occurrence, and interrelationships with other risks (Conrow 2003, pp. 187-364). An additional prioritization technique is to convert results into an estimated cost, performance, and schedule value (e.g., probability budget consequence). However, the result is only a point estimate, not a distribution of risk.
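The level-then-score ordering described above can be sketched as a sort key; the risk data and tie-breaking conventions are illustrative:

```python
# Illustrative prioritization: order by risk level first (high before low),
# then by the (probability, consequence) ordinal score pair, larger first.
LEVEL_ORDER = {"H": 0, "M": 1, "L": 2}

risks = [
    {"title": "vendor slip",    "level": "M", "prob": 3, "cons": 4},
    {"title": "thermal margin", "level": "H", "prob": 4, "cons": 5},
    {"title": "staff turnover", "level": "M", "prob": 4, "cons": 2},
]

prioritized = sorted(
    risks, key=lambda r: (LEVEL_ORDER[r["level"]], -r["prob"], -r["cons"])
)
# The single high risk sorts first; the two medium risks follow by score.
```

Time-frame, frequency of occurrence, and risk interrelationships would be added as further key components in the same way.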
  
The following is a simple classification of system elements which we find at the heart of many of these classifications:
Widely used quantitative methods include decision trees and the associated expected monetary value analysis (Clemen and Reilly 2001), modeling and simulation (Law 2007; Mun 2010; Vose 2000), payoff matrices (Kerzner 2009, pp. 747-751), probabilistic risk assessments (Kumamoto and Henley 1996; NASA 2002), and other techniques. Risk prioritization can directly result from the quantitative methods employed. For quantitative approaches, care is needed in developing the model structure, since the results will only be as good as the accuracy of the structure, coupled with the characteristics of the probability estimates or distributions used to model the risks (Law 2007; Evans, Hastings, and Peacock 2011).
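Expected monetary value for a simple two-branch decision can be sketched as follows; the branch probabilities and costs are illustrative, not drawn from the cited sources:

```python
def expected_monetary_value(outcomes):
    """EMV of a chance node: sum of probability-weighted monetary outcomes.

    outcomes: list of (probability, value) pairs whose probabilities sum to 1.
    """
    if abs(sum(p for p, _ in outcomes) - 1.0) > 1e-9:
        raise ValueError("branch probabilities must sum to 1")
    return sum(p * v for p, v in outcomes)

# Illustrative choice: risky in-house development vs. a fixed-price subcontract.
in_house = expected_monetary_value([(0.6, -100_000), (0.4, -400_000)])  # about -$220k
subcontract = expected_monetary_value([(1.0, -250_000)])                # -$250k
best = max(("in-house", in_house), ("subcontract", subcontract),
           key=lambda option: option[1])
```

As the surrounding text cautions, the answer is only as good as the probability estimates: here a modest shift in the 0.6/0.4 split would reverse the decision.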
* {{Term|Natural System (glossary)|Natural system}} elements, objects or concepts which exist outside of any practical human control. Examples: the real number system, the solar system, planetary atmosphere circulation systems.
 
* {{Term|Social System (glossary)|Social system}} elements, either abstract human types or social constructs, or concrete individuals or social groups.
 
* Technological System elements, man-made artifacts or constructs, including physical hardware, {{Term|Software (glossary)|software}}<nowiki/> and information.
 
  
While the above distinctions can be made as a general abstract classification, in reality there are no hard and fast boundaries between these types of systems: e.g., technical systems are operated by, developed by, and often contain social systems, and social systems depend on technical systems to fully realize their purpose. Systems which contain technical and either human or natural elements are often called {{Term|Sociotechnical System (glossary)|socio-technical systems}}. The behavior of such systems is determined both by the nature of the technical elements and by their ability to integrate with, or deal with the variability of, the natural and social systems around them.
If multiple risk facets exist for a given item (e.g., cost risk, schedule risk, and technical risk), the different results should be integrated into a cohesive three-dimensional ''picture'' of risk. Sensitivity analyses can be applied to both qualitative and quantitative approaches in an attempt to understand how potential variability will affect results. Particular attention should be paid to compound risks (e.g., highly coupled technical risks with inadequate fixed budgets and schedules).
  
Many of the original ideas upon which GST, and other branches of system study, are based come from the study of systems in the natural and social sciences.  Many natural and social systems are initially formed as simple structures through the inherent {{Term|Cohesion (glossary)|cohesion}} among a set of elements.  Once formed, they will tend to stay in this structure, as well as combine and evolve further into more complex stable states to exploit this cohesion in order to sustain themselves in the face of threats or environmental pressures.  Such complex systems may exhibit specialization of elements, with elements taking on roles which contribute to the system purpose but losing some or all of their separate identity outside of the system.  Such roles might include management of resources, defense, self-regulation or problem solving and control. Natural and social systems can be understood through an understanding of this wholeness, cohesion and specialization.  They can also be guided towards the development of behaviors which not only enhance their basic survival, but also fulfill other goals of benefit to them or the systems around them.  In ''The Architecture of Complexity'', Simon (1962) showed that natural or social systems which evolve via a series of stable “hierarchical intermediate forms” will be more successful and resilient to environmental change.

===Risk Handling===
Risk handling is the process that identifies and selects options and implements the desired option to reduce a risk to an acceptable level, given program constraints (budget, other resources) and objectives (DAU 2003a, 20-23, 70-78).  
  
Thus, it is often true that the environment in which a particular system sits and the elements of that system can themselves be considered as open systems.  It can be useful to consider collections of related elements as both a system and a part of one or more other systems. For example, a “holon” or {{Term|System Element (glossary)}} was defined by Koestler as something which exists simultaneously as a whole and as a part (Koestler 1967). At some point, the nature of the relationships between elements within and across boundaries in a hierarchy of systems may lead to {{Term|Complexity (glossary)|complex}} structures and emergent behaviors which are difficult to understand or predict. Such complexity can often best be dealt with not only by looking for more detail, but also by considering the wider open system relationships.
For a given {{Term|System-of-Interest (glossary)|system-of-interest}} (SoI), risk handling is primarily performed at two levels. At the system level, the overall ensemble of system risks is first determined and prioritized, and second-level draft risk element plans (REPs) are then prepared for handling the risks. For more complex systems, it is important that the REPs at the higher SoI level are kept consistent with the system RMPs at the lower SoI level, and that the top-level RMP preserves continuing risk traceability across the SoI.
  
[[File:Part2 Environment 201905.jpg|thumb|550px|'''Figure 1: General description of System Context (SEBoK Original)'''|center]]
The risk handling strategy selected is the combination of the most desirable risk handling option coupled with a suitable implementation approach for that option (Conrow 2003). Risk handling options include assumption, avoidance, control (mitigation), and transfer. All four options should be evaluated and the best one chosen for each risk. An appropriate implementation approach is then chosen for that option. Hybrid strategies can be developed that include more than one risk handling option, but with a single implementation approach. Additional risk handling strategies can also be developed for a given risk and either implemented in parallel with the primary strategy or be made a contingent strategy that is implemented if a particular trigger event occurs during the execution of the primary strategy. Often, this choice is difficult because of uncertainties in the risk probabilities and impacts.  In such cases, buying information to reduce risk uncertainty via prototypes, benchmarking, surveying, modeling, etc. will clarify risk handling decisions (Boehm 1981).
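The rule that all four handling options be evaluated before one is chosen can be sketched as follows; the scoring scheme is an assumption for illustration, as real evaluations weigh cost, feasibility, and residual risk:

```python
from enum import Enum

class HandlingOption(Enum):
    """The four risk handling options."""
    ASSUMPTION = "assumption"  # accept the risk and its consequences
    AVOIDANCE = "avoidance"    # change requirements or design to remove the risk
    CONTROL = "control"        # mitigate: actively reduce probability/consequence
    TRANSFER = "transfer"      # shift ownership, e.g., to a supplier or insurer

def choose_option(scores: dict[HandlingOption, float]) -> HandlingOption:
    """Return the best-scoring option, requiring that all four were scored.

    Requiring every option forces an unbiased evaluation rather than
    defaulting to mitigation.
    """
    missing = set(HandlingOption) - set(scores)
    if missing:
        raise ValueError(f"evaluate all four options; missing: {missing}")
    return max(scores, key=scores.get)

chosen = choose_option({
    HandlingOption.ASSUMPTION: 0.2,
    HandlingOption.AVOIDANCE: 0.5,
    HandlingOption.CONTROL: 0.7,
    HandlingOption.TRANSFER: 0.4,
})
```

A hybrid or contingent strategy would pair the chosen option with a second option and a trigger condition, as described above.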
  
A {{Term|Context (glossary)|system context}} describes all of the external elements which interact across the boundary of a particular {{Term|System of Interest (SoI) (glossary)|system of interest (SoI)}} and a sufficient view of the elements within its boundary, to allow the SoI to be better understood as part of a wider systems whole.  To fully understand the context we also need to identify the environment in which the SoI and wider system sit and the systems in the environment which influence them.

====Risk Handling Plans====
A risk handling plan (RHP; a REP at the system level) should be developed and implemented for all ''high'' and ''medium'' risks, and for selected ''low'' risks as warranted.
  
Many man-made systems are designed as networks and hierarchies of related system elements to achieve desirable behaviors and the kinds of resilience seen in natural systems. While such systems can be deliberately created to take advantage of system properties such as holism and stability, they must also consider system challenges such as complexity and emergence. Considering different views of a SoI and its context over its life can help enable this understanding. Considering systems in context allows us to focus on a SoI while maintaining the necessary wider, holistic systems perspective.  This is one of the foundations of the [[Systems Approach Applied to Engineered Systems|Systems Approach]] described in SEBoK Part 2, and forms a foundation of systems engineering.

As identified by Conrow (2003, 365-387), each RHP should include:
* a risk owner and management contacts
* selected option
* implementation approach
* estimated probability and consequence of occurrence levels at the start and conclusion of each activity
* specific measurable exit criteria for each activity
* appropriate metrics
* resources needed to implement the RHP
==Systems and Systems Engineering==
Metrics included in each RHP should provide an objective means of determining whether the risk handling strategy is on track and whether it needs to be updated. On larger projects these can include earned value, variation in schedule and technical performance measures (TPMs), and changes in risk level vs. time.
  
Some of the systems ideas discussed above form part of the systems engineering body of knowledge.  Systems engineering literature, standards and guides often refer to “the system” to characterize a socio-technical system with a defined purpose as the focus of SE, e.g.
The activities present in each RHP should be integrated into the project’s integrated master schedule or equivalent; otherwise, risk monitoring and control will be ineffective.
  
* “A system is a value-delivering object” (Dori 2002).
===Risk Monitoring===
* “A system is an array of components designed to accomplish a particular objective according to plan” (Johnson, Kast, and Rosenzweig 1963).
Risk monitoring is used to evaluate the effectiveness of risk handling activities against established metrics and to provide feedback to the other risk management process steps. Risk monitoring results may also provide a basis to update RHPs, develop additional risk handling options and approaches, and re-analyze risks. In some cases, monitoring results may also be used to identify new risks, revise an existing risk with a new facet, or revise some aspects of risk planning (DAU 2003a, p. 20). Some risk monitoring approaches that can be applied include earned value, program metrics, TPMs, schedule analysis, and variations in risk level. Risk monitoring approaches should be updated and evaluated at the same time and WBS level; otherwise, the results may be inconsistent.
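One way to sketch the "variations in risk level" monitoring check is to compare actual risk levels against the handling plan's expected burn-down; the period granularity and burn-down profile below are illustrative:

```python
# Illustrative check of risk-level variation vs. the handling plan:
# flag reporting periods where the actual risk level exceeds the planned one.
LEVELS = {"L": 0, "M": 1, "H": 2}

def off_track(planned, actual):
    """Indices of reporting periods where actual risk exceeds the plan."""
    return [i for i, (p, a) in enumerate(zip(planned, actual))
            if LEVELS[a] > LEVELS[p]]

# Plan expects the risk to burn down H -> M -> M -> L over four periods.
flags = off_track(planned=["H", "M", "M", "L"],
                  actual=["H", "H", "M", "M"])  # periods 1 and 3 lag the plan
```

Flagged periods would feed back into the earlier process steps, prompting an RHP update or re-analysis of the risk.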
* “A system is defined as a set of concepts and/or elements used to satisfy a need or requirement" (Miles 1973).
 
  
The International Council on Systems Engineering Handbook (INCOSE 2015) generalizes this idea, defining a system as “an interacting combination of elements to accomplish a defined objective. These include hardware, software, firmware, people, information, techniques, facilities, services, and other support elements."  While this definition covers the socio-technical systems created by SE, it is also necessary to consider the natural or social problem situation in which these systems sit, the social systems which develop, sustain and use them, and the commercial or public enterprises in which these all sit as systems (Martin 2004).

==Opportunity and Opportunity Management==
In principle, opportunity management is the dual of risk management, with two components: (1) the probability of achieving an improved outcome and (2) the impact of achieving that outcome. Thus, both should be addressed in risk management planning and execution. In practice, however, a positive opportunity exposure will not match a negative risk exposure in utility space, since the positive utility magnitude of improving an expected outcome is considerably less than the negative utility magnitude of failing to meet an expected outcome (Canada 1971; Kahneman and Tversky 1979). Further, since many opportunity-management initiatives have failed to anticipate serious side effects, all candidate opportunities should be thoroughly evaluated for potential risks to prevent unintended consequences from occurring.
  
Hence, while many SE authors talk about systems and systems ideas, they are often based on a particular world view which relates to engineered artifacts.  It would also be useful to take a broader view of the context in which these artifacts sit, and to consider through-life relationships as part of that context. To help promote this, the SEBoK will try to be more precise with its use of the word system, and distinguish between general systems principles and the specific socio-technical systems created by SE.
In addition, while opportunities may provide potential benefits for the system or project, each opportunity pursued may have associated risks that detract from the expected benefit. This may reduce the ability to achieve the anticipated effects of the opportunity, in addition to any limitations associated with not pursuing an opportunity.
  
The term socio-technical system is used by many in the systems community and may have meanings outside of those relevant to SE. Hence, we will define an {{Term|Engineered System (glossary)}} as a socio-technical system which forms the primary focus, or {{Term|System of Interest (SoI) (glossary)|system of interest (SoI)}}, for an application of SE. An SE {{Term|Life Cycle (glossary)}} will consider an engineered system context from initial problem formulation through to final safe removal from use (INCOSE 2015). A more detailed discussion of engineered system context and how it relates to the foundations of systems engineering practice can be found below.

==Linkages to Other Systems Engineering Management Topics==
The [[Measurement|measurement]] process provides indicators for risk analysis. Project [[Planning|planning]] involves the identification of risk and planning for stakeholder involvement. Project [[Assessment and Control|assessment and control]] monitors project risks. [[Decision Management|Decision management]] evaluates alternatives for the selection and handling of identified and analyzed risks.
  
==Introduction to Engineered Systems==

==Practical Considerations==
Key pitfalls and good practices related to systems engineering risk management are described in the next two sections.
  
An {{Term|Engineered System (glossary)}} defines a context containing both technology and social or natural elements, developed for a defined purpose by an engineering {{Term|Life Cycle (glossary)|life cycle}}.  

===Pitfalls===
Some of the key pitfalls encountered in performing risk management are shown below in Table 1.
  
Engineered System contexts:
+
{| 
|+ '''Table 1. Risk Management Pitfalls.''' (SEBoK Original)
|-
! Name
! Description
|-
| Process Over-Reliance
|
* Over-reliance on the process side of risk management, without sufficient attention to human and organizational behavioral considerations.
|-
| Lack of Continuity
|
* Failure to implement risk management as a continuous process. Risk management will be ineffective if it is done just to satisfy project reviews or other discrete criteria (Charette, Dwinnell, and McGarry 2004; Scheinin 2008).
|-
| Tool and Technique Over-Reliance
|
* Over-reliance on tools and techniques, with insufficient thought and resources expended on how the process will be implemented and run on a day-to-day basis.
|-
| Lack of Vigilance
|
* A comprehensive risk identification will generally not capture all risks; some risks will always escape detection, which reinforces the need for risk identification to be performed continuously.
|-
| Automatic Mitigation Selection
|
* Automatically selecting the mitigation risk handling option, rather than evaluating all four options in an unbiased fashion and choosing the best option.
|-
| Sea of Green
|
* Tracking progress of the risk handling plan while the plan itself does not adequately include steps to reduce the risk to an acceptable level. Progress indicators associated with the risk handling plan (budgeting, staffing, organizing, data gathering, model preparation, etc.) may appear “green” (acceptable), yet the risk itself may be largely unaffected if the handling strategy and the resulting plan are poorly developed, do not address potential root cause(s), or do not incorporate actions that will effectively resolve the risk.
|-
| Band-Aid Risk Handling
|
* Handling risks (e.g., interoperability problems with changes in external systems) by patching each instance, rather than addressing the root cause(s) and reducing the likelihood of future instances.
|}
  
===Good Practices===
Some good practices gathered from the references are shown in Table 2.
{| 
|+ '''Table 2. Risk Management Good Practices.''' (SEBoK Original)
|-
! Name
! Description
|-
| Top Down and Bottom Up
|
* Risk management should be both “top down” and “bottom up” in order to be effective. The project manager or deputy needs to own the process at the top level, but risk management principles should be considered and used by all project personnel.
|-
| Early Planning
|
* Include the planning process step in the risk management process. Failure to adequately perform risk planning early in the project contributes to ineffective risk management.
|-
| Risk Analysis Limitations
|
* Understand the limitations of risk analysis tools and techniques. Risk analysis results should be challenged, because considerable input uncertainty and/or potential errors may exist.
|-
| Robust Risk Handling Strategy
|
* The risk handling strategy should attempt to reduce both the probability and the consequence of occurrence terms. It is also imperative that the resources needed to properly implement the chosen strategy be available in a timely manner; otherwise the risk handling strategy, and the entire risk management process, will be viewed as a “paper tiger.”
|-
| Structured Risk Monitoring
|
* Risk monitoring should be a structured approach to compare actual vs. anticipated cost, performance, schedule, and risk outcomes associated with implementing the risk handling plan (RHP). When ad hoc or unstructured approaches are used, or when risk level vs. time is the only metric tracked, the usefulness of risk monitoring can be greatly reduced.
|-
| Update Risk Database
|
* The risk management database (registry) should be updated throughout the course of the program, striking a balance between excessive resources required and insufficient updates performed. Database updates should occur both at a tailored, regular interval and following major program changes.
|}
  
==References==
===Works Cited===
Boehm, B. 1981. ''Software Engineering Economics''. Upper Saddle River, NJ, USA: Prentice Hall.

Boehm, B. 1989. ''Software Risk Management''. Los Alamitos, CA; Tokyo, Japan: IEEE Computer Society Press, pp. 115-125.

Canada, J.R. 1971. ''Intermediate Economic Analysis for Management and Engineering''. Upper Saddle River, NJ, USA: Prentice Hall.

Carr, M., S. Konda, I. Monarch, F. Ulrich, and C. Walker. 1993. ''Taxonomy-Based Risk Identification''. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU), CMU/SEI-93-TR-6.

Charette, R., L. Dwinnell, and J. McGarry. 2004. "Understanding the roots of process performance failure." ''CROSSTALK: The Journal of Defense Software Engineering'' (August 2004): 18-24.

Clemen, R. and T. Reilly. 2001. ''Making Hard Decisions''. Boston, MA, USA: Duxbury.

Conrow, E. 2003. ''[[Effective Risk Management: Some Keys to Success]],'' 2nd ed. Reston, VA, USA: American Institute of Aeronautics and Astronautics (AIAA).

Conrow, E. 2008. "Risk analysis for space systems." Paper presented at the Space Systems Engineering and Risk Management Symposium, 27-29 February 2008, Los Angeles, CA, USA.

Conrow, E. and P. Shishido. 1997. "Implementing risk management on software intensive projects." IEEE ''Software'' 14(3) (May/June 1997): 83-89.

DAU. 2003a. ''Risk Management Guide for DoD Acquisition,'' 5th ed., version 2. Ft. Belvoir, VA, USA: Defense Acquisition University (DAU) Press.

DAU. 2003b. ''U.S. Department of Defense Extension to: A Guide to the Project Management Body of Knowledge (PMBOK® Guide),'' 1st ed., version 1. Ft. Belvoir, VA, USA: Defense Acquisition University (DAU) Press.

DoD. 2015. ''Risk, Issue, and Opportunity Management Guide for Defense Acquisition Programs''. Washington, DC, USA: Office of the Deputy Assistant Secretary of Defense for Systems Engineering/Department of Defense.

Evans, M., N. Hastings, and B. Peacock. 2000. ''Statistical Distributions,'' 3rd ed. New York, NY, USA: Wiley-Interscience.

Forbes, C., M. Evans, N. Hastings, and B. Peacock. 2011. ''Statistical Distributions,'' 4th ed. New York, NY, USA: John Wiley & Sons.

Gallagher, B., P. Case, R. Creel, S. Kushner, and R. Williams. 2005. ''A Taxonomy of Operational Risk''. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU), CMU/SEI-2005-TN-036.

Gluch, P. 1994. ''A Construct for Describing Software Development Risks''. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU), CMU/SEI-94-TR-14.

ISO/IEC/IEEE. 2015. ''Systems and Software Engineering -- System Life Cycle Processes''. Geneva, Switzerland: International Organization for Standardization/International Electrotechnical Commission/Institute of Electrical and Electronics Engineers, ISO/IEC/IEEE 15288:2015.

Kahneman, D. and A. Tversky. 1979. "Prospect theory: An analysis of decision under risk." ''Econometrica'' 47(2) (March 1979): 263-292.

Kerzner, H. 2009. ''Project Management: A Systems Approach to Planning, Scheduling, and Controlling,'' 10th ed. Hoboken, NJ, USA: John Wiley & Sons.

Kumamoto, H. and E. Henley. 1996. ''Probabilistic Risk Assessment and Management for Engineers and Scientists,'' 2nd ed. Piscataway, NJ, USA: Institute of Electrical and Electronics Engineers (IEEE) Press.

Law, A. 2007. ''Simulation Modeling and Analysis,'' 4th ed. New York, NY, USA: McGraw Hill.

Mun, J. 2010. ''Modeling Risk,'' 2nd ed. Hoboken, NJ, USA: John Wiley & Sons.

NASA. 2002. ''Probabilistic Risk Assessment Procedures Guide for NASA Managers and Practitioners,'' version 1.1. Washington, DC, USA: Office of Safety and Mission Assurance/National Aeronautics and Space Administration (NASA).

PMI. 2013. ''[[A Guide to the Project Management Body of Knowledge|A Guide to the Project Management Body of Knowledge (PMBOK® Guide)]]'', 5th ed. Newtown Square, PA, USA: Project Management Institute (PMI).

Scheinin, W. 2008. "Start Early and Often: The Need for Persistent Risk Management in the Early Acquisition Phases." Paper presented at the Space Systems Engineering and Risk Management Symposium, 27-29 February 2008, Los Angeles, CA, USA.

SEI. 2010. ''[[Capability Maturity Model Integrated (CMMI) for Development]],'' version 1.3. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU).

Vose, D. 2000. ''Quantitative Risk Analysis,'' 2nd ed. New York, NY, USA: John Wiley & Sons.

Willis, H.H., A.R. Morral, T.K. Kelly, and J.J. Medby. 2005. ''Estimating Terrorism Risk''. Santa Monica, CA, USA: The RAND Corporation, MG-388.
  
 
===Primary References===
Boehm, B. 1981. ''[[Software Engineering Economics]]''. Upper Saddle River, NJ, USA: Prentice Hall.

Boehm, B. 1989. ''[[Software Risk Management]]''. Los Alamitos, CA; Tokyo, Japan: IEEE Computer Society Press, pp. 115-125.

Conrow, E. 2003. ''[[Effective Risk Management: Some Keys to Success]],'' 2nd ed. Reston, VA, USA: American Institute of Aeronautics and Astronautics (AIAA).

DoD. 2015. ''[[Risk Management Guide for DoD Acquisition|Risk, Issue, and Opportunity Management Guide for Defense Acquisition Programs]]''. Washington, DC, USA: Office of the Deputy Assistant Secretary of Defense for Systems Engineering/Department of Defense.

SEI. 2010. ''[[Capability Maturity Model Integrated (CMMI) for Development]],'' version 1.3. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU).
  
 
===Additional References===
Canada, J.R. 1971. ''Intermediate Economic Analysis for Management and Engineering''. Upper Saddle River, NJ, USA: Prentice Hall.

Carr, M., S. Konda, I. Monarch, F. Ulrich, and C. Walker. 1993. ''Taxonomy-Based Risk Identification''. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU), CMU/SEI-93-TR-6.

Charette, R. 1989. ''Software Engineering Risk Analysis and Management''. New York, NY, USA: McGraw-Hill (MultiScience Press).

Charette, R. 1990. ''Application Strategies for Risk Management''. New York, NY, USA: McGraw-Hill.

Charette, R., L. Dwinnell, and J. McGarry. 2004. "Understanding the roots of process performance failure." ''CROSSTALK: The Journal of Defense Software Engineering'' (August 2004): 18-24.

Clemen, R. and T. Reilly. 2001. ''Making Hard Decisions''. Boston, MA, USA: Duxbury.

Conrow, E. 2008. "Risk analysis for space systems." Paper presented at the Space Systems Engineering and Risk Management Symposium, 27-29 February 2008, Los Angeles, CA, USA.

Conrow, E. 2009. "Tailoring risk management to increase effectiveness on your project." Presentation to the Project Management Institute, Los Angeles Chapter, 16 April 2009, Los Angeles, CA, USA.

Conrow, E. 2010. "Space program schedule change probability distributions." Paper presented at American Institute of Aeronautics and Astronautics (AIAA) Space 2010, 1 September 2010, Anaheim, CA, USA.

Conrow, E. and P. Shishido. 1997. "Implementing risk management on software intensive projects." IEEE ''Software'' 14(3) (May/June 1997): 83-89.

DAU. 2003a. ''Risk Management Guide for DoD Acquisition,'' 5th ed., version 2. Ft. Belvoir, VA, USA: Defense Acquisition University (DAU) Press.

DAU. 2003b. ''U.S. Department of Defense Extension to: A Guide to the Project Management Body of Knowledge (PMBOK® Guide),'' 1st ed. Ft. Belvoir, VA, USA: Defense Acquisition University (DAU) Press.

Dorofee, A., J. Walker, C. Alberts, R. Higuera, R. Murphy, and R. Williams (eds.). 1996. ''Continuous Risk Management Guidebook''. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU).

Gallagher, B., P. Case, R. Creel, S. Kushner, and R. Williams. 2005. ''A Taxonomy of Operational Risk''. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU), CMU/SEI-2005-TN-036.

Gluch, P. 1994. ''A Construct for Describing Software Development Risks''. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU), CMU/SEI-94-TR-14.

Haimes, Y.Y. 2009. ''Risk Modeling, Assessment, and Management''. Hoboken, NJ, USA: John Wiley & Sons.

Hall, E. 1998. ''Managing Risk: Methods for Software Systems Development''. New York, NY, USA: Addison Wesley Professional.

INCOSE. 2015. ''Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities,'' version 4. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2014-001-04.

ISO. 2003. ''Space Systems - Risk Management''. Geneva, Switzerland: International Organization for Standardization (ISO), ISO 17666:2003.

ISO. 2009. ''Risk Management—Principles and Guidelines''. Geneva, Switzerland: International Organization for Standardization (ISO), ISO 31000:2009.

ISO/IEC. 2009. ''Risk Management—Risk Assessment Techniques''. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC 31010:2009.

ISO/IEC/IEEE. 2006. ''Systems and Software Engineering - Risk Management''. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC)/Institute of Electrical and Electronics Engineers (IEEE), ISO/IEC/IEEE 16085.

Jones, C. 1994. ''Assessment and Control of Software Risks''. Upper Saddle River, NJ, USA: Prentice Hall.

Kahneman, D. and A. Tversky. 1979. "Prospect theory: An analysis of decision under risk." ''Econometrica'' 47(2) (March 1979): 263-292.

Kerzner, H. 2009. ''Project Management: A Systems Approach to Planning, Scheduling, and Controlling,'' 10th ed. Hoboken, NJ, USA: John Wiley & Sons.

Kumamoto, H. and E. Henley. 1996. ''Probabilistic Risk Assessment and Management for Engineers and Scientists,'' 2nd ed. Piscataway, NJ, USA: Institute of Electrical and Electronics Engineers (IEEE) Press.

Law, A. 2007. ''Simulation Modeling and Analysis,'' 4th ed. New York, NY, USA: McGraw Hill.

MITRE. 2012. ''Systems Engineering Guide to Risk Management''. Available online: http://www.mitre.org/work/systems_engineering/guide/acquisition_systems_engineering/risk_management/. Accessed on July 7, 2012. Page last updated on May 8, 2012.

Mun, J. 2010. ''Modeling Risk,'' 2nd ed. Hoboken, NJ, USA: John Wiley & Sons.

NASA. 2002. ''Probabilistic Risk Assessment Procedures Guide for NASA Managers and Practitioners,'' version 1.1. Washington, DC, USA: Office of Safety and Mission Assurance/National Aeronautics and Space Administration (NASA).

PMI. 2013. ''[[A Guide to the Project Management Body of Knowledge|A Guide to the Project Management Body of Knowledge (PMBOK® Guide)]]'', 5th ed. Newtown Square, PA, USA: Project Management Institute (PMI).

Scheinin, W. 2008. "Start Early and Often: The Need for Persistent Risk Management in the Early Acquisition Phases." Paper presented at the Space Systems Engineering and Risk Management Symposium, 27-29 February 2008, Los Angeles, CA, USA.

USAF. 2005. ''SMC Systems Engineering Primer & Handbook: Concepts, Processes, and Techniques,'' 3rd ed. Los Angeles, CA, USA: Space & Missile Systems Center/U.S. Air Force (USAF).

USAF. 2014. ''SMC Risk Management Process Guide,'' version 2. Los Angeles, CA, USA: Space & Missile Systems Center/U.S. Air Force (USAF).

Vose, D. 2000. ''Quantitative Risk Analysis,'' 2nd ed. New York, NY, USA: John Wiley & Sons.

Willis, H.H., A.R. Morral, T.K. Kelly, and J.J. Medby. 2005. ''Estimating Terrorism Risk''. Santa Monica, CA, USA: The RAND Corporation, MG-388.
----
<center>[[Assessment and Control|< Previous Article]] | [[Systems Engineering Management|Parent Article]] | [[Measurement|Next Article >]]</center>

<center>'''SEBoK v. 2.0, released 1 June 2019'''</center>

[[Category:Part 3]][[Category:Topic]]
[[Category:Systems Engineering Management]]

Revision as of 02:59, 19 October 2019


The SE risk management process includes the following activities:

* risk planning
* risk identification
* risk analysis
* risk handling
* risk monitoring

ISO/IEC/IEEE 16085 provides a detailed set of risk management activities and tasks that can be utilized in a risk management process aligned with ISO 31000:2009, ''Risk Management—Principles and Guidelines'', and ISO Guide 73:2009, ''Risk Management—Vocabulary''. The ISO 9001:2008 standard provides risk-based preventive action requirements in subclause 8.5.3.

The Risk Management Process section of the INCOSE ''Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities'', 4th edition, provides a comprehensive overview of risk management that is intended to be consistent with the Risk Management Process section of ISO 15288.

==Risk Planning==

Risk planning establishes and maintains a strategy for identifying, analyzing, handling, and monitoring risks within the project. Both the process and the implementation of the strategy are documented in a risk management plan (RMP).

The risk management process and its implementation should be tailored to each project and updated as appropriate throughout the life of the project. The RMP should be communicated by appropriate means to the project team and key stakeholders.

The risk management strategy includes, as necessary, the risk management processes of all supply-chain suppliers and describes how risks from all suppliers will be raised to the next level(s) for incorporation in the project risk process.

The context of the Risk Management process should include a description of stakeholders’ perspectives, risk categories, and a description (perhaps by reference) of the technical and managerial objectives, assumptions and constraints. The risk categories include the relevant technical areas of the system and facilitate identification of risks across the life cycle of the system. As noted in ISO 31000, the aim of this step is to generate a comprehensive list of risks based on those events that might create, enhance, prevent, degrade, accelerate or delay the achievement of objectives.

The RMP should contain key risk management information; Conrow (2003) identifies the following as key components of an RMP:

* a project summary
* project acquisition and contracting strategies
* key definitions
* a list of key documents
* process steps
* inputs, tools and techniques, and outputs per process step
* linkages between risk management and other project processes
* key ground rules and assumptions
* risk categories
* buyer and seller roles and responsibilities
* organizational and personnel roles and responsibilities

Generally, the level of detail in an RMP is risk-driven, with simple plans for low-risk projects and detailed plans for high-risk projects.

==Risk Identification==

Risk identification is the process of examining the project products, processes, and requirements to identify and document candidate risks. Risk identification should be performed continuously at the individual level, as well as through formally structured events at regular intervals and following major program changes (e.g., project initiation, re-baselining, or a change in acquisition phase).

Conrow (2009) states that systems engineers should use one or more top-level approaches (e.g., work breakdown structure (WBS), key process evaluation, key requirements evaluation) and one or more lower-level approaches (e.g., affinity techniques, brainstorming, checklists and taxonomies, examining critical path activities, expert judgment, Ishikawa diagrams) in risk identification. For example, lower-level checklists and taxonomies exist for software risk identification (Conrow and Shishido 1997, p. 84; Boehm 1989; Carr et al. 1993, p. A-2) and operational risk identification (Gallagher et al. 2005, p. 4), and have been used on a wide variety of programs. Top-level and lower-level approaches are both essential, but there is no single accepted method; all approaches should be examined and used as appropriate.
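To make the checklist/taxonomy idea concrete, the Python sketch below iterates a hypothetical, much-abridged risk taxonomy against elements flagged during a structured review. The class and element names are illustrative assumptions; real taxonomies such as the SEI taxonomy (Carr et al. 1993) are far more extensive.

```python
# Much-abridged, hypothetical risk taxonomy (real taxonomies such as
# CMU/SEI-93-TR-6 contain many more classes and elements).
TAXONOMY = {
    "product engineering": ["requirements stability", "design feasibility"],
    "development environment": ["process maturity", "tool availability"],
    "program constraints": ["schedule", "budget", "supplier dependencies"],
}

def identify_candidates(flagged_elements):
    """Return (class, element) pairs whose element was flagged.

    flagged_elements: set of taxonomy element names raised during a
    structured interview or checklist review.
    """
    return [
        (risk_class, element)
        for risk_class, elements in TAXONOMY.items()
        for element in elements
        if element in flagged_elements
    ]

# Elements flagged in a hypothetical review session
candidates = identify_candidates({"schedule", "requirements stability"})
```

Each resulting candidate pair would then be documented and submitted for approval as described in the surrounding text.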

Candidate risk documentation should include the following items where possible, as identified by Conrow (2003, p. 198):

  • risk title
  • structured risk description
  • applicable risk categories
  • potential root causes
  • relevant historical information
  • responsible individual and manager

It is important to use structured risk descriptions, such as an if-then format: if (a trigger event occurs), then (an outcome or effect occurs). Another useful construct is a condition (that exists) that leads to a potential consequence (outcome) (Gluch 1994). These approaches help the analyst think through the potential nature of the risk.
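The documentation items and the structured if-then description above could be captured in a simple risk record. The sketch below is illustrative only; the field names and the example risk are assumptions, not taken from the cited sources:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RiskRecord:
    """Candidate risk entry following the documentation items listed above (names illustrative)."""
    title: str
    if_event: str      # the "if" part: the trigger event
    then_effect: str   # the "then" part: the outcome or effect
    categories: List[str] = field(default_factory=list)
    root_causes: List[str] = field(default_factory=list)
    history: str = ""
    owner: str = ""

    def description(self) -> str:
        """Render the structured if-then risk description."""
        return f"If {self.if_event}, then {self.then_effect}."

# Hypothetical example risk, purely for illustration.
risk = RiskRecord(
    title="Battery thermal margin",
    if_event="the battery exceeds its qualified operating temperature during peak load",
    then_effect="cell degradation shortens operating life below the requirement",
    categories=["technical", "performance"],
    owner="J. Smith",
)
print(risk.description())
```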

Risk analysis and risk handling activities should only be performed on approved risks to ensure the best use of scarce resources and maintain focus on the correct risks.

Risk Analysis

Risk analysis is the process of systematically evaluating each identified, approved risk to estimate the probability of occurrence (likelihood) and consequence of occurrence (impact), and then converting the results to a corresponding risk level or rating.

There is no single best analysis approach for a given risk category. Risk scales and a corresponding risk matrix, simulations, and probabilistic risk assessments are often used for technical risks; decision trees, simulations, and payoff matrices for cost risk; and simulations for schedule risk. Risk analysis approaches are sometimes grouped into qualitative and quantitative methods. A structured, repeatable methodology should be used in order to increase analysis accuracy and reduce uncertainty over time.

The most common qualitative method uses ordinal probability and consequence scales coupled with a risk matrix (also known as a risk cube or mapping matrix) to convert the resulting values to a risk level. Here, one or more probability of occurrence scales, coupled with three consequence of occurrence scales (cost, performance, and schedule), are typically used. Mathematical operations should not be performed on ordinal scale values, as they produce erroneous results (Conrow 2003, pp. 187-364).
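As a minimal sketch of the qualitative method just described, the code below assigns a risk level by table lookup from ordinal probability and consequence ranks. The 5x5 matrix and its level boundaries are illustrative assumptions, not taken from Conrow or any cited scale:

```python
# Illustrative 5x5 risk matrix: rows = probability rank (1-5),
# columns = consequence rank (1-5). Levels come from a lookup table,
# never from multiplying the ordinal values (which is invalid).
RISK_MATRIX = [
    # C=1       2         3         4         5
    ["low",    "low",    "low",    "medium", "medium"],  # P=1
    ["low",    "low",    "medium", "medium", "high"],    # P=2
    ["low",    "medium", "medium", "high",   "high"],    # P=3
    ["medium", "medium", "high",   "high",   "high"],    # P=4
    ["medium", "high",   "high",   "high",   "high"],    # P=5
]

def risk_level(probability: int, consequence: int) -> str:
    """Map ordinal (probability, consequence) ranks to a risk level."""
    if not (1 <= probability <= 5 and 1 <= consequence <= 5):
        raise ValueError("ranks must be in 1..5")
    return RISK_MATRIX[probability - 1][consequence - 1]

print(risk_level(4, 5))  # high
print(risk_level(1, 2))  # low
```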

Once the risk level for each risk is determined, the risks need to be prioritized. Prioritization is typically performed by risk level (e.g., low, medium, high), risk score (the pair of max(probability) and max(consequence) values), and other considerations such as time frame, frequency of occurrence, and interrelationships with other risks (Conrow 2003, pp. 187-364). An additional prioritization technique is to convert the results into an estimated cost, performance, and schedule value (e.g., probability multiplied by budget consequence). However, the result is only a point estimate and not a distribution of risk.
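The prioritization scheme above can be sketched as a compound sort key: risk level first, then the probability and consequence ranks. The tie-breaking order and the example risks below are illustrative assumptions:

```python
# Prioritize risks by level, then by the (probability, consequence) score
# pair, as described above. The tie-breaking order is an assumption.
LEVEL_RANK = {"high": 0, "medium": 1, "low": 2}  # lower rank = higher priority

risks = [
    {"name": "R1", "level": "medium", "prob": 3, "cons": 4},
    {"name": "R2", "level": "high",   "prob": 4, "cons": 5},
    {"name": "R3", "level": "medium", "prob": 5, "cons": 2},
    {"name": "R4", "level": "low",    "prob": 2, "cons": 1},
]

prioritized = sorted(
    risks,
    key=lambda r: (LEVEL_RANK[r["level"]], -r["prob"], -r["cons"]),
)
print([r["name"] for r in prioritized])  # ['R2', 'R3', 'R1', 'R4']
```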

Widely used quantitative methods include decision trees and the associated expected monetary value analysis (Clemen and Reilly 2001), modeling and simulation (Law 2007; Mun 2010; Vose 2000), payoff matrices (Kerzner 2009, pp. 747-751), probabilistic risk assessments (Kumamoto and Henley 1996; NASA 2002), and other techniques. Risk prioritization can directly result from the quantitative methods employed. For quantitative approaches, care is needed in developing the model structure, since the results will only be as good as the accuracy of the structure, coupled with the characteristics of the probability estimates or distributions used to model the risks (Law 2007; Evans, Hastings, and Peacock 2011).
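As one concrete instance of the quantitative methods cited, expected monetary value (EMV) analysis weights each branch outcome of a decision tree by its probability. The decision and the figures below are invented for illustration:

```python
def expected_monetary_value(outcomes):
    """EMV = sum of (probability * monetary impact) over all branch outcomes."""
    total_p = sum(p for p, _ in outcomes)
    if abs(total_p - 1.0) > 1e-9:
        raise ValueError("branch probabilities must sum to 1")
    return sum(p * value for p, value in outcomes)

# Hypothetical decision: mitigate now (certain cost) vs. accept the risk
# (a chance of a larger loss). All figures are illustrative.
mitigate = expected_monetary_value([(1.0, -50_000)])
accept = expected_monetary_value([(0.3, -400_000), (0.7, 0)])

print(mitigate)  # -50000.0
print(accept)    # about -120000 -> mitigation has the better (less negative) EMV
```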

If multiple risk facets exist for a given item (e.g., cost risk, schedule risk, and technical risk), the different results should be integrated into a cohesive three-dimensional picture of risk. Sensitivity analyses can be applied to both qualitative and quantitative approaches in an attempt to understand how potential variability will affect results. Particular attention should be paid to compound risks (e.g., highly coupled technical risks with inadequate fixed budgets and schedules).

Risk Handling

Risk handling is the process that identifies and selects options and implements the desired option to reduce a risk to an acceptable level, given program constraints (budget, other resources) and objectives (DAU 2003a, 20-23, 70-78).

For a given system-of-interest (SoI), risk handling is primarily performed at two levels. At the system level, the overall ensemble of system risks is initially determined and prioritized, and second-level draft risk element plans (REPs) are prepared for handling the risks. For more complex systems, it is important that the REPs at the higher SoI level are kept consistent with the system RMPs at the lower SoI level, and that the top-level RMP preserves continuing risk traceability across the SoI.

The risk handling strategy selected is the combination of the most desirable risk handling option coupled with a suitable implementation approach for that option (Conrow 2003). Risk handling options include assumption, avoidance, control (mitigation), and transfer. All four options should be evaluated and the best one chosen for each risk. An appropriate implementation approach is then chosen for that option. Hybrid strategies can be developed that include more than one risk handling option, but with a single implementation approach. Additional risk handling strategies can also be developed for a given risk and either implemented in parallel with the primary strategy or be made a contingent strategy that is implemented if a particular trigger event occurs during the execution of the primary strategy. Often, this choice is difficult because of uncertainties in the risk probabilities and impacts. In such cases, buying information to reduce risk uncertainty via prototypes, benchmarking, surveying, modeling, etc. will clarify risk handling decisions (Boehm 1981).
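One hedged way to compare the four handling options quantitatively is by residual exposure (probability times monetary consequence) plus implementation cost. The scoring rule and figures below are illustrative assumptions, not a prescribed method, and in practice the unbiased evaluation described above also weighs factors this simple model omits:

```python
def total_cost(option):
    """Residual exposure (p * consequence) plus the cost of implementing the option."""
    return option["residual_p"] * option["consequence"] + option["handling_cost"]

# Hypothetical figures for the four risk handling options on one risk.
options = {
    "assumption": {"residual_p": 0.40, "consequence": 1_000_000, "handling_cost": 0},
    "avoidance":  {"residual_p": 0.00, "consequence": 1_000_000, "handling_cost": 350_000},
    "control":    {"residual_p": 0.10, "consequence": 1_000_000, "handling_cost": 150_000},
    "transfer":   {"residual_p": 0.40, "consequence": 200_000,   "handling_cost": 120_000},
}

best = min(options, key=lambda name: total_cost(options[name]))
print(best, total_cost(options[best]))  # transfer 200000.0
```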

Risk Handling Plans

A risk handling plan (RHP; a REP at the system level) should be developed and implemented for all high and medium risks, and for selected low risks as warranted.

As identified by Conrow (2003, 365-387), each RHP should include:

  • a risk owner and management contacts
  • selected option
  • implementation approach
  • estimated probability and consequence of occurrence levels at the start and conclusion of each activity
  • specific measurable exit criteria for each activity
  • appropriate metrics
  • resources needed to implement the RHP

Metrics included in each RHP should provide an objective means of determining whether the risk handling strategy is on track and whether it needs to be updated. On larger projects these can include earned value, variation in schedule and technical performance measures (TPMs), and changes in risk level vs. time.

The activities present in each RHP should be integrated into the project’s integrated master schedule or equivalent; otherwise, risk monitoring and control will be ineffective.

Risk Monitoring

Risk monitoring is used to evaluate the effectiveness of risk handling activities against established metrics and provide feedback to the other risk management process steps. Risk monitoring results may also provide a basis to update RHPs, develop additional risk handling options and approaches, and re-analyze risks. In some cases, monitoring results may also be used to identify new risks, revise an existing risk with a new facet, or revise some aspects of risk planning (DAU 2003a, p. 20). Some risk monitoring approaches that can be applied include earned value, program metrics, TPMs, schedule analysis, and variations in risk level. Risk monitoring approaches should be updated and evaluated at the same time and WBS level; otherwise, the results may be inconsistent.
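A simple monitoring check consistent with the ideas above is to compare the actual risk level per review period against the planned reduction in the RHP. The data and the flagging rule below are illustrative assumptions:

```python
# Illustrative risk monitoring check: flag review periods where the actual
# risk level is higher than the burn-down planned in the RHP.
RANK = {"low": 1, "medium": 2, "high": 3}

planned = ["high", "high", "medium", "medium", "low"]  # planned level per period
actual  = ["high", "high", "high",   "medium", "low"]  # observed level per period

off_track = [
    i for i, (p, a) in enumerate(zip(planned, actual))
    if RANK[a] > RANK[p]
]
print(off_track)  # [2] -> period 2 shows a higher risk level than planned
```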

Opportunity and Opportunity Management

In principle, opportunity management is the dual of risk management, with two components: (1) the probability of achieving an improved outcome and (2) the impact of achieving that outcome. Thus, both should be addressed in risk management planning and execution. In practice, however, a positive opportunity exposure will not match a negative risk exposure in utility space, since the positive utility magnitude of improving an expected outcome is considerably less than the negative utility magnitude of failing to meet an expected outcome (Canada 1971; Kahneman and Tversky 1979). Further, since many opportunity management initiatives have failed to anticipate serious side effects, all candidate opportunities should be thoroughly evaluated for potential risks to prevent unintended consequences from occurring.
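The utility asymmetry noted above can be illustrated with a prospect-theory value function. The parameter values (alpha = beta = 0.88, lambda = 2.25) are commonly quoted estimates from Tversky and Kahneman's later work, used here only to illustrate that losses loom larger than equal-sized gains:

```python
# Illustrative prospect-theory value function: gains are valued as x^alpha,
# losses as -lambda * (-x)^beta, with lambda > 1 capturing loss aversion.
# Parameter values are commonly quoted estimates, used only for illustration.
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

gain = value(100_000)    # subjective value of improving an outcome by 100k
loss = value(-100_000)   # subjective value of missing an outcome by 100k
print(abs(loss) / gain)  # about 2.25 -> the loss weighs ~2.25x more
```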

In addition, while opportunities may provide potential benefits for the system or project, each opportunity pursued may have associated risks that detract from the expected benefit. This may reduce the ability to achieve the anticipated effects of the opportunity, in addition to any limitations associated with not pursuing an opportunity.

Linkages to Other Systems Engineering Management Topics

The measurement process provides indicators for risk analysis. Project planning involves the identification of risk and planning for stakeholder involvement. Project assessment and control monitors project risks. Decision management evaluates alternatives for selection and handling of identified and analyzed risks.

Practical Considerations

Key pitfalls and good practices related to systems engineering risk management are described in the next two sections.

Pitfalls

Some of the key pitfalls encountered in performing risk management are below in Table 1.

Table 1. Risk Management Pitfalls. (SEBoK Original)
Name Description
Process Over-Reliance
  • Over-reliance on the process side of risk management without sufficient attention to human and organizational behavioral considerations.
Lack of Continuity
  • Failure to implement risk management as a continuous process. Risk management will be ineffective if it is done just to satisfy project reviews or other discrete criteria (Charette, Dwinnell, and McGarry 2004; Scheinin 2008).
Tool and Technique Over-Reliance
  • Over-reliance on tools and techniques, with insufficient thought and resources expended on how the process will be implemented and run on a day-to-day basis.
Lack of Vigilance
  • A comprehensive risk identification will generally not capture all risks; some risks will always escape detection, which reinforces the need for risk identification to be performed continuously.
Automatic Mitigation Selection
  • Automatically selecting the control (mitigation) risk handling option, rather than evaluating all four options in an unbiased fashion and choosing the “best” option.
Sea of Green
  • Tracking progress of the risk handling plan, while the plan itself may not adequately include steps to reduce the risk to an acceptable level. Progress indicators may appear “green” (acceptable) associated with the risk handling plan: budgeting, staffing, organizing, data gathering, model preparation, etc. However, the risk itself may be largely unaffected if the handling strategy and the resulting plan are poorly developed, do not address potential root cause(s), and do not incorporate actions that will effectively resolve the risk.
Band-Aid Risk Handling
  • Handling risks (e.g., interoperability problems with changes in external systems) by patching each instance, rather than addressing the root cause(s) and reducing the likelihood of future instances.

Good Practices

Some good practices gathered from the references are below in Table 2.

Table 2. Risk Management Good Practices. (SEBoK Original)
Name Description
Top Down and Bottom Up
  • Risk management should be both “top down” and “bottom up” in order to be effective. The project manager or deputy needs to own the process at the top level, but risk management principles should be considered and used by all project personnel.
Early Planning
  • Include the planning process step in the risk management process. Failure to adequately perform risk planning early in the project phase contributes to ineffective risk management.
Risk Analysis Limitations
  • Understand the limitations of risk analysis tools and techniques. Risk analysis results should be challenged because considerable input uncertainty and/or potential errors may exist.
Robust Risk Handling Strategy
  • The risk handling strategy should attempt to reduce both the probability and the consequence of occurrence. It is also imperative that the resources needed to properly implement the chosen strategy be available in a timely manner; otherwise, the risk handling strategy, and the entire risk management process, will be viewed as a “paper tiger.”
Structured Risk Monitoring
  • Risk monitoring should be a structured approach to compare actual vs. anticipated cost, performance, schedule, and risk outcomes associated with implementing the RHP. When ad-hoc or unstructured approaches are used, or when risk level vs. time is the only metric tracked, the resulting risk monitoring usefulness can be greatly reduced.
Update Risk Database
  • The risk management database (registry) should be updated throughout the course of the program, striking a balance between excessive resources required and insufficient updates performed. Database updates should occur at both a tailored, regular interval and following major program changes.

References

Works Cited

Boehm, B. 1981. Software Engineering Economics. Upper Saddle River, NJ, USA: Prentice Hall.

Boehm, B. 1989. Software Risk Management. Los Alamitos, CA; Tokyo, Japan: IEEE Computer Society Press, pp. 115-125.

Canada, J.R. 1971. Intermediate Economic Analysis for Management and Engineering. Upper Saddle River, NJ, USA: Prentice Hall.

Carr, M., S. Konda, I. Monarch, F. Ulrich, and C. Walker. 1993. Taxonomy-based risk identification. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie-Mellon University (CMU), CMU/SEI-93-TR-6.

Charette, R., L. Dwinnell, and J. McGarry. 2004. "Understanding the roots of process performance failure." CROSSTALK: The Journal of Defense Software Engineering (August 2004): 18-24.

Clemen, R., and T. Reilly. 2001. Making hard decisions. Boston, MA, USA: Duxbury.

Conrow, E. 2003. Effective Risk Management: Some Keys to Success, 2nd ed. Reston, VA, USA: American Institute of Aeronautics and Astronautics (AIAA).

Conrow, E. 2008. "Risk analysis for space systems." Paper presented at Space Systems Engineering and Risk Management Symposium, 27-29 February, 2008, Los Angeles, CA, USA.

Conrow, E. and P. Shishido. 1997. "Implementing risk management on software intensive projects." IEEE Software. 14(3) (May/June 1997): 83-89.

DAU. 2003a. Risk Management Guide for DoD Acquisition: Fifth Edition, version 2. Ft. Belvoir, VA, USA: Defense Acquisition University (DAU) Press.

DAU. 2003b. U.S. Department of Defense extension to: A guide to the project management body of knowledge (PMBOK(R) guide), first edition. Version 1. 1st ed. Ft. Belvoir, VA, USA: Defense Acquisition University (DAU) Press.

DoD. 2015. Risk, Issue, and Opportunity Management Guide for Defense Acquisition Programs. Washington, DC, USA: Office of the Deputy Assistant Secretary of Defense for Systems Engineering/Department of Defense.

Evans, M., N. Hastings, and B. Peacock. 2000. Statistical Distributions, 3rd ed. New York, NY, USA: Wiley-Interscience.

Forbes, C., M. Evans, N. Hastings, and B. Peacock. 2011. Statistical Distributions, 4th ed. Hoboken, NJ, USA: John Wiley & Sons.

Gallagher, B., P. Case, R. Creel, S. Kushner, and R. Williams. 2005. A taxonomy of operational risk. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie-Mellon University (CMU), CMU/SEI-2005-TN-036.

Gluch, P. 1994. A Construct for Describing Software Development Risks. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie-Mellon University (CMU), CMU/SEI-94-TR-14.

ISO/IEC/IEEE. 2015. Systems and Software Engineering -- System Life Cycle Processes. Geneva, Switzerland: International Organisation for Standardisation / International Electrotechnical Commissions / Institute of Electrical and Electronics Engineers. ISO/IEC/IEEE 15288:2015.

Kerzner, H. 2009. Project Management: A Systems Approach to Planning, Scheduling, and Controlling. 10th ed. Hoboken, NJ, USA: John Wiley & Sons.

Kahneman, D., and A. Tversky. 1979. "Prospect theory: An analysis of decision under risk." Econometrica. 47(2) (Mar., 1979): 263-292.

Kumamoto, H. and E. Henley. 1996. Probabilistic Risk Assessment and Management for Engineers and Scientists, 2nd ed. Piscataway, NJ, USA: Institute of Electrical and Electronics Engineers (IEEE) Press.

Law, A. 2007. Simulation Modeling and Analysis, 4th ed. New York, NY, USA: McGraw Hill.

Mun, J. 2010. Modeling Risk, 2nd ed. Hoboken, NJ, USA: John Wiley & Sons.

NASA. 2002. Probabilistic Risk Assessment Procedures Guide for NASA Managers and Practitioners, version 1.1. Washington, DC, USA: Office of Safety and Mission Assurance/National Aeronautics and Space Administration (NASA).

PMI. 2013. A Guide to the Project Management Body of Knowledge (PMBOK® Guide), 5th ed. Newtown Square, PA, USA: Project Management Institute (PMI).

Scheinin, W. 2008. "Start Early and Often: The Need for Persistent Risk Management in the Early Acquisition Phases." Paper presented at Space Systems Engineering and Risk Management Symposium, 27-29 February 2008, Los Angeles, CA, USA.

SEI. 2010. Capability Maturity Model Integrated (CMMI) for Development, version 1.3. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU).

Vose, D. 2000. Quantitative Risk Analysis, 2nd ed. New York, NY, USA: John Wiley & Sons.

Willis, H.H., A.R. Morral, T.K. Kelly, and J.J. Medby. 2005. Estimating Terrorism Risk. Santa Monica, CA, USA: The RAND Corporation, MG-388.

Primary References

Boehm, B. 1981. Software Engineering Economics. Upper Saddle River, NJ, USA:Prentice Hall.

Boehm, B. 1989. Software Risk Management. Los Alamitos, CA; Tokyo, Japan: IEEE Computer Society Press, p. 115-125.

Conrow, E.H. 2003. Effective Risk Management: Some Keys to Success, 2nd ed. Reston, VA, USA: American Institute of Aeronautics and Astronautics (AIAA).

DoD. 2015. Risk, Issue, and Opportunity Management Guide for Defense Acquisition Programs. Washington, DC, USA: Office of the Deputy Assistant Secretary of Defense for Systems Engineering/Department of Defense.

SEI. 2010. Capability Maturity Model Integrated (CMMI) for Development, version 1.3. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU).

Additional References

Canada, J.R. 1971. Intermediate Economic Analysis for Management and Engineering. Upper Saddle River, NJ, USA: Prentice Hall.

Carr, M., S. Konda, I. Monarch, F. Ulrich, and C. Walker. 1993. Taxonomy-based risk identification. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie-Mellon University (CMU), CMU/SEI-93-TR-6.

Charette, R. 1990. Application Strategies for Risk Management. New York, NY, USA: McGraw-Hill.

Charette, R. 1989. Software Engineering Risk Analysis and Management. New York, NY, USA: McGraw-Hill (MultiScience Press).

Charette, R., L. Dwinnell, and J. McGarry. 2004. "Understanding the roots of process performance failure." CROSSTALK: The Journal of Defense Software Engineering (August 2004): 18-24.

Clemen, R., and T. Reilly. 2001. Making hard decisions. Boston, MA, USA: Duxbury.

Conrow, E. 2010. "Space program schedule change probability distributions." Paper presented at American Institute of Aeronautics and Astronautics (AIAA) Space 2010, 1 September 2010, Anaheim, CA, USA.

Conrow, E. 2009. "Tailoring risk management to increase effectiveness on your project." Presentation to the Project Management Institute, Los Angeles Chapter, 16 April, 2009, Los Angeles, CA.

Conrow, E. 2008. "Risk analysis for space systems." Paper presented at Space Systems Engineering and Risk Management Symposium, 27-29 February, 2008, Los Angeles, CA, USA.

Conrow, E. and P. Shishido. 1997. "Implementing risk management on software intensive projects." IEEE Software. 14(3) (May/June 1997): 83-89.

DAU. 2003a. Risk Management Guide for DoD Acquisition: Fifth Edition. Version 2. Ft. Belvoir, VA, USA: Defense Acquisition University (DAU) Press.

DAU. 2003b. U.S. Department of Defense extension to: A guide to the project management body of knowledge (PMBOK(R) guide), 1st ed. Ft. Belvoir, VA, USA: Defense Acquisition University (DAU) Press.

Dorofee, A., J. Walker, C. Alberts, R. Higuera, R. Murphy, and R. Williams (eds). 1996. Continuous Risk Management Guidebook. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie-Mellon University (CMU).

Gallagher, B., P. Case, R. Creel, S. Kushner, and R. Williams. 2005. A taxonomy of operational risk. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie-Mellon University (CMU), CMU/SEI-2005-TN-036.

Gluch, P. 1994. A Construct for Describing Software Development Risks. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie-Mellon University (CMU), CMU/SEI-94-TR-14.

Haimes, Y.Y. 2009. Risk Modeling, Assessment, and Management. Hoboken, NJ, USA: John Wiley & Sons, Inc.

Hall, E. 1998. Managing Risk: Methods for Software Systems Development. New York, NY, USA: Addison Wesley Professional.

INCOSE. 2015. Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities, version 4. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2014-001-04.

ISO. 2009. Risk Management—Principles and Guidelines. Geneva, Switzerland: International Organization for Standardization (ISO), ISO 31000:2009.

ISO/IEC. 2009. Risk Management—Risk Assessment Techniques. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC 31010:2009.

ISO/IEC/IEEE. 2006. Systems and Software Engineering - Risk Management. Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC)/Institute of Electrical and Electronics Engineers (IEEE). ISO/IEC/IEEE 16085.

ISO. 2003. Space Systems - Risk Management. Geneva, Switzerland: International Organization for Standardization (ISO), ISO 17666:2003.

Jones, C. 1994. Assessment and Control of Software Risks. Upper Saddle River, NJ, USA: Prentice-Hall.

Kahneman, D. and A. Tversky. 1979. "Prospect theory: An analysis of decision under risk." Econometrica. 47(2) (Mar., 1979): 263-292.

Kerzner, H. 2009. Project Management: A Systems Approach to Planning, Scheduling, and Controlling, 10th ed. Hoboken, NJ: John Wiley & Sons.

Kumamoto, H., and E. Henley. 1996. Probabilistic Risk Assessment and Management for Engineers and Scientists, 2nd ed. Piscataway, NJ, USA: Institute of Electrical and Electronics Engineers (IEEE) Press.

Law, A. 2007. Simulation Modeling and Analysis, 4th ed. New York, NY, USA: McGraw Hill.

MITRE. 2012. Systems Engineering Guide to Risk Management. Available online: http://www.mitre.org/work/systems_engineering/guide/acquisition_systems_engineering/risk_management/. Accessed on July 7, 2012. Page last updated on May 8, 2012.

Mun, J. 2010. Modeling Risk, 2nd ed. Hoboken, NJ, USA: John Wiley & Sons.

NASA. 2002. Probabilistic Risk Assessment Procedures Guide for NASA Managers and Practitioners, version 1.1. Washington, DC, USA: Office of Safety and Mission Assurance/National Aeronautics and Space Administration (NASA).

PMI. 2013. A Guide to the Project Management Body of Knowledge (PMBOK® Guide), 5th ed. Newtown Square, PA, USA: Project Management Institute (PMI).

Scheinin, W. 2008. "Start Early and Often: The Need for Persistent Risk Management in the Early Acquisition Phases." Paper presented at Space Systems Engineering and Risk Management Symposium, 27-29 February 2008, Los Angeles, CA, USA.

USAF. 2005. SMC systems engineering primer & handbook: Concepts, processes, and techniques, 3rd ed. Los Angeles, CA, USA: Space & Missile Systems Center/U.S. Air Force (USAF).

USAF. 2014. SMC Risk Management Process Guide, version 2. Los Angeles, CA, USA: Space & Missile Systems Center/U.S. Air Force (USAF).

Vose, D. 2000. Quantitative Risk Analysis. 2nd ed. New York, NY, USA: John Wiley & Sons.

Willis, H.H., A.R. Morral, T.K. Kelly, and J.J. Medby. 2005. Estimating Terrorism Risk. Santa Monica, CA, USA: The RAND Corporation, MG-388.


SEBoK v. 2.0, released 1 June 2019