Difference between pages "System Life Cycle Process Models: Iterative" and "Emergence"

There are a large number of [[Life Cycle Models|life cycle process models]]. As discussed in the [[System Life Cycle Process Drivers and Choices]] article, these models fall into three major categories: (1) primarily pre-specified and sequential processes; (2) primarily evolutionary and concurrent processes (e.g., the rational unified process and various forms of the Vee and spiral models); and (3) primarily interpersonal and unconstrained processes (e.g., agile development, Scrum, extreme programming (XP), dynamic system development methods, and innovation-based processes).
This topic forms part of the [[Systems Fundamentals]] knowledge area (KA). It gives the background to some of the ways in which {{Term|Emergence (glossary)|emergence}} has been described, as well as an indication of current thinking on what it is and how it influences {{Term|Systems Engineering (glossary)|systems engineering}} (SE) practice. It will discuss how these ideas relate to the general definitions of {{Term|System (glossary)|systems}} given in [[What is a System?]]; in particular, how they relate to different {{Term|Engineered System (glossary)|engineered system}} contexts. This topic is closely related to the [[Complexity|complexity]] topic that precedes it.
  
This article discusses incremental and evolutionary development models (the second and third categories listed above) beyond variants of the [[System Life Cycle Process Models: Vee|Vee model]]. While there are a number of different models describing the project environment, the spiral model and the Vee model have become the dominant approaches to visualizing the development process. Both the Vee and the spiral are useful models that emphasize different aspects of a system life cycle.

Emergence is a consequence of the fundamental system {{Term|Concept (glossary)|concepts}} of {{Term|Holism (glossary)|holism}} and interaction (Hitchins 2007, 27). System wholes have {{Term|Behavior (glossary)|behaviors}} and properties arising from the organization of their {{Term|Element (glossary)|elements}} and their relationships, which only become apparent when the system is placed in different {{Term|Environment (glossary)|environments}}.
  
General implications of using iterative models for system design and development are discussed below. For a more specific understanding of how this life cycle model impacts systems engineering activities, please see the other knowledge areas (KAs) in Part 3. This article is focused on the use of iterative life cycle process models in systems engineering; however, because iterative process models are commonly used in software development, many of the examples below come from software projects. (See [[Systems Engineering and Software Engineering]] in [[Related Disciplines|Part 6]] for more information on life cycle implications in software engineering.)
Questions that arise from this definition include: What kinds of systems exhibit different kinds of emergence and under what conditions? Can emergence be predicted, and is it beneficial or detrimental to a system? How do we deal with emergence in the development and use of engineered systems? Can it be planned for? How?
  
==Incremental and Evolutionary Development ==
There are many varied and occasionally conflicting views on emergence. This topic presents the prevailing views and provides references for others.
===Overview of the Incremental Approach===
----
 
<center>'''''by Janet Singer, Duane Hybertson, and Rick Adcock'''''</center>
Incremental and iterative development (IID) methods have been in use since the 1960s (and perhaps earlier). They allow a project to provide an initial capability followed by successive deliveries to reach the desired {{Term|System-of-Interest (glossary)|system-of-interest}} (SoI). 
----
 
==Overview of Emergence==
The IID approach, shown in Figure 1, is used when:
As defined by Checkland, {{Term|Emergence (glossary)|emergence}} is “the principle that entities exhibit properties which are meaningful only when attributed to the whole, not to its parts” (Checkland 1999, 314). Emergent system {{Term|Behavior (glossary)|behavior}} can be viewed as a consequence of the interactions and relationships between {{Term|System Element (glossary)|system elements}} rather than the behavior of individual elements. It emerges from a combination of the behavior and properties of the system elements and the system's structure, or allowable interactions between the elements, and may be triggered or influenced by a stimulus from the system's environment.
 
 
*rapid exploration and implementation of part of the system is desired;
 
*the requirements are unclear from the beginning;
 
*funding is constrained;
 
*the customer wishes to hold the SoI open to the possibility of inserting new technology at a later time; and/or
 
*experimentation is required to develop successive {{Term|Prototype (glossary)|prototype}} versions.
 
 
 
The attributes that distinguish IID from the single-pass, plan-driven approach are velocity and adaptability. 
 
 
[[File:KF_IncrementalDevelopment_Multiple.png|frame|center|600px|'''Figure 1. Incremental Development with Multiple Deliveries (Forsberg, Mooz, and Cotterman 2005).''' Reprinted with permission of John Wiley & Sons Inc. All other rights are reserved by the copyright owner.]]
 
 
 
Incremental development may also be “plan-driven” in nature if the requirements are known early on in the life cycle. The development of the functionality is performed incrementally to allow for insertion of the latest technology or for potential changes in needs or requirements. IID also imposes constraints. The example shown in Figure 2 uses the increments to develop high-risk subsystems (or components) early, but the system cannot function until all increments are complete.
 
 
 
[[File:Incremental_Development_with_a_single_delivery.PNG|thumb|center|600px|'''Figure 2. Incremental Development with a Single Delivery (Forsberg, Mooz, Cotterman 2005).''' Reprinted with permission of John Wiley & Sons Inc. All other rights are reserved by the copyright owner.]]
 
 
 
===Overview of the Evolutionary Approach===
 
A specific IID methodology called evolutionary development is common in research and development (R&D) environments in both the government and commercial sectors. Figure 3 illustrates this approach, which was used in the evolution of the high-temperature tiles for the NASA Space Shuttle (Forsberg 1995). In the evolutionary approach, the end state of each phase of development is unknown, though the goal is for each phase to result in some sort of useful product.
 
 
 
[[File:Evolutionary_Generic_Model.PNG|thumb|center|600px|'''Figure 3. Evolutionary Generic Model (Forsberg, Mooz, Cotterman 2005).''' Reprinted with permission of John Wiley & Sons, Inc. All other rights are reserved by the copyright owner.]]
 
 
 
The real-world development environment is complex and difficult to map because many different project cycles are underway simultaneously. Figure 4 shows the applied research era for the development of the space shuttle Orbiter and illustrates multiple levels of simultaneous development, trade studies, and, ultimately, implementation.
 
 
 
[[File:KF_EvolutionComponents_Orbiter.png|thumb|center|600px|'''Figure 4. Evolution of Components and Orbiter Subsystems (including space shuttle tiles) During Creation of a Large "Single-Pass" Project (Forsberg 1995).''' Reprinted with permission of Kevin Forsberg. All other rights are reserved by the copyright owner.]]
 
 
 
==Iterative Software Development Process Models==
 
 
 
Software is a flexible and malleable medium which facilitates iterative [[System Analysis|analysis]], [[System Definition|design]], [[System Realization|construction]], [[System Verification |verification]], and [[System Validation|validation]] to a greater degree than is usually possible for the purely physical components of a system. Each repetition of an iterative development model adds material (code) to the growing software base; the expanded code base is tested, reworked as necessary, and demonstrated to satisfy the requirements for the baseline.
 
 
 
Process models for software development support iterative development on cycles of various lengths. Table 1 lists three iterative software development models which are presented in more detail below, as well as the aspects of software development that are emphasized by those models.
 
 
 
<center>
 
{|
 
|+'''Table 1. Primary Emphases of Three Iterative Software Development Models.'''  
 
(SEBoK Original)
 
!Iterative Model
 
!Emphasis
 
|-
 
|'''Incremental-build'''
 
|Iterative implementation-verification-validation-demonstration cycles
 
|-
 
|'''Spiral'''
 
|Iterative risk-based analysis of alternative approaches and evaluation of outcomes
 
|-
 
|'''Agile'''
 
|Iterative evolution of requirements and code
 
|}
 
</center>
 
 
 
Please note that the information below is focused specifically on the utilization of different life cycle models for software systems. In order to better understand the interactions between software engineering (SwE) and systems engineering (SE), please see the [[Systems Engineering and Software Engineering]] KA in [[Related Disciplines|Part 6]].
 
 
 
===Overview of Iterative-Development Process Models===
 
Developing and modifying software involves creative processes that are subject to many external and changeable forces. Long experience has shown that it is impossible to “get it right” the first time, and that iterative development processes are preferable to linear, sequential development process models, such as the well-known Waterfall model. In iterative development, each cycle of the iteration subsumes the software of the previous iteration and adds new capabilities to the evolving product to create an expanded version of the software.  Iterative development processes provide the following advantages:
 
 
 
*Continuous integration, verification, and validation of the evolving product;
 
*Frequent demonstrations of progress;
 
*Early detection of defects;
 
*Early warning of process problems;
 
*Systematic incorporation of the inevitable rework that occurs in software development; and
 
*Early delivery of subset capabilities (if desired).
 
 
 
Iterative development takes many forms in SwE, including the following:
 
 
 
*An incremental-build process, which is used to produce periodic (typically weekly) builds of increasing product capabilities;
 
*Agile development, which is used to closely involve a prototypical customer in an iterative process that may repeat on a daily basis; and
 
*The spiral model, which is used to confront and mitigate risk factors encountered in developing the successive versions of a product.
 
 
 
==The Incremental-Build Model==
 
The incremental-build model is a build-test-demonstrate model of iterative cycles in which frequent demonstrations of progress, verification, and validation of work-to-date are emphasized. The model is based on stable requirements and a software architectural specification. Each build adds new capabilities to the incrementally growing product. The process ends when the final version is verified, validated, demonstrated, and accepted by the customer.
 
 
 
Table 2 lists some criteria for partitioning development into incremental build units of (typically) one calendar week each. The increments and the number of developers available to work on the project determine the number of features that can be included in each incremental build. This, in turn, determines the overall schedule.
 
 
 
<center>
 
{|
 
|+'''Table 2.  Some partitioning criteria for incremental builds (Fairley 2009).''' Reprinted with permission of the IEEE Computer Society and John Wiley & Sons Inc. All other rights are reserved by the copyright owner.
 
!Kind of System
 
!Partitioning Criteria
 
|-
 
|Application package
 
|Priority of features
 
|-
 
|Safety-critical systems
 
|Safety features first; prioritized others follow
 
|-
 
|User-intensive systems
 
|User interface first; prioritized others follow
 
|-
 
|System software
 
|Kernel first; prioritized utilities follow
 
|}
 
</center>
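As a minimal illustration of the scheduling relationship described above, the Python sketch below groups a prioritized feature list into weekly builds of fixed capacity and derives the resulting number of builds. It is an illustration only; the feature names and per-build capacity are hypothetical, and the partitioning follows the "safety features first" criterion of Table 2.

<syntaxhighlight lang="python">
def plan_incremental_builds(features, features_per_build):
    """Group a prioritized feature list into weekly incremental builds.

    features: feature names, ordered by priority (highest first).
    features_per_build: how many features the team is assumed able to
        implement, verify, and validate in one weekly build.
    """
    return [features[i:i + features_per_build]
            for i in range(0, len(features), features_per_build)]

# Hypothetical safety-critical system: safety features first,
# prioritized others follow (see Table 2).
features = ["emergency shutdown", "alarm handling", "sensor monitoring",
            "operator display", "report generation", "data archiving"]

builds = plan_incremental_builds(features, features_per_build=2)
print(f"Overall schedule: {len(builds)} weekly builds")
for week, content in enumerate(builds, start=1):
    print(f"  Build {week}: {', '.join(content)}")
</syntaxhighlight>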
 
 
 
Figure 5 illustrates the details of the build-verify-validate-demonstrate cycles in the incremental build process. Each build includes detailed design, coding, integration, review, and testing done by the developers. In cases where code is to be reused without modification, some or all of an incremental build may consist of review, integration, and testing of the base code augmented with the reused code. It is important to note that developing an increment may result in reworking previously developed components to fix defects.
 
 
 
[[File:KF_IncrementalBuildCycles.png|frame|center|600px|'''Figure 5. Incremental Build-Verify-Validate-Demonstrate Cycles (Fairley 2009).''' Reprinted with permission of the IEEE Computer Society and John Wiley & Sons Inc. All other rights are reserved by the copyright owner.]]
 
 
 
Incremental verification, validation, and demonstration, as illustrated in Figure 5, overcome two of the major problems of a waterfall approach by:
 
*exposing problems early so they can be corrected as they occur; and
 
*incorporating minor in-scope changes to requirements that occur as a result of incremental demonstrations in subsequent builds.
 
 
 
Figure 5 also illustrates that it may be possible to overlap successive builds of the product. It may be possible, for example, to start a detailed design of the next version while the present version is being validated.
 
 
 
Three factors determine the degree of overlap that can be achieved:
 
 
 
#Availability of personnel;
 
#Adequate progress on the previous version; and
 
#The risk of significant rework on the next overlapped build because of changes to the previous in-progress build.
 
 
 
The incremental build process generally works well with small teams, but can be scaled up for larger projects.
 
 
 
A significant advantage of an incremental build process is that features built first are verified, validated, and demonstrated most frequently because subsequent builds incorporate the features of the earlier iterations. In building the software to control a nuclear reactor, for example, the emergency shutdown software could be built first, as it would then be verified and validated in conjunction with the features of each successive build.
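The following Python sketch illustrates this cumulative verification effect: each build's verification run includes the tests of all previously delivered features, so the earliest features (such as the emergency shutdown example) are exercised in every subsequent build. The feature names and trivially passing tests are hypothetical placeholders.

<syntaxhighlight lang="python">
def run_build_verification(new_features, delivered_features, test_suites):
    """Verify an incremental build against the accumulated baseline.

    The regression set is the union of tests for all previously delivered
    features plus the tests for the new build, so the earliest features
    are re-verified in every subsequent build.
    """
    baseline = delivered_features + new_features
    for feature in baseline:
        for test in test_suites.get(feature, []):
            assert test(), f"Regression failure in '{feature}'"
    return baseline  # the new delivered baseline

# Hypothetical test suites; each test returns True when it passes.
test_suites = {
    "emergency shutdown": [lambda: True, lambda: True],
    "alarm handling": [lambda: True],
    "operator display": [lambda: True],
}

delivered = []
for build in (["emergency shutdown"], ["alarm handling"], ["operator display"]):
    delivered = run_build_verification(build, delivered, test_suites)
    print(f"Build accepted; verified baseline: {delivered}")
</syntaxhighlight>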
 
 
 
In summary, the incremental build model, like all iterative models, provides the advantages of continuous integration and validation of the evolving product, frequent demonstrations of progress, early warning of problems, early delivery of subset capabilities, and systematic incorporation of the inevitable rework that occurs in software development.
 
 
 
===The Role of Prototyping in Software Development===
 
In SwE, a {{Term|Prototype (glossary)|prototype}} is a mock-up of the desired functionality of some part of the system. This is in contrast to physical systems, where a {{Term|prototype (glossary)|prototype}} is usually the first fully functional version of a system (Fairley 2009, 74).
 
 
 
In the past, incorporating prototype software into production systems has created many problems.  Prototyping is a useful technique that should be employed as appropriate; however, prototyping is ''not'' a process model for software development. When building a software prototype, the knowledge gained through the development of the prototype is beneficial to the program; however, the prototype code may not be used in the deliverable version of the system. In many cases, it is more efficient and more effective to build the production code from scratch using the knowledge gained by prototyping than to re-engineer the existing code.
 
 
 
===Life Cycle Sustainment of Software===
 
Software, like all systems, requires sustainment efforts to enhance capabilities, adapt to new environments, and correct defects. The primary distinction for software is that sustainment efforts change the software; unlike physical entities, software components do not have to be replaced because of physical wear and tear. Changing the software requires {{Term|verification (glossary)|re-verification}} and {{Term|validation (glossary)|re-validation}}, which may involve extensive regression testing to determine that the change has the desired effect and has not altered other aspects of functionality or behavior.
 
 
 
===Retirement of Software===
 
Useful software is rarely retired; however, software that is useful often experiences many upgrades during its lifetime. A later version may bear little resemblance to the initial release. In some cases, software that ran in a former operational environment is executed on hardware emulators that provide a virtual machine on newer hardware. In other cases, a major enhancement may replace and rename an older version of the software, but the enhanced version provides all of the capabilities of the previous software in a compatible manner. Sometimes, however, a newer version of software may fail to provide compatibility with the older version, which necessitates other changes to a system.
 
 
 
==Primarily Evolutionary and Concurrent Processes: The Incremental Commitment Spiral Model==
 
===Overview of the Incremental Commitment Spiral Model===
 
 
 
A view of the Incremental Commitment Spiral Model (ICSM) is shown in Figure 6.
 
 
 
[[File:KF_IncrementalCommitmentSpiral.png|thumb|center|900px|'''Figure 6. The Incremental Commitment Spiral Model (ICSM) (Pew and Mavor 2007).''' Reprinted with permission by the National Academy of Sciences, Courtesy of National Academies Press, Washington, D.C. All other rights are reserved by the copyright owner.]]
 
 
In the ICSM, each spiral addresses requirements and solutions concurrently, rather than sequentially, as well as products and processes, hardware, software, human factors aspects, and business case analyses of alternative product configurations or product line investments. The stakeholders consider the risks and risk mitigation plans and decide on a course of action. If the risks are acceptable and covered by risk mitigation plans, the project proceeds into the next spiral.
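The sketch below gives a simplified, illustrative rendering in Python of the risk-driven decision described above. The risk levels, thresholds, and outcome wording are assumptions made for the example, not the formal ICSM decision criteria.

<syntaxhighlight lang="python">
def spiral_commitment_decision(risks):
    """Illustrative stakeholder decision at the end of one spiral.

    risks: list of dicts with an assessed 'level' (0.0 to 1.0) and a flag
    saying whether a credible mitigation plan exists. The thresholds and
    outcome wording are simplified assumptions, not ICSM definitions.
    """
    if not risks or all(r["level"] < 0.2 for r in risks):
        return "risks negligible: commit to the next spiral"
    if all(r["level"] < 0.7 or r["mitigation_plan"] for r in risks):
        return "risks acceptable and covered: proceed, tracking the mitigation plans"
    return "risks too high or uncovered: rescope or discontinue"

example_risks = [
    {"name": "immature sensor technology", "level": 0.6, "mitigation_plan": True},
    {"name": "unclear operator workload", "level": 0.3, "mitigation_plan": True},
]
print(spiral_commitment_decision(example_risks))
</syntaxhighlight>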
 
 
 
The development spirals after the first development commitment review follow the three-team incremental development approach for achieving both agility and assurance shown and discussed in Figure 2, "Evolutionary-Concurrent Rapid Change Handling and High Assurance" of [[System Life Cycle Process Drivers and Choices]].
 
 
 
===Other Views of the Incremental Commitment Spiral Model===
 
Figure 7 presents an updated view of the ICSM life cycle process recommended in the National Research Council ''Human-System Integration in the System Development Process'' study (Pew and Mavor 2007). It was called the Incremental Commitment Model (ICM) in the study. The ICSM builds on the strengths of current process models, such as early verification and validation concepts in the [[System Life Cycle Process Models: Vee|Vee model]], concurrency concepts in the concurrent engineering model, lighter-weight concepts in the agile and lean models, risk-driven concepts in the spiral model, the phases and anchor points in the rational unified process (RUP) (Kruchten 1999; Boehm 1996), and recent extensions of the spiral model to address systems of systems (SoS) capability acquisition (Boehm and Lane 2007).
 
 
 
[[File:KF_Phase_GenericIncremental.png|thumb|center|900px|'''Figure 7. Phased View of the Generic Incremental Commitment Spiral Model Process (Pew and Mavor 2007).''' Reprinted with permission by the National Academy of Sciences, Courtesy of National Academies Press, Washington, D.C. All other rights are reserved by the copyright owner.]]
 
 
 
The top row of activities in Figure 7 indicates that a number of system aspects are being concurrently engineered at an increasing level of understanding, definition, and development. The most significant of these aspects are shown in Figure 8, an extension of a similar ''“hump diagram”'' view of concurrently engineered software activities developed as part of the RUP (Kruchten 1999).
 
 
 
[[File:KF_ICSMActivityCategories.png|thumb|center|900px|'''Figure 8. ICSM Activity Categories and Level of Effort (Pew and Mavor 2007).''' Reprinted with permission by the National Academy of Sciences, Courtesy of National Academies Press, Washington, D.C. All other rights are reserved by the copyright owner.]]
 
 
 
As with the RUP version, the magnitude and shape of the levels of effort will be risk-driven and likely to vary from project to project. Figure 8 indicates that a great deal of concurrent activity occurs within and across the various ICSM phases, all of which need to be ''"synchronized and stabilized,"'' a best-practice phrase taken from ''Microsoft Secrets'' (Cusumano and Selby 1996) to keep the project under control.   
 
  
The review processes and use of independent experts are based on the highly successful AT&T Architecture Review Board procedures described in “Architecture Reviews: Practice and Experience” (Maranzano et al. 2005). Figure 9 shows the content of the feasibility evidence description. Showing feasibility of the concurrently developed elements helps synchronize and stabilize the concurrent activities.
Emergence is common in nature. The pungent gas ammonia results from the chemical combination of two odorless gases, hydrogen and nitrogen. As individual parts, feathers, beaks, wings, and gullets do not have the ability to overcome gravity. Properly connected in a bird, however, they create the emergent behavior of flight. What we refer to as “self-awareness” results from the combined effect of the interconnected and interacting neurons that make up the brain (Hitchins 2007, 7).  
  
[[File:KF_FeasibilityEvidenceDescription.png|thumb|center|1100px|'''Figure 9. Feasibility Evidence Description Content (Pew and Mavor 2007).''' Reprinted with permission by the National Academy of Sciences, Courtesy of National Academies Press, Washington, D.C. All other rights are reserved by the copyright owner.]]
Hitchins also notes that technological systems exhibit emergence. We can observe a number of levels of outcome which arise from interaction between elements in an {{Term|Engineered System (glossary)|engineered system}} context. At a simple level, some system outcomes or {{Term|Attribute (glossary)|attributes}} have a fairly simple and well-defined mapping to their {{Term|Element (glossary)|elements}}; for example, the center of gravity or top speed of a vehicle results from a combination of element properties and how they are combined. Other behaviors can be associated with these simple outcomes, but their value emerges in {{Term|Complex (glossary)|complex}} and less predictable ways across a system. The single-lap performance of a vehicle around a track is related to center of gravity and speed; however, it is also affected by driver skill, external conditions, component wear, etc. Getting the 'best' performance from a vehicle can only be achieved by a combination of good design and feedback from real laps under race conditions.
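As an illustration of such a simple, well-defined mapping, the Python sketch below computes a vehicle's center of gravity as the mass-weighted mean position of its major elements. The component masses and positions are hypothetical.

<syntaxhighlight lang="python">
def center_of_gravity(components):
    """Mass-weighted mean position of the components (one dimension)."""
    total_mass = sum(mass for mass, _ in components)
    return sum(mass * position for mass, position in components) / total_mass

# Hypothetical elements: (mass in kg, longitudinal position in m from the front axle)
vehicle_components = [
    (180.0, 0.2),   # engine
    (60.0, 1.4),    # fuel tank
    (75.0, 1.1),    # driver
    (900.0, 1.6),   # chassis and body
]
print(f"Center of gravity: {center_of_gravity(vehicle_components):.2f} m")
</syntaxhighlight>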
  
The operations commitment review (OCR) is different in that it addresses the often-higher operational risks of fielding an inadequate system. In general, stakeholders will experience a two- to ten-fold increase in commitment level while going through the sequence of engineering certification review (ECR) to design certification review (DCR) {{Term|Milestone (glossary)|milestones}}, but the increase in going from DCR to OCR can be much higher. These commitment levels are based on typical cost profiles across the various stages of the acquisition life cycle.
There are also outcomes which are less tangible and which come as a surprise to both system developers and {{Term|User (glossary)|users}}. How does lap time translate into a winning motor racing team? Why is a sports car more desirable to many than other vehicles with performances that are as good or better? 
  
===Underlying ICSM Principles===
Emergence can always be observed at the highest level of a system. However, Hitchins (2007, 7) also points out that, to the extent that the system's elements can themselves be considered as systems, they also exhibit emergence. Page (Page 2009) refers to emergence as a “macro-level property.” Ryan (Ryan 2007) contends that emergence is coupled to {{Term|Scope (glossary)|scope}} rather than system hierarchical levels. In Ryan’s terms, scope has to do with spatial dimensions (how system elements are related to each other) rather than hierarchical levels.
ICSM has four underlying principles which must be followed:
 
#Stakeholder value-based system definition and evolution;
 
#Incremental commitment and accountability;
 
#Concurrent system and software definition and development; and
 
#Evidence and risk-based decision making.
 
  
===Model Experience to Date===
Abbott (Abbott 2006) does not disagree with the general definition of emergence as discussed above. However, he takes issue with the notion that emergence operates outside the bounds of classical physics. He says that “such higher-level entities…can always be reduced to primitive physical forces.”
  
The National Research Council Human-Systems Integration study (2008) found that the ICSM processes and principles correspond well with best commercial practices, as described in the [[Next Generation Medical Infusion Pump Case Study]] in Part 7.  Further examples are found in ''Human-System Integration in the System Development Process: A New Look'' (Pew and Mavor 2007, chap. 5), ''Software Project Management'' (Royce 1998, Appendix D), and the annual series of "Top Five Quality Software Projects", published in CrossTalk (2002-2005).
Bedau and Humphreys (2008) and Francois (2004) provide comprehensive descriptions of the philosophical and scientific background of emergence.
  
==Agile and Lean Processes==
==Types of Emergence==
  
According to the INCOSE ''Systems Engineering Handbook'' 3.2.2, ''“Project execution methods can be described on a continuum from 'adaptive' to 'predictive.' Agile methods exist on the 'adaptive' side of this continuum, which is not the same as saying that agile methods are 'unplanned' or 'undisciplined,'” ''(INCOSE 2011, 179). Agile development methods can be used to support iterative life cycle models, allowing flexibility over a linear process that better aligns with the planned life cycle for a system. They primarily emphasize the development and use of tacit interpersonal knowledge as compared to explicit documented knowledge, as evidenced in the four value propositions in the '''"Agile Manifesto"''':
A variety of definitions of types of emergence exists. See Emmeche et al. (Emmeche et al. 1997), Chroust (Chroust 2003) and O’Connor and Wong (O’Connor and Wong 2006) for specific details of some of the variants. Page (Page 2009) describes three types of emergence: "simple", "weak", and "strong".
  
<blockquote>''We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value'' </blockquote>
According to Page, '''simple emergence''' is generated by the combination of element properties and relationships and occurs in non-complex or “ordered” systems (see [[Complexity]]) (2009). To achieve the emergent property of “controlled flight” we cannot consider only the wings, or the control system, or the propulsion system. All three must be considered, as well as the way these three are interconnected with each other and with all the other parts of the aircraft. Page suggests that simple emergence is the only type of emergence that can be predicted. This view of emergence is also referred to as {{Term|Synergy (glossary)|synergy}} (Hitchins 2009).
<blockquote>
 
* '''''Individuals and interactions''' over processes and tools;''
 
* '''''Working software''' over comprehensive documentation;''
 
* '''''Customer collaboration''' over contract negotiation; and''
 
* '''''Responding to change''' over following a plan.''
 
</blockquote>
 
<blockquote>''That is, while there is value in the items on the right, we value the items on the left more.''  (Agile Alliance 2001)</blockquote>
 
  
Lean processes are often associated with agile methods, although they are more scalable and applicable to high-assurance systems. Below, some specific agile methods are presented, and the evolution and content of lean methods is discussed. Please see "Primary References", "Additional References", and the [[Lean Engineering]] article for more detail on specific agile and lean processes.
Page describes '''weak emergence''' as expected emergence which is desired (or at least allowed for) in the system {{Term|Structure (glossary)|structure}} (2009). However, since weak emergence is a product of a complex system, the actual level of emergence cannot be predicted just from knowledge of the characteristics of the individual system {{Term|Component (glossary)|components}}.
  
===Scrum===
The term '''strong emergence''' is used to describe unexpected emergence; that is, emergence not observed until the system is simulated or tested or, more alarmingly, until the system encounters in operation a situation that was not anticipated during design and development.  
Figure 10 shows an example of Scrum as an agile process flow. As with most other agile methods, Scrum uses the evolutionary sequential process shown in Table 1 (above) and described in the [[System Life Cycle Process Drivers and Choices#Fixed-Requirements and Evolutionary Development Processes|Fixed-Requirements and Evolutionary Development Processes]] section, in which system capabilities are developed in short periods, usually around 30 days. The project then re-prioritizes its backlog of desired features and determines how many features the team (usually 10 people or fewer) can develop in the next 30 days.
 
  
Figure 10 also shows that once the features to be developed for the current Scrum have been expanded (usually in the form of informal stories) and allocated to the team members, the team establishes a daily rhythm of starting with a short meeting at which each team member presents a roughly one-minute summary describing progress since the last Scrum meeting, potential obstacles, and plans for the upcoming day.  
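A minimal Python sketch of the re-prioritization step described above, in which stories are pulled from the prioritized backlog until the team's estimated capacity for the next roughly 30-day period is reached. The story names, size estimates, and capacity figure are hypothetical.

<syntaxhighlight lang="python">
def select_sprint_backlog(product_backlog, capacity_points):
    """Pull stories from a prioritized backlog until capacity is reached.

    product_backlog: (story, size estimate) pairs, highest priority first.
    capacity_points: how much work the team expects to finish this period.
    """
    selected, remaining = [], capacity_points
    for story, estimate in product_backlog:
        if estimate <= remaining:
            selected.append(story)
            remaining -= estimate
    return selected

# Hypothetical prioritized backlog with relative size estimates.
backlog = [("user login", 5), ("data import", 8), ("search", 13),
           ("report export", 8), ("audit trail", 3)]
print(select_sprint_backlog(backlog, capacity_points=20))
</syntaxhighlight>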
Strong emergence may be evident in failures or shutdowns. For example, the US-Canada Blackout of 2003 as described by the US-Canada Power System Outage Task Force (US-Canada Power Task Force 2004) was a case of cascading shutdown that resulted from the {{Term|Design (glossary)|design}} of the system. Even though there was no equipment failure, the shutdown was systemic. As Hitchins points out, this example shows that emergent properties are not always beneficial (Hitchins 2007, 15).  
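The Python sketch below is a toy illustration of cascading shutdown: each line follows its local protective rule (trip when overloaded) and redistributes its load to its neighbors, yet the interaction of those local rules produces a system-wide shutdown. The network topology, capacities, and loads are invented for the example and are not a model of the 2003 event.

<syntaxhighlight lang="python">
def simulate_cascade(capacities, loads, links):
    """Toy cascading-shutdown model.

    A line trips when its load exceeds its capacity; its load is then
    shared equally among the neighboring lines still in service.
    The system-wide shutdown emerges from these purely local rules.
    """
    tripped = set()
    changed = True
    while changed:
        changed = False
        for line, capacity in capacities.items():
            if line not in tripped and loads[line] > capacity:
                tripped.add(line)
                neighbors = [n for n in links[line] if n not in tripped]
                for n in neighbors:
                    loads[n] += loads[line] / len(neighbors)
                loads[line] = 0.0
                changed = True
    return tripped

# Invented three-line network: every line operates within limits except "A".
capacities = {"A": 100.0, "B": 120.0, "C": 120.0}
loads = {"A": 110.0, "B": 80.0, "C": 90.0}
links = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
print("Lines tripped by the cascade:", sorted(simulate_cascade(capacities, loads, links)))
</syntaxhighlight>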
  
[[File:Tale_of_Two_Implementations_Schwaber.jpg|thumb|center|600px|'''Figure 10. Example Agile Process Flow: Scrum (Boehm and Turner 2004).''' Reprinted with permission of Ken Schwaber. All other rights are reserved by the copyright owner.]]
Other authors make a different distinction between the ideas of strong, or unexpected, emergence and unpredictable emergence:
  
====Architected Agile Methods====
*Firstly, there are the unexpected properties that could have been predicted but were not considered in a system's development: "Properties which are unexpected by the observer because of his incomplete data set, with regard to the phenomenon at hand" (Francois, C. 2004, 737). According to Jackson et al. (Jackson et al. 2010), a desired level of emergence is usually achieved by iteration. This may occur as a result of evolutionary {{Term|Process (glossary)|processes}}, in which element properties and combinations are "selected for" depending on how well they contribute to a system's effectiveness against {{Term|Environment (glossary)|environmental}} pressures, or by iteration of design parameters through {{Term|Simulation (glossary)|simulation}} or build/test cycles. Taking this view, the specific values of weak emergence can be refined, and examples of strong emergence can be considered in subsequent iterations so long as they are amenable to analysis.
Over the last decade, several organizations have been able to scale up agile methods by using two layers of ten-person Scrum teams. This involves, among other things, having each Scrum team’s daily meeting followed up by a daily meeting of the Scrum team leaders discussing up-front investments in evolving system architecture (Boehm et al. 2010). Figure 11 shows an example of the Architected Agile approach.  
 
  
[[File:Example_of_Architected_Agile_Process_Replacement_070912.png|thumb|center|650px|'''Figure 11. Example of Architected Agile Process (Boehm 2009).''' Reprinted with permission of Barry Boehm on behalf of USC-CSSE. All other rights are reserved by the copyright owner.]]
*Secondly, there are unexpected properties which cannot be predicted from the properties of the system’s components: "Properties which are, in and of themselves, not derivable a priori from the behavior of the parts of the system" (Francois, C. 2004, 737). This view of emergence is a familiar one in the social and natural sciences, but more controversial in {{Term|Engineering (glossary)|engineering}}. We should distinguish between theoretical and practical unpredictability (Chroust 2002). The weather is theoretically predictable, but beyond a certain limited accuracy forecasting it is practically impossible due to its {{Term|Chaos (glossary)|chaotic}} nature. The emergence of consciousness in human beings cannot be deduced from the physiological properties of the brain. For many, this genuinely unpredictable type of complexity has limited value for engineering. (See '''Practical Considerations''' below.)
  
===Agile Practices and Principles===
A type of system particularly subject to strong emergence is the {{Term|System of Systems (SoS) (glossary)}}. The reason for this is that the SoS, by definition, is composed of different systems that were designed to operate independently. When these systems are operated together, the interaction among the parts of the system is likely to result in unexpected emergence. Chaotic or truly unpredictable emergence is likely for this class of systems.
As seen with the Scrum and architected agile methods, "generally-shared" principles are not necessarily "uniformly followed". However, there are some general practices and principles shared by most agile methods:
 
  
*The project team understands, respects, works, and behaves within a defined SE process;
==Emergent Properties==
*The project is executed as fast as possible, with minimum downtime or staff diversion during the project, and the critical path is managed;
 
*All key players are physically or electronically collocated, and "notebooks" are considered team property available to all;
 
*Baseline management and change control are achieved by formal, oral agreements based on “make a promise—keep a promise” discipline. Participants hold each other accountable;
 
*Opportunity exploration and risk reduction are accomplished by expert consultation and rapid model verification coupled with close customer collaboration;
 
*Software development is done in a rapid development environment while hardware is developed in a multi-disciplined model shop; and
 
*A culture of constructive confrontation pervades the project organization. The team takes ownership for success; it is never “someone else’s responsibility.”
 
  
Agile development principles (adapted for SE) are as follows (adapted from ''Principles behind the Agile Manifesto'' (Beedle et al. 2009)):
{{Term|Emergent Property (glossary)|Emergent properties}} can be defined as follows: “A property of a complex system is said to be ‘emergent’ [in the case when], although it arises out of the properties and relations characterizing its simpler constituents, it is neither predictable from, nor reducible to, these lower-level characteristics” (Honderich 1995, 224).  
  
#First, satisfy the customer through early and continuous delivery of valuable software (and other system elements).
All systems can have emergent properties which may or may not be predictable or amenable to {{Term|Model (glossary)|modeling}}, as discussed above. Much of the literature on {{Term|Complexity (glossary)|complexity}} includes emergence as a defining characteristic of complex systems. For example, Boccara (Boccara 2004) states that “The appearance of emergent properties is the single most distinguishing feature of complex systems.” In general, the more ordered a system is, the easier its emergent properties are to predict. The more complex a system is, the more difficult predicting its emergent properties becomes.
#Welcome changing requirements, even late in development; agile processes harness change for the customer’s competitive advantage.
 
#Deliver working software (and other system elements) frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
 
#Business personnel and developers must work together daily throughout the project.
 
#Build projects around motivated individuals; give them the environment, support their needs, and trust them to get the job done.
 
#The most efficient and effective method of conveying information is face-to-face conversation.
 
#Working software (and other system elements) is the primary measure of progress.
 
#Agile processes promote sustainable development; the sponsors, developers, and users should be able to maintain a constant pace indefinitely.
 
#Continuous attention to technical excellence and good design enhances agility.
 
#Simplicity—the art of maximizing the amount of work not done—is essential.
 
#The best architectures, requirements, and designs emerge from self-organizing teams.
 
  
A team should reflect on how to become more effective at regular intervals and then tune and adjust its behavior accordingly. This self-reflection is a critical aspect for projects that implement agile processes.
Some practitioners use the term “emergence” only when referring to “strong emergence”. These practitioners refer to the other two forms of emergent behavior as synergy or “system level behavior” (Chroust 2002). Taking this view, we would reserve the term "emergent property" for unexpected properties, which can be modeled or refined through iterations of the system's development.
  
===Lean Systems Engineering and Development===
Unforeseen emergence causes nasty shocks. Many believe that the main job of the {{Term|Systems Approach (glossary)|systems approach}} is to prevent undesired emergence in order to minimize the {{Term|Risk (glossary)|risk}} of unexpected and potentially undesirable outcomes. This review of emergent properties is often specifically associated with identifying and avoiding system failures (Hitchins 2007).
  
====Origins====
Good SE isn't just focused on avoiding system failure, however. It also involves maximizing {{Term|Opportunity (glossary)|opportunity}} by understanding and exploiting emergence in {{Term|Engineered System (glossary)|engineered systems}} to create the required system level characteristics from synergistic interactions between the {{Term|Component (glossary)|components}}, not just from the components themselves (Sillitto 2010).
As the manufacturing of consumer products such as automobiles became more diversified, traditional pre-planned mass-production approaches had increasing problems with quality and adaptability.  Lean manufacturing systems such as the Toyota Production System (TPS) (Ohno 1988) were much better suited to accommodate diversity, to improve quality, and to support just-in-time manufacturing that could rapidly adapt to changing demand patterns without having to carry large, expensive inventories.
 
  
Much of this transformation was stimulated by the work of W. Edwards Deming, whose Total Quality Management (TQM) approach shifted responsibility for quality and productivity from planners and inspectors to the production workers who were closer to the real processes (Deming 1982). Deming's approach involved everyone in the manufacturing organization in seeking continuous process improvement, or "Kaizen".
One important group of emergent properties includes properties such as {{Term|Agility (glossary)|agility}} and {{Term|Resilience (glossary)|resilience}}. These are critical system properties that are not meaningful except at the whole system level.
  
Some of the TQM techniques, such as statistical process control and repeatability, were more suited to repetitive manufacturing processes than to knowledge work such as systems engineering (SE) and software engineering (SwE).  Others, such as early error elimination, waste elimination, workflow stabilization, and Kaizen, were equally applicable to knowledge work.  Led by Watts Humphrey, TQM became the focus for the Software Capability Maturity Model (Humphrey 1987; Paulk et al. 1994) and the CMM-Integrated or CMMI, which extended its scope to include systems engineering (Chrissis et al. 2003).  One significant change was the redefinition of Maturity Level 2 from "Repeatable" to "Managed".
==Practical Considerations==
  
The Massachusetts Institute of Technology (MIT) conducted studies of the TPS, which produced a similar approach that was called the "Lean Production System" (Krafcik 1988; Womack et al. 1990).   Subsequent development of "lean thinking" and related work at MIT led to the Air Force-sponsored Lean Aerospace Initiative (now called the Lean Advancement Initiative), which applied lean thinking to SE (Murman 2003, Womack-Jones 2003).  Concurrently, lean ideas were used to strengthen the scalability and dependability aspects of agile methods for software (Poppendieck 2003; Larman-Vodde 2009). The Kanban flow-oriented approach has been successfully applied to software development (Anderson 2010).
As mentioned above, one way to manage emergent properties is through iteration. The need to iterate the design of an engineered system to achieve desired emergence results in a {{Term|Design (glossary)|design}} {{Term|Process (glossary)|process}} that is lengthier than that needed to design an ordered system. Creating an engineered system capable of such iteration may also require a more configurable or modular solution. The result is that complex systems may be more costly and time-consuming to develop than ordered ones, and the cost and time to develop them are inherently less predictable.
  
====Principles====
Sillitto (2010) observes that “engineering design domains that exploit emergence have good mathematical models of the domain, and rigorously control variability of components and subsystems, and of process, in both design and operation”. The iterations discussed above can be accelerated by using simulation and modeling, so that not all the iterations need to involve building real systems and operating them in the real environment.
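A minimal Python sketch of such a simulate-and-iterate loop: a single design parameter is adjusted against a stand-in simulation until an emergent whole-system measure (a notional lap time) meets a target. The simulation model, parameter, step size, and target are invented for illustration.

<syntaxhighlight lang="python">
def simulated_lap_time(front_mass_fraction):
    """Stand-in simulation: a notional lap time (s) as a function of one
    design parameter, the fraction of vehicle mass on the front axle."""
    return 90.0 + 40.0 * (front_mass_fraction - 0.45) ** 2

def iterate_design(target_lap_time, parameter=0.30, step=0.01, max_iterations=100):
    """Adjust the parameter until the simulated emergent measure meets the target."""
    for _ in range(max_iterations):
        if simulated_lap_time(parameter) <= target_lap_time:
            return parameter
        parameter += step
    return None  # target not reached within the explored range

result = iterate_design(target_lap_time=90.2)
print(f"Front mass fraction meeting the lap-time target: {result:.2f}")
</syntaxhighlight>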
Each of these efforts has developed a similar but different set of Lean principles. For systems engineering, the current best source is ''Lean for Systems Engineering'', the product of several years’ work by the INCOSE Lean SE working group (Oppenheim 2011).  It is organized into six principles, each of which is elaborated into a set of lean enabler and sub-enabler patterns for satisfying the principle:
 
  
#'''Value.''' Guide the project by determining the value propositions of the customers and other key stakeholders.  Keep them involved and manage changes in their value propositions.
The idea of domain models is explored further by Hybertson in the context of general models or {{Term|Pattern (glossary)|patterns}} learned over time and captured in a model space (Hybertson 2009). Hybertson states that knowing what emergence will appear from a given design, including side effects, requires hindsight. For a new type of problem that has not been solved, or a new type of system that has not been built, it is virtually impossible to predict emergent behavior of the solution or system. Some hindsight, or at least some insight, can be obtained by modeling and iterating a specific system design; however, iterating the design within the development of one system yields only limited hindsight and often does not give a full sense of emergence and side effects.
#'''Map the Value Stream (Plan the Program).'''  This includes thorough requirements specification, the concurrent exploration of trade spaces among the value propositions, COTS evaluation, and technology maturity assessment, resulting in a full project plan and set of requirements.
 
#'''Flow.'''  Focus on the project’s critical path activities to avoid expensive work stoppages, including coordination with external suppliers.
 
#'''Pull.'''  Pull the next tasks to be done based on prioritized needs and dependencies.  If a need for the task can’t be found, reject it as waste.
 
#'''Perfection.'''  Apply continuous process improvement to approach perfection.  Drive defects out early to get the system right the first time, vs. fixing them during inspection and test.  Find and fix root causes rather than symptoms.
 
#'''Respect for People.'''  Flow down responsibility, authority, and accountability to all personnel.  Nurture a learning environment.  Treat people as the organization’s most valued assets. (Oppenheim 2011)
 
  
These lean SE principles are highly similar to the four underlying incremental commitment spiral model principles.
True hindsight and understanding comes from building multiple systems of the same type and deploying them, then observing their emergent behavior in operation and the side effects of placing them in their environments. If those observations are done systematically, and the emergence and side effects are distilled and captured in relation to the design of the systems — including the variations in those designs — and made available to the community, then we are in a position to predict and exploit the emergence.  
  
*'''Principle 1: Stakeholder value-based system definition and evolution''', addresses the lean SE principles of value, value stream mapping, and respect for people (developers are success-critical stakeholders in the ICSM).
Two factors are discovered in this type of testing environment: what works (that is, what emergent behavior and side effects are desirable); and what does not work (that is, what emergent behavior and side effects are undesirable). What works affirms the design. What does not work calls for corrections in the design. This is why multiple systems, especially complex systems, must be built and deployed over time and in different environments: to learn and understand the relations among the design, emergent behavior, side effects, and environment.
*'''Principle 2: Incremental commitment and accountability''', partly addresses the pull principle, and also addresses respect for people (who are accountable for their commitments).  
 
*'''Principle 3: Concurrent system and software definition and development''', partly addresses both value stream mapping and flow.
 
*'''Principle 4: Evidence and risk-based decision making''', uses evidence of achievability as its measure of success. Overall, the ICSM principles are somewhat light on continuous process improvement, and the lean SE principles are somewhat insensitive to requirements emergence in advocating a full pre-specified project plan and set of requirements.
 
  
See [[Lean Engineering]] for more information.
These two types of captured learning correspond respectively to patterns and “{{Term|Antipattern (glossary)|antipatterns}}”, or patterns of failure, both of which are discussed in a broader context in the [[Principles of Systems Thinking]] and [[Patterns of Systems Thinking]] topics.
  
The use of iterations to refine the values of emergent properties, either across the life of a single system or through the development of patterns encapsulating knowledge gained from multiple developments, applies most easily to the discussion of strong emergence above. In this sense, those properties which can be observed but cannot be related to design choices are not relevant to a systems approach. However, they can have value when dealing with a combination of engineering and managed problems which occur in system of systems contexts (Sillitto 2010). (See [[Systems Approach Applied to Engineered Systems]].)
  
==References==
 
===Works Cited===
 
Abbott, R. 2006. "Emergence Explained: Getting Epiphenomena to Do Real Work". ''Complexity.'' 12(1) (September-October): 13-26.
  
Agile Alliance. 2001. “Manifesto for Agile Software Development.” http://agilemanifesto.org/.
Bedau, M.A. and P. Humphreys (eds.). 2008. ''Emergence: Contemporary Readings in Philosophy and Science.'' Cambridge, MA, USA: The MIT Press.
 
 
Anderson, D. 2010. ''Kanban'', Sequim, WA: Blue Hole Press.
 
 
 
Boehm, B. 1996. "Anchoring the Software Process." IEEE ''Software'' 13(4): 73-82.
 
 
 
Boehm, B. and J. Lane. 2007. “Using the Incremental Commitment Model to Integrate System Acquisition, Systems Engineering, and Software Engineering.” ''CrossTalk.'' 20(10) (October 2007): 4-9.
 
 
 
Boehm, B., J. Lane, S. Koolmanjwong, and R. Turner. 2010. “Architected Agile Solutions for Software-Reliant Systems,” in Dingsoyr, T., T. Dyba., and N. Moe (eds.), ''Agile Software Development: Current Research and Future Directions.'' New York, NY, USA: Springer.
 
 
 
Boehm, B. and R. Turner. 2004. ''Balancing Agility and Discipline.''  New York, NY, USA: Addison-Wesley.
 
 
 
Castellano, D.R. 2004. “Top Five Quality Software Projects.” ''CrossTalk.'' 17(7) (July 2004): 4-19. Available at: http://www.crosstalkonline.org/storage/issue-archives/2004/200407/200407-0-Issue.pdf.
 
  
Chrissis, M., M. Konrad, and S. Shrum. 2003. ''CMMI: Guidelines for Process Integration and Product Improvement.'' New York, NY, USA: Addison Wesley.
Boccara, N. 2004. ''Modeling Complex Systems.'' New York: Springer-Verlag.
  
Deming, W.E. 1982. ''Out of the Crisis.'' Cambridge, MA, USA: MIT.
Checkland, P. 1999. ''Systems Thinking, Systems Practice.'' New York, NY, USA: John Wiley & Sons.  
  
Fairley, R. 2009. ''Managing and Leading Software Projects.'' New York, NY, USA: John Wiley & Sons.
Chroust, G. 2002. "Emergent Properties in Software Systems." Proceedings of the 10th Interdisciplinary Information Management Talks, C. Hofer and G. Chroust (eds.). Linz, Austria: Verlag Trauner. pp. 277-289.
  
Forsberg, K. 1995. "'If I Could Do That, Then I Could…' System Engineering in a Research and Development Environment." Proceedings of the Fifth International Council on Systems Engineering (INCOSE) International Symposium, 22-26 July 1995, St. Louis, MO, USA.
Chroust, G., C. Hofer, and C. Hoyer (eds.). 2005. "The Concept of Emergence in Systems Engineering." The 12th Fuschl Conversation, April 18-23, 2004, Institute for Systems Engineering and Automation, Johannes Kepler University Linz. pp. 49-60.
  
Forsberg, K., H. Mooz, and H. Cotterman. 2005. ''Visualizing Project Management,'' 3rd ed. New York, NY, USA: John Wiley & Sons.
Emmeche, C., S. Koppe, and F. Stjernfelt. 1997. "Explaining Emergence: Towards an Ontology of Levels." ''Journal for General Philosophy of Science.'' 28: 83-119. Accessed December 3, 2014. Available at: http://www.nbi.dk/~emmeche/coPubl/97e.EKS/emerg.html.
  
Humphrey, W., 1987. “Characterizing the Software Process: A Maturity Framework.” Pittsburgh, PA, USA: CMU Software Engineering Institute. CMU/SEI-87-TR-11.
Francois, C. 2004. ''International Encyclopedia of Systems and Cybernetics'', 2nd edition, 2 volumes. München, Germany: K.G. Saur.
  
Jarzombek, J. 2003. “Top Five Quality Software Projects.” ''CrossTalk.'' 16(7) (July 2003): 4-19. Available at: http://www.crosstalkonline.org/storage/issue-archives/2003/200307/200307-0-Issue.pdf.
Hitchins, D. 2007. ''Systems Engineering: A 21st Century Systems Methodology''. Hoboken, NJ, USA: John Wiley & Sons.  
 
Krafcik, J. 1988. "Triumph of the lean production system". ''Sloan Management Review.'' 30(1): 41–52.
 
  
Kruchten, P. 1999. ''The Rational Unified Process''. New York, NY, USA: Addison Wesley.
Honderich, T. 1995. ''The Oxford Companion to Philosophy''. New York, NY, USA: Oxford University Press.
 
Larman, C. and B. Vodde. 2009. ''Scaling Lean and Agile Development.'' New York, NY, USA: Addison Wesley.
 
  
Maranzano, J.F., S.A. Rozsypal, G.H. Zimmerman, G.W. Warnken, P.E. Wirth, D.M. Weiss. 2005. “Architecture Reviews: Practice and Experience.” IEEE ''Software.'' 22(2): 34-43.
Hybertson, D. 2009. ''Model-Oriented Systems Engineering Science: A Unifying Framework for Traditional and Complex Systems''. Boca Raton, FL, USA: Auerbach/CRC Press.
  
Murman, E. 2003. ''Lean Systems Engineering I, II, Lecture Notes'', MIT Course 16.885J, Fall. Cambridge, MA, USA: MIT.
Jackson, S., D. Hitchins, and H. Eisner. 2010. "What is the Systems Approach?" INCOSE ''Insight.'' 13(1) (April 2010): 41-43.  
  
Oppenheim, B. 2011. ''Lean for Systems Engineering.'' Hoboken, NJ: Wiley.  
O’Connor, T. and H. Wong. 2006. "Emergent Properties." ''Stanford Encyclopedia of Philosophy''. Accessed December 3, 2014. Available at: http://plato.stanford.edu/entries/properties-emergent/.
  
Paulk, M., C. Weber, B. Curtis, and M. Chrissis. 1994. ''The Capability Maturity Model: Guidelines for Improving the Software Process.'' Reading, MA, USA: Addison Wesley.  
Page, S.E. 2009. ''Understanding Complexity.'' The Great Courses. Chantilly, VA, USA: The Teaching Company.  
  
Pew, R. and A. Mavor (eds.). 2007. ''Human-System Integration in The System Development Process: A New Look''. Washington, DC, USA: The National Academies Press.
Ryan, A. 2007. "Emergence is Coupled to Scope, Not Level." ''Complexity.'' 13(2) (November-December).  
 
Poppendieck, M. and T. Poppendieck. 2003. ''Lean Software Development: An Agile Toolkit for Software Development Managers.'' New York, NY, USA: Addison Wesley.  
 
  
Spruill, N. 2002. “Top Five Quality Software Projects.” ''CrossTalk.'' 15(1) (January 2002): 4-19. Available at: http://www.crosstalkonline.org/storage/issue-archives/2002/200201/200201-0-Issue.pdf.
Sillitto, H.G. 2010. "Design Principles for Ultra-Large-Scale Systems". Proceedings of the 20th Annual International Council on Systems Engineering (INCOSE) International Symposium, July 2010, Chicago, IL, USA, reprinted in “The Singapore Engineer”, April 2011.
 
Stauder, T. 2005. “Top Five Department of Defense Program Awards.” ''CrossTalk.'' 18(9) (September 2005): 4-13. Available at: http://www.crosstalkonline.org/storage/issue-archives/2005/200509/200509-0-Issue.pdf.
 
 
Womack, J., D. Jones, and D. Roos. 1990. ''The Machine That Changed the World: The Story of Lean Production.'' New York, NY, USA: Rawson Associates.
 
  
Womack, J. and D. Jones. 2003. ''Lean Thinking''. New York, NY, USA: The Free Press.
US-Canada Power System Outage Task Force. 2004. ''Final Report on the August 14, 2003 Blackout in the United States and Canada: Causes and Recommendations''. April 2004. Washington, DC, USA and Ottawa, Canada. Accessed December 3, 2014. Available at: http://energy.gov/oe/downloads/blackout-2003-final-report-august-14-2003-blackout-united-states-and-canada-causes-and
  
 
===Primary References===
 
Beedle, M., et al. 2009. "[[The Agile Manifesto: Principles behind the Agile Manifesto]]". in ''The Agile Manifesto'' [database online]. Accessed 2010. Available at: www.agilemanifesto.org/principles.html.
 
 
Boehm, B. and R. Turner. 2004. ''[[Balancing Agility and Discipline]].'' New York, NY, USA: Addison-Wesley.
 
 
Fairley, R. 2009. ''[[Managing and Leading Software Projects]].'' New York, NY, USA: J. Wiley & Sons.
 
 
Forsberg, K., H. Mooz, and H. Cotterman. 2005. ''[[Visualizing Project Management]],'' 3rd ed.  New York, NY, USA: J. Wiley & Sons.
 
 
INCOSE. 2012. ''[[INCOSE Systems Engineering Handbook|Systems Engineering Handbook]]: A Guide for System Life Cycle Processes and Activities''. Version 3.2.2. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2.2.
 
  
Lawson, H. 2010. ''[[A Journey Through the Systems Landscape]].'' Kings College, UK: College Publications.
Emmeche, C., S. Koppe, and F. Stjernfelt. 1997. "[[Explaining Emergence]]: Towards an Ontology of Levels." ''Journal for General Philosophy of Science'', 28: 83-119 (1997). http://www.nbi.dk/~emmeche/coPubl/97e.EKS/emerg.html.  
  
Pew, R., and A. Mavor (eds.). 2007. ''[[Human-System Integration in the System Development Process]]: A New Look.'' Washington, DC, USA: The National Academies Press.
Hitchins, D. 2007. ''[[Systems Engineering: A 21st Century Systems Methodology]]''. Hoboken, NJ, USA: John Wiley & Sons.
  
Royce, W.E. 1998. ''[[Software Project Management]]: A Unified Framework''. New York, NY, USA: Addison Wesley.
Page, S. E. 2009. ''[[Understanding Complexity]]''. The Great Courses. Chantilly, VA, USA: The Teaching Company.
  
 
===Additional References===
Anderson, D. 2010. ''Kanban''. Sequim, WA, USA: Blue Hole Press.
 
 
Baldwin, C. and K. Clark. 2000. ''Design Rules: The Power of Modularity.'' Cambridge, MA, USA: MIT Press.
 
 
Beck, K. 1999. ''Extreme Programming Explained.'' New York, NY, USA: Addison Wesley.
 
 
Beedle, M., et al. 2009. "The Agile Manifesto: Principles behind the Agile Manifesto" in The Agile Manifesto [database online]. Accessed 2010. Available at: www.agilemanifesto.org/principles.html
 
 
Biffl, S., A. Aurum, B. Boehm, H. Erdogmus, and P. Gruenbacher (eds.). 2005. ''Value-Based Software Engineering''. New York, NY, USA: Springer.
 
 
Boehm, B. 1988. “A Spiral Model of Software Development.” IEEE ''Computer.'' 21(5): 61-72.
 
 
Boehm, B. 2006. “Some Future Trends and Implications for Systems and Software Engineering Processes.” ''Systems Engineering.'' 9(1): 1-19.
 
 
Boehm, B., A. Egyed, J. Kwan, D. Port, A. Shah, and R. Madachy. 1998. “Using the WinWin Spiral Model: A Case Study.” IEEE ''Computer.'' 31(7): 33-44.
 
 
Boehm, B., J. Lane, S. Koolmanojwong, and R. Turner. 2013 (in press). ''Embracing the Spiral Model: Creating Successful Systems with the Incremental Commitment Spiral Model.'' New York, NY, USA: Addison Wesley.
 
 
Castellano, D.R. 2004. “Top Five Quality Software Projects.” ''CrossTalk.'' 17(7) (July 2004): 4-19. Available at: http://www.crosstalkonline.org/storage/issue-archives/2004/200407/200407-0-Issue.pdf.
 
 
Checkland, P. 1981. ''Systems Thinking, Systems Practice''.  New York, NY, USA: Wiley.
 
 
Crosson, S. and B. Boehm. 2009. “Adjusting Software Life Cycle Anchorpoints: Lessons Learned in a System of Systems Context.” Proceedings of the Systems and Software Technology Conference, 20-23 April 2009, Salt Lake City, UT, USA.
 
 
Dingsoyr, T., T. Dyba, and N. Moe (eds.). 2010. "Agile Software Development: Current Research and Future Directions.”  Chapter in B. Boehm, J. Lane, S. Koolmanojwong, and R. Turner, ''Architected Agile Solutions for Software-Reliant Systems.'' New York, NY, USA: Springer.
 
 
Dorner, D. 1996. ''The Logic of Failure''.  New York, NY, USA: Basic Books.
 
 
Faisandier, A. 2012. ''Systems Architecture and Design''. Belberaud, France: Sinergy'Com.
 
 
Forsberg, K. 1995. "'If I Could Do That, Then I Could…' System Engineering in a Research and Development Environment.” Proceedings of the Fifth Annual International Council on Systems Engineering (INCOSE) International Symposium. 22-26 July 1995. St. Louis, MO, USA.
 
 
Forsberg, K. 2010. “Projects Don’t Begin With Requirements.” Proceedings of the IEEE Systems Conference. 5-8 April 2010. San Diego, CA, USA.
 
 
Gilb, T. 2005.  ''Competitive Engineering''.  Maryland Heights, MO, USA: Elsevier Butterworth Heinemann.
 
 
Goldratt, E. 1984. ''The Goal''.  Great Barrington, MA, USA: North River Press.
 
 
Hitchins, D.  2007. ''Systems Engineering: A 21st Century Systems Methodology.''  Hoboken, NJ, USA: John Wiley & Sons.
 
 
Holland, J. 1998. ''Emergence''. New York, NY, USA: Perseus Books.
 
 
ISO/IEC. 2010. ''Systems and Software Engineering, Part 1: Guide for Life Cycle Management.''  Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC 24748-1:2010.
 
 
ISO/IEC/IEEE. 2015. ''Systems and Software Engineering -- System Life Cycle Processes.'' Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC/IEEE 15288:2015.
 
 
ISO/IEC. 2003. ''Systems Engineering — A Guide for The Application of ISO/IEC 15288 System Life Cycle Processes.'' Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC 19760:2003 (E).
 
 
Jarzombek, J. 2003. “Top Five Quality Software Projects.” ''CrossTalk.'' 16(7) (July 2003): 4-19. Available at: http://www.crosstalkonline.org/storage/issue-archives/2003/200307/200307-0-Issue.pdf.
 
  
Kruchten, P. 1999. ''The Rational Unified Process.'' New York, NY, USA: Addison Wesley.
Sheard, S.A. and A. Mostashari. 2008. "Principles of Complex Systems for Systems Engineering." ''Systems Engineering''. 12: 295-311.
  
Landis, T. R. 2010. ''Lockheed Blackbird Family (A-12, YF-12, D-21/M-21 & SR-71).''  North Branch, MN, USA: Specialty Press.
 
 
Madachy, R. 2008. ''Software Process Dynamics''.  New York, NY, USA: Wiley.
 
 
Maranzano, J., et al. 2005. “Architecture Reviews: Practice and Experience.” IEEE ''Software.'' 22(2): 34-43.
 
 
National Research Council of the National Academies (USA). 2008. ''Pre-Milestone A and Early-Phase Systems Engineering''. Washington, DC, USA: The National Academies Press.
 
 
Osterweil, L. 1987. “Software Processes are Software Too.” Proceedings of the 9th International Conference on Software Engineering (ICSE 9). Monterey, CA, USA.
 
 
Poppendieck, M. and T. Poppendieck. 2003. ''Lean Software Development: An Agile Toolkit.''  New York, NY, USA: Addison Wesley.
 
 
Rechtin, E. 1991. ''System Architecting: Creating and Building Complex Systems.'' Upper Saddle River, NJ, USA: Prentice-Hall.
 
 
Rechtin, E., and M. Maier. 1997.  ''The Art of System Architecting''. Boca Raton, FL, USA: CRC Press.
 
 
Schwaber, K. and M. Beedle. 2002. ''Agile Software Development with Scrum''. Upper Saddle River, NJ, USA: Prentice Hall.
 
 
Spruill, N. 2002. “Top Five Quality Software Projects.” ''CrossTalk.'' 15(1) (January 2002): 4-19. Available at: http://www.crosstalkonline.org/storage/issue-archives/2002/200201/200201-0-Issue.pdf.
 
 
Stauder, T. 2005. “Top Five Department of Defense Program Awards.” ''CrossTalk.'' 18(9) (September 2005): 4-13. Available at http://www.crosstalkonline.org/storage/issue-archives/2005/200509/200509-0-Issue.pdf.
 
 
Warfield, J. 1976. ''Societal Systems: Planning, Policy, and Complexity''. New York, NY, USA: Wiley.
 
 
Womack, J. and D. Jones. 1996. ''Lean Thinking.'' New York, NY, USA: Simon and Schuster.
 
 
----

There are many varied and occasionally conflicting views on emergence. This topic presents the prevailing views and provides references for others.


by Janet Singer, Duane Hybertson, and Rick Adcock

Overview of Emergence

As defined by Checkland, emergence is “the principle that entities exhibit properties which are meaningful only when attributed to the whole, not to its parts” (Checkland 1999, 314). Emergent system behavior can be viewed as a consequence of the interactions and relationships between system elements rather than the behavior of individual elements. It emerges from a combination of the behavior and properties of the system elements and the system's structure, or allowable interactions between the elements, and may be triggered or influenced by a stimulus from the system's environment.

Emergence is common in nature. The pungent gas ammonia results from the chemical combination of two odorless gases, hydrogen and nitrogen. As individual parts, feathers, beaks, wings, and gullets do not have the ability to overcome gravity. Properly connected in a bird, however, they create the emergent behavior of flight. What we refer to as “self-awareness” results from the combined effect of the interconnected and interacting neurons that make up the brain (Hitchins 2007, 7).

Hitchins also notes that technological systems exhibit emergence. We can observe a number of levels of outcome which arise from interaction between elements in an engineered system context. At a simple level, some system outcomes or attributes have a fairly simple and well-defined mapping to their elements; for example, the center of gravity or top speed of a vehicle results from a combination of element properties and how they are combined. Other behaviors can be associated with these simple outcomes, but their value emerges in complex and less predictable ways across a system. The single-lap performance of a vehicle around a track is related to center of gravity and speed; however, it is also affected by driver skill, external conditions, component wear, and so on. Getting the 'best' performance from a vehicle can only be achieved by a combination of good design and feedback from real laps under race conditions.
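
The contrast between these two ends of the spectrum can be made concrete with a short sketch. The Python fragment below is illustrative only; the component names and values are hypothetical. It computes a vehicle's total mass and center of gravity directly from element properties: a whole-system attribute with a simple, well-defined mapping. No comparably simple formula exists for lap performance, which also depends on interactions with the driver, track, and conditions.

<pre>
# Illustrative sketch (hypothetical component values): a system-level
# attribute with a simple, well-defined mapping from element properties.
# The center of gravity of an assembly follows from the mass and position
# of each element via a known combination rule.

components = {              # name: (mass_kg, x_position_m from front axle)
    "chassis": (300.0, 1.8),
    "engine":  (180.0, 1.2),
    "driver":  (75.0,  1.6),
    "fuel":    (40.0,  2.4),
}

total_mass = sum(mass for mass, _ in components.values())
center_of_gravity = sum(mass * x for mass, x in components.values()) / total_mass

print(f"total mass = {total_mass:.1f} kg")
print(f"center of gravity = {center_of_gravity:.2f} m behind the front axle")
</pre>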

There are also outcomes which are less tangible and which come as a surprise to both system developers and users. How does lap time translate into a winning motor racing team? Why is a sports car more desirable to many than other vehicles with performance that is as good or better?

Emergence can always be observed at the highest level of a system. However, Hitchins (2007, 7) also points out that to the extent that the system's elements can themselves be considered as systems, they also exhibit emergence. Page (2009) refers to emergence as a “macro-level property.” Ryan (2007) contends that emergence is coupled to scope rather than system hierarchical levels. In Ryan's terms, scope has to do with spatial dimensions (how system elements are related to each other) rather than hierarchical levels.

Abbott (2006) does not disagree with the general definition of emergence as discussed above. However, he takes issue with the notion that emergence operates outside the bounds of classical physics. He says that “such higher-level entities…can always be reduced to primitive physical forces.”

Bedau and Humphreys (2008) and Francois (2004) provide comprehensive descriptions of the philosophical and scientific background of emergence.

Types of Emergence

A variety of definitions of types of emergence exists. See Emmeche et al. (1997), Chroust (2003), and O’Connor and Wong (2006) for specific details of some of the variants. Page (2009) describes three types of emergence: "simple", "weak", and "strong".

According to Page (2009), simple emergence is generated by the combination of element properties and relationships and occurs in non-complex or “ordered” systems (see Complexity). To achieve the emergent property of “controlled flight” we cannot consider only the wings, the control system, or the propulsion system. All three must be considered, together with the way they are interconnected with each other and with all the other parts of the aircraft. Page suggests that simple emergence is the only type of emergence that can be predicted. This view of emergence is also referred to as synergy (Hitchins 2009).

Page (2009) describes weak emergence as expected emergence which is desired (or at least allowed for) in the system structure. However, since weak emergence is a product of a complex system, the actual level of emergence cannot be predicted just from knowledge of the characteristics of the individual system components.
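
A minimal sketch of why the system must be run to see weak emergence, using an elementary cellular automaton as a stand-in for a complex system (this example is illustrative and not drawn from the cited sources): each cell follows the same trivial local rule, yet the pattern at the whole-system level only becomes apparent when the system is simulated.

<pre>
# Illustrative sketch of weak emergence: an elementary cellular automaton
# (rule 30). The local update rule is trivial, but the global pattern can
# only be observed by running the system; it cannot be read off from the
# rule applied to a single cell in isolation.

RULE = 30
WIDTH, STEPS = 63, 30

def step(cells):
    """Update every cell from its left/self/right neighbours (wrap-around)."""
    new = []
    for i in range(len(cells)):
        left = cells[i - 1]
        centre = cells[i]
        right = cells[(i + 1) % len(cells)]
        index = (left << 2) | (centre << 1) | right
        new.append((RULE >> index) & 1)
    return new

row = [0] * WIDTH
row[WIDTH // 2] = 1                      # a single live cell in the middle
for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = step(row)
</pre>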

The term strong emergence is used to describe unexpected emergence; that is, emergence not observed until the system is simulated or tested or, more alarmingly, until the system encounters in operation a situation that was not anticipated during design and development.

Strong emergence may be evident in failures or shutdowns. For example, the US-Canada Blackout of 2003, as described by the US-Canada Power System Outage Task Force (2004), was a case of cascading shutdown that resulted from the design of the system. Even though there was no equipment failure, the shutdown was systemic. As Hitchins points out, this example shows that emergent properties are not always beneficial (Hitchins 2007, 15).
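
The cascading character of such a shutdown can be illustrated with a toy load-redistribution model. The sketch below is hypothetical (invented line loads and capacities, equal load sharing) and is not a model of the actual 2003 event; it simply shows how a system-level collapse can emerge even though every individual component behaves exactly as designed.

<pre>
# Toy cascading-failure sketch (hypothetical loads and capacities; not a
# model of the 2003 blackout). Each transmission line operates within its
# rating, but when one line trips its load is shared among the survivors,
# which may overload and trip in turn: a systemic shutdown with no single
# equipment failure behind it.

lines = {     # line: (load_MW, capacity_MW)
    "A": (80.0, 100.0),
    "B": (90.0, 100.0),
    "C": (85.0, 100.0),
    "D": (70.0, 100.0),
}

def cascade(lines, first_failure):
    """Return the set of lines tripped after first_failure propagates."""
    failed = {first_failure}
    while True:
        survivors = [name for name in lines if name not in failed]
        if not survivors:
            break
        shed = sum(lines[name][0] for name in failed)
        new_load = {name: lines[name][0] + shed / len(survivors)
                    for name in survivors}
        newly_failed = {name for name in survivors
                        if new_load[name] > lines[name][1]}
        if not newly_failed:
            break
        failed |= newly_failed
    return failed

print("tripped lines:", sorted(cascade(lines, "B")))
</pre>

In this invented example, tripping line B alone eventually takes down all four lines, even though every line was initially operating within its rating.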

Other authors make a different distinction between the ideas of strong, or unexpected, emergence and unpredictable emergence:

  • Firstly, there are the unexpected properties that could have been predicted but were not considered in a system's development: "Properties which are unexpected by the observer because of his incomplete data set, with regard to the phenomenon at hand" (Francois 2004, 737). According to Jackson et al. (2010), a desired level of emergence is usually achieved by iteration. This may occur as a result of evolutionary processes, in which element properties and combinations are "selected for" depending on how well they contribute to a system's effectiveness against environmental pressures, or by iteration of design parameters through simulation or build/test cycles. Taking this view, the specific values of weak emergence can be refined, and examples of strong emergence can be considered in subsequent iterations so long as they are amenable to analysis.
  • Secondly, there are unexpected properties which cannot be predicted from the properties of the system's components: "Properties which are, in and of themselves, not derivable a priori from the behavior of the parts of the system" (Francois 2004, 737). This view of emergence is a familiar one in the social and natural sciences, but more controversial in engineering. We should distinguish between theoretical and practical unpredictability (Chroust 2002). The weather is theoretically predictable, but beyond a certain limited accuracy forecasting it is practically impossible due to its chaotic nature. The emergence of consciousness in human beings cannot be deduced from the physiological properties of the brain. For many, this genuinely unpredictable type of complexity has limited value for engineering. (See Practical Considerations below.)

A type of system particularly subject to strong emergence is the system of systems (SoS). The reason for this is that an SoS is, by definition, composed of different systems that were designed to operate independently. When these systems are operated together, the interaction among the parts of the system is likely to result in unexpected emergence. Chaotic or truly unpredictable emergence is likely for this class of systems.

Emergent Properties

Emergent properties can be defined as follows: “A property of a complex system is said to be ‘emergent’ [in the case when], although it arises out of the properties and relations characterizing its simpler constituents, it is neither predictable from, nor reducible to, these lower-level characteristics” (Honderich 1995, 224).

All systems can have emergent properties, which may or may not be predictable or amenable to modeling, as discussed above. Much of the literature on complexity includes emergence as a defining characteristic of complex systems. For example, Boccara (2004) states that “The appearance of emergent properties is the single most distinguishing feature of complex systems.” In general, the more ordered a system is, the easier its emergent properties are to predict. The more complex a system is, the more difficult predicting its emergent properties becomes.

Some practitioners use the term “emergence” only when referring to strong emergence. These practitioners refer to the other two forms of emergent behavior as synergy or “system-level behavior” (Chroust 2002). Taking this view, we would reserve the term "emergent property" for unexpected properties, which can be modeled or refined through iterations of the system's development.

Unforeseen emergence can cause unwelcome surprises. Many believe that the main job of the systems approach is to prevent undesired emergence in order to minimize the risk of unexpected and potentially undesirable outcomes. This view of emergent properties is often specifically associated with identifying and avoiding system failures (Hitchins 2007).

Good SE is not just focused on avoiding system failure, however. It also involves maximizing opportunity by understanding and exploiting emergence in engineered systems to create the required system-level characteristics from synergistic interactions between the components, not just from the components themselves (Sillitto 2010).

One important group of emergent properties includes properties such as agility and resilience. These are critical system properties that are not meaningful except at the whole-system level.

Practical Considerations

As mentioned above, one way to manage emergent properties is through iteration. The need to iterate the design of an engineered system to achieve desired emergence results in a design process that is lengthier than the one needed to design an ordered system. Creating an engineered system capable of such iteration may also require a more configurable or modular solution. The result is that complex systems may be more costly and time-consuming to develop than ordered ones, and their cost and time to develop are inherently less predictable.

Sillitto (2010) observes that “engineering design domains that exploit emergence have good mathematical models of the domain, and rigorously control variability of components and subsystems, and of process, in both design and operation”. The iterations discussed above can be accelerated by using simulation and modeling, so that not all the iterations need to involve building real systems and operating them in the real environment.
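
A minimal sketch of this idea follows, assuming a hypothetical surrogate simulation model and a single design parameter (both invented for illustration): instead of building and operating every variant, candidate parameter values are swept through a cheap simulation, and the value that best achieves the desired system-level property is carried into the next build/test cycle.

<pre>
# Minimal sketch of refining an emergent property by iterating a design
# parameter through simulation rather than through physical builds. The
# simulate() function and the damping-ratio parameter are hypothetical
# stand-ins for a real, validated domain model.
import random

def simulate(damping_ratio, trials=200):
    """Cheap surrogate model: estimate mean overshoot for one design choice."""
    rng = random.Random(1)                     # fixed seed for comparable runs
    overshoots = []
    for _ in range(trials):
        disturbance = rng.gauss(1.0, 0.2)      # environmental variability
        overshoots.append(max(0.0, disturbance * (1.0 - damping_ratio) ** 2))
    return sum(overshoots) / trials

candidates = [0.4, 0.5, 0.6, 0.7, 0.8]
best = min(candidates, key=simulate)           # pick the best-performing design
print("carry forward damping ratio:", best,
      "-> mean overshoot:", round(simulate(best), 3))
</pre>

In practice the surrogate would be replaced by a validated domain model of the kind Sillitto describes, and the selected value would still be confirmed through real build/test cycles.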

The idea of domain models is explored further by Hybertson in the context of general models or patterns learned over time and captured in a model space (Hybertson 2009). Hybertson states that knowing what emergence will appear from a given design, including side effects, requires hindsight. For a new type of problem that has not been solved, or a new type of system that has not been built, it is virtually impossible to predict emergent behavior of the solution or system. Some hindsight, or at least some insight, can be obtained by modeling and iterating a specific system design; however, iterating the design within the development of one system yields only limited hindsight and often does not give a full sense of emergence and side effects.

True hindsight and understanding comes from building multiple systems of the same type and deploying them, then observing their emergent behavior in operation and the side effects of placing them in their environments. If those observations are done systematically, and the emergence and side effects are distilled and captured in relation to the design of the systems — including the variations in those designs — and made available to the community, then we are in a position to predict and exploit the emergence.

Two factors are discovered in this type of testing environment: what works (that is, what emergent behavior and side effects are desirable) and what does not work (that is, what emergent behavior and side effects are undesirable). What works affirms the design. What does not work calls for corrections to the design. This is why multiple systems, especially complex systems, must be built and deployed over time and in different environments: to learn and understand the relations among the design, emergent behavior, side effects, and environment.

These two types of captured learning correspond respectively to patterns and “antipatterns”, or patterns of failure, both of which are discussed in a broader context in the Principles of Systems Thinking and Patterns of Systems Thinking topics.

The use of iterations to refine the values of emergent properties, either across the life of a single system or through the development of patterns encapsulating knowledge gained from multiple developments, applies most easily to the discussion of strong emergence above. In this sense, those properties which can be observed but cannot be related to design choices are not relevant to a systems approach. However, they can have value when dealing with the combination of engineering and managed problems which occurs in system of systems contexts (Sillitto 2010). (See Systems Approach Applied to Engineered Systems.)

References

Works Cited

Abbott, R. 2006. "Emergence Explained: Getting Epiphenomena to Do Real Work". Complexity. 12(1) (September-October): 13-26.

Bedau, M.A. and P. Humphreys (eds.). 2008. "Emergence," in Contemporary Readings in Philosophy and Science. Cambridge, MA, USA: The MIT Press.

Boccara, N. 2004. Modeling Complex Systems. New York, NY, USA: Springer-Verlag.

Checkland, P. 1999. Systems Thinking, Systems Practice. New York, NY, USA: John Wiley & Sons.

Chroust, G. 2002. "Emergent Properties in Software Systems." 10th Interdisciplinary Information Management Talks, C. Hofer and G. Chroust (eds.). Linz, Austria: Verlag Trauner, pp. 277-289.

Chroust, G., C. Hofer, and C. Hoyer (eds.). 2005. "The Concept of Emergence in Systems Engineering." The 12th Fuschl Conversation, April 18-23, 2004, Institute for Systems Engineering and Automation, Johannes Kepler University Linz, pp. 49-60.

Emmeche, C., S. Koppe, and F. Stjernfelt. 1997. "Explaining Emergence: Towards an Ontology of Levels." Journal for General Philosophy of Science. 28: 83-119 (1997). Accessed December 3 2014 at Claus Emmeche http://www.nbi.dk/~emmeche/coPubl/97e.EKS/emerg.html.

Francois, C. 2004. International Encyclopedia of Systems and Cybernetics, 2nd edition, 2 volumes. Munich, Germany: K.G. Saur.

Hitchins, D. 2007. Systems Engineering: A 21st Century Systems Methodology. Hoboken, NJ, USA: John Wiley & Sons.

Honderich, T. 1995. The Oxford Companion to Philosophy. New York, NY, USA: Oxford University Press.

Hybertson, D. 2009. Model-Oriented Systems Engineering Science: A Unifying Framework for Traditional and Complex Systems. Boca Raton, FL, USA: Auerbach/CRC Press.

Jackson, S., D. Hitchins, and H. Eisner. 2010. "What is the Systems Approach?" INCOSE Insight. 13(1) (April 2010): 41-43.

O’Connor, T. and H. Wong. 2006. "Emergent Properties". Stanford Encyclopedia of Philosophy. Accessed December 3 2014 at Stanford Encyclopedia of Philosophy http://plato.stanford.edu/entries/properties-emergent/.

Page, S.E. 2009. Understanding Complexity. The Great Courses. Chantilly, VA, USA: The Teaching Company.

Ryan, A. 2007. "Emergence is Coupled to Scope, Not Level." Complexity. 13(2) (November-December).

Sillitto, H.G. 2010. "Design Principles for Ultra-Large-Scale Systems". Proceedings of the 20th Annual International Council on Systems Engineering (INCOSE) International Symposium, July 2010, Chicago, IL, USA, reprinted in “The Singapore Engineer”, April 2011.

US-Canada Power System Outage Task Force. 2004. Final Report on the August 14, 2003 Blackout in the United States and Canada: Causes and Recommendations. April, 2004. Washington-Ottawa. Accessed December 3 2014 at US Department of Energy http://energy.gov/oe/downloads/blackout-2003-final-report-august-14-2003-blackout-united-states-and-canada-causes-and

Primary References

Emmeche, C., S. Koppe, and F. Stjernfelt. 1997. "Explaining Emergence: Towards an Ontology of Levels." Journal for General Philosophy of Science, 28: 83-119 (1997). http://www.nbi.dk/~emmeche/coPubl/97e.EKS/emerg.html.

Hitchins, D. 2007. Systems Engineering: A 21st Century Systems Methodology. Hoboken, NJ, USA: John Wiley & Sons.

Page, S. E. 2009. Understanding Complexity. The Great Courses. Chantilly, VA, USA: The Teaching Company.

Additional References

Sheard, S.A. and A. Mostashari. 2008. "Principles of Complex Systems for Systems Engineering." Systems Engineering. 12: 295-311.


SEBoK v. 2.1, released 31 October 2019