Difference between pages "System Integration" and "Developing Individuals"

From SEBoK
{{Term|Integration (glossary)|System integration}} consists of taking delivery of the implemented {{Term|System Element (glossary)|system elements}} which compose the {{Term|System-of-Interest (glossary)|system-of-interest (SoI)}}, assembling these implemented elements together, and performing the {{Term|Verification and Validation Action (glossary)|verification and validation actions}} (V&V actions) in the course of the assembly. The ultimate goal of system integration is to ensure that the individual system elements function properly as a whole and satisfy the design properties or characteristics of the system. System integration is one part of the realization effort and relates only to developmental items. Integration should not be confused with the assembly of end products on a production line; a production line assembles end products in a different order from that used during integration.

Developing each individual's {{Term|Systems Engineering (glossary)|systems engineering}} (SE) {{Term|Competency (glossary)|competencies}} is a key aspect of [[Enabling Individuals|enabling individuals]]. The goal may be to develop competency in a broad range of SE competencies or in a single aspect of SE, and it is important to know exactly which SE competencies are desired. This article describes strategies to develop SE competencies in individuals.
  
==Definition and Purpose==
==Closing Competency Gaps==
System integration consists of a process that “''iteratively combines implemented system elements to form complete or partial system configurations in order to build a product or service. It is used recursively for successive levels of the system hierarchy''.” (ISO/IEC 15288 2015, 68). The process is extended to any kind of {{Term|Product System (glossary)|product system}}, {{Term|Service System (glossary)|service system}}, and {{Term|Enterprise System (glossary)|enterprise system}}. The purpose of system integration is to prepare the SoI for final validation and transition either for use or for production. Integration consists of progressively assembling aggregates of implemented elements that compose the SoI as architected during design, and to check correctness of static and dynamic aspects of interfaces between the implemented elements.  
Delivering excellent systems that fulfill customer needs is the primary goal of the organization. Developing ''the capability'' to deliver such systems is a secondary goal, and while necessary, is not sufficient. To attain both of these goals, the organization must assess itself and effect a strategy to identify and close competency gaps.
  
The U.S. Defense Acquisition University (DAU) provides the following context for integration: ''The integration process will be used . . . for the incorporation of the final system into its operational environment to ensure that the system is integrated properly into all defined external interfaces. The interface management process is particularly important for the success of the integration process, and iteration between the two processes will occur'' (DAU 2010).
To identify competency gaps, an organization may take two basic steps:
#Listing desired competencies, as discussed in [[Roles and Competencies]]; and
#Assessing the competencies of individual systems engineers, as discussed in [[Assessing Individuals]].
  
The purpose of system integration can be summarized as below:

Models useful for listing competencies include the International Council on Systems Engineering (INCOSE) United Kingdom Advisory Board model (Cowper et al. 2005; INCOSE 2010), the ENG Competency Model (DAU 2013), and the Academy of Program/Project & Engineering Leadership (APPEL 2009) model (Menrad and Lawson 2008).
* Completely assemble the implemented elements to make sure that they are compatible with each other.
 
* Demonstrate that the aggregates of implemented elements perform the expected functions and meet measures of performance/effectiveness.
 
* Detect defects/faults related to design and assembly activities by submitting the aggregates to focused V&V actions.  
 
  
Note: In the systems engineering literature, sometimes the term ''integration'' is used in a larger context than in the present topic. In this larger sense, it concerns the technical effort to simultaneously design and develop the system and the processes for developing the system through concurrent consideration of all life cycle stages, needs, and competences. This approach requires the "integration" of numerous skills, activities, or processes.
Once the organization knows the SE competencies it needs to develop to close the competency gaps it has identified, it may choose from the several methods (Davidz and Martin 2011) outlined in the table below.  
  
==Principles==
<center>'''Table 1. SE Competency Development Framework.''' (SEBoK Original)</center>
<center>
<table border="1">

===Boundary of Integration Activity===
<tr><td><b>Goal</b></td>
Integration can be understood as the whole bottom-up branch of the Vee Model, including the tasks of assembly and the appropriate verification tasks. See Figure 1 below:
<td><b>Objective</b></td>
<td><b>Method</b></td></tr>
[[File:Limits_of_integration_activities.png|thumb|300px|center|'''Figure 1. Limits of Integration Activities.''' (SEBoK Original)]]
 
  
The assembly activity joins together, and physically links, the implemented elements. Each implemented element is individually verified and validated prior to entering integration. Integration then adds the verification activity to the assembly activity, excluding the final validation.
<tr><td rowspan="2" width="25%"><b><center>PRIMARY GOAL = Delivery of excellent systems to fulfill customer needs</center></b></td>
  
The final validation performs operational tests that authorize the transition for use or the transition for production. Remember that system integration only endeavors to obtain pre-production prototypes of the concerned product, service, or enterprise. If the product, service, or enterprise is delivered as a unique exemplar, the final validation activity serves as acceptance for delivery and transfer for use. If the prototype has to be produced in several exemplars, the final validation serves as acceptance to launch their production. The definition of the optimized operations of assembly which will be carried out on a production line relates to the manufacturing process and not to the integration process.
<td>Focus on successful performance outcome</td>
<td>Corporate initiatives</td>
</tr>
<tr>
<td>Focus on performance of project team</td>
<td>Team coaching of project team for performance enhancement</td>
</tr>
  
Integration activity can sometimes reveal issues or anomalies that require modifications of the design of the system. Modifying the design is not part of the integration process; it concerns only the design process. Integration deals only with the assembly of the implemented elements and verification of the system against its properties as designed. During assembly, it is possible to carry out finishing tasks that require the simultaneous use of several implemented elements (e.g., painting the whole after assembly, calibrating a biochemical component, etc.). These tasks must be planned as part of integration; they are not carried out on separate implemented elements and do not include modifications related to design.
<tr>
<td rowspan="19" width="25%"><b><center>SECONDARY GOAL = Competency to deliver excellent systems to fulfill customer needs</center></b></td>
  
===Aggregation of Implemented Elements===
<td rowspan="9">Develop individual competency</td>
Integration is used to systematically assemble a higher-level system from implemented lower-level system elements. Integration often begins with analysis and {{Term|Simulation (glossary)|simulations}} (e.g., various types of prototypes) and progresses through increasingly more realistic systems and system elements until the final product, service, or enterprise is achieved.
<td>Training courses</td>
</tr><tr>
<td>Job rotation</td>
</tr><tr>
<td>Mentoring</td>
</tr><tr>
<td>Hands-on experience</td>
</tr><tr>
<td>Develop a few hand-picked individuals</td>
</tr><tr>
<td>University educational degree program</td>
</tr><tr>
<td>Customized educational program</td>
</tr><tr>
<td>Combination program - education, training, job rotation, mentoring, hands-on experience</td>
</tr><tr>
<td>Course certificate program</td>
  
System integration is based on the notion of an {{Term|Aggregate (glossary)|aggregate}} - a subset of the system made up of several implemented elements (implemented system elements and physical interfaces) on which a set of V&V actions is applied. Each aggregate is characterized by a configuration which specifies the implemented elements to be physically assembled and their configuration status.
</tr><tr>
<td>Ensure individual competency through certification</td>
<td>Certification program</td></tr>
  
To perform V&V actions, a {{Term|Verification and Validation Configuration (glossary)|V&V configuration}} that includes the aggregate plus {{Term|Verification and Validation Tool (glossary)|V&V tools}} is constituted. The V&V tools are enabling products and can be simulators (simulated implemented elements), stubs or caps, activators (launchers, drivers), harness, measuring devices, etc.
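The relationship between an aggregate and its V&V configuration described above can be sketched as simple data structures. This is an illustrative sketch only; all names (`Aggregate`, `VVConfiguration`, the sample elements) are hypothetical and not part of any SEBoK ontology or tool.

```python
from dataclasses import dataclass, field

@dataclass
class Aggregate:
    """A subset of the system: implemented elements plus their internal physical interfaces."""
    name: str
    elements: list        # implemented system elements, by identifier
    interfaces: list      # physical interfaces internal to the aggregate

@dataclass
class VVConfiguration:
    """An aggregate plus the enabling V&V tools needed to exercise it."""
    aggregate: Aggregate
    vv_tools: list = field(default_factory=list)  # simulators, stubs/caps, drivers, test benches, etc.

    def is_ready(self) -> bool:
        # A V&V configuration can be exercised once the aggregate is
        # defined and all required tools have been gathered.
        return bool(self.aggregate.elements) and all(self.vv_tools)

# Hypothetical example: a navigation aggregate verified with a simulator and a harness.
nav = Aggregate("navigation", ["gps_rx", "imu"], [("gps_rx", "imu")])
cfg = VVConfiguration(nav, vv_tools=["imu_simulator", "power_harness"])
```

The point of the sketch is only that the V&V configuration is constituted from the aggregate plus its enabling products, so readiness for a V&V action can be checked per configuration.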
<tr>
<td>Filter those working in systems roles</td>
<td>Use individual characteristics to select employees for systems roles</td></tr>
  
===Integration by Level of System===
<tr>
According to the Vee Model, system definition (top-down branch) is done by successive levels of decomposition; each level corresponds to the physical architecture of systems and system elements. The integration (bottom-up branch) takes the opposite approach of composition (i.e., a level-by-level approach). On a given level, integration is done on the basis of the physical architecture defined during {{Term|System Definition (glossary)|system definition}}.
<td>Ensure organizational competency through certification</td>
<td>ISO 9000</td></tr>
  
===Integration Strategy===
<tr><td rowspan="7">Develop organizational systems competency through processes</td>
The integration of implemented elements is generally performed according to a predefined strategy. The definition of the integration strategy is based on the architecture of the system and relies on the way the architecture of the system has been designed. The strategy is described in an integration plan that defines the minimum configuration of expected aggregates, the order of assembly of these aggregates in order to support efficient subsequent verification and validation actions (e.g., inspections and/or testing), techniques to check or evaluate interfaces, and necessary capabilities in the integration environment to support combinations of aggregates. The integration strategy is thus elaborated starting from the selected verification and validation strategy. See the [[System Verification]] and [[System Validation]] topics.
 
  
To define an integration strategy, there are several possible integration approaches/techniques that may be used individually or in combination. The selection of integration techniques depends on several factors; in particular, the type of system element, delivery time, order of delivery, risks, constraints, etc. Each integration technique has strengths and weaknesses which should be considered in the context of the SoI. Some integration techniques are summarized in Table 1 below.
<td>Process improvement using an established framework</td>
</tr><tr>
<td>Concept maps to identify the thought processes of senior systems engineers</td>
</tr><tr>
<td>Standardize systems policies and procedures for consistency</td>
</tr><tr>
<td>Systems engineering web portal</td>
</tr><tr>
<td>Systems knowledge management repository</td>
</tr><tr>
<td>On-call organizational experts</td>
</tr><tr>
<td>Rotating professor who works at company part-time and is at university part-time</td>
  
</tr>
</table>
</center>

{|
|+'''Table 1. Integration Techniques.''' (SEBoK Original)
!Integration Technique
!Description
 
|-
 
|'''Global Integration'''
 
|Also known as ''big-bang integration''; all the delivered implemented elements are assembled in only one step.
 
* This technique is simple and does not require simulators for missing implemented elements, since all elements are assembled in one step.
 
* Difficult to detect and localize faults; interface faults are detected late.
 
* Should be reserved for simple systems, with few interactions and few implemented elements without technological risks.
 
|-
 
|'''Integration "with the Stream"'''
 
|The delivered implemented elements are assembled as they become available.
 
* Allows starting the integration quickly.
 
* Complex to implement because of the necessity to simulate the implemented elements not yet available. It is impossible to control the end-to-end "functional chains"; consequently, global tests are postponed until very late in the schedule.
 
* Should be reserved for well-known and controlled systems without technological risks.
 
|-
 
|'''Incremental Integration'''
 
|In a predefined order, one or a very few implemented elements are added to an already integrated increment of implemented elements.
 
* Fast localization of faults: a new fault is usually located in the most recently integrated implemented elements or in a faulty interface.
 
* Requires simulators for absent implemented elements. Requires many test cases, as each implemented element addition requires the verification of the new configuration and regression testing.
 
* Applicable to any type of architecture.
 
|-
 
|'''Subsets Integration'''
 
|Implemented elements are assembled by subsets, and then subsets are assembled together (a subset is an aggregate); could also be called "functional chains integration".
 
* Time saving due to parallel integration of subsets; delivery of partial products is possible. Requires fewer resources and fewer test cases than integration by increments.
 
* Subsets shall be defined during the design.
 
* Applicable to architectures composed of sub-systems.
 
|-
 
|'''Top-Down Integration'''
 
|Implemented elements or aggregates are integrated in their activation or utilization order.
 
* Availability of a skeleton and early detection of architectural faults, definition of test cases close to reality, and the re-use of test data sets possible.
 
* Many stubs/caps need to be created; difficult to define test cases of the leaf-implemented elements (lowest level).
 
* Mainly used in the software domain. Starts from the highest-level implemented element; lower-level implemented elements are added successively down to the leaf elements.
 
|-
 
|'''Bottom-Up Integration'''
 
|Implemented elements or aggregates are integrated in the opposite order of their activation or utilization.
 
* Easy definition of test cases; early detection of faults (usually localized in the leaf-implemented elements); reduces the number of simulators to be used. An aggregate can be a sub-system.
 
* Test cases shall be redefined for each step, drivers are difficult to define and realize, implemented elements of lower levels are "over-tested", and architectural faults cannot be detected quickly.
 
* Used mainly in the software domain, but applicable to any kind of system.
 
|-
 
|'''Criterion Driven Integration'''
 
|The most critical implemented elements compared to the selected criterion are first integrated (dependability, complexity, technological innovation, etc.). Criteria are generally related to risks.
 
* Allows early and intensive testing of critical implemented elements; early verification of design choices.
 
* Test cases and test data sets are difficult to define.
 
|}
 
  
Usually, a mixed integration technique is selected as a trade-off between the different techniques listed above, allowing optimization of work and adaptation of the process to the system under development. The optimization takes into account the realization time of the implemented elements, their delivery scheduled order, their level of complexity, the technical risks, the availability of assembly tools, cost, deadlines, specific personnel capability, etc.
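The incremental technique described in Table 1 can be illustrated with a small sketch. Assuming a hypothetical interface list, a greedy heuristic adds, at each step, the element that shares the most interfaces with the aggregate already integrated, so interface faults surface as early as possible. All names and data here are illustrative; this heuristic is one plausible ordering rule, not a prescribed SEBoK method.

```python
def integration_order(elements, interfaces, start):
    """Greedy incremental-integration order: always add the element with
    the most interfaces to the already-integrated aggregate."""
    order = [start]
    remaining = [e for e in elements if e != start]
    while remaining:
        def coupling(e):
            # Count interfaces between candidate e and the current aggregate.
            return sum(1 for a, b in interfaces
                       if (a == e and b in order) or (b == e and a in order))
        nxt = max(remaining, key=coupling)
        order.append(nxt)
        remaining.remove(nxt)
    return order

# Hypothetical elements and physical interfaces between them.
elems = ["power", "cpu", "sensor", "radio"]
links = [("power", "cpu"), ("cpu", "sensor"), ("cpu", "radio"), ("power", "radio")]
print(integration_order(elems, links, "power"))
```

With this data, starting from "power", the heuristic integrates "cpu" next (one shared interface), then "radio" (two interfaces with the aggregate), then "sensor", so each new configuration exercises the maximum number of new interfaces.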
===System Delivery===
  
==Process Approach==
Some organizations mount initiatives which focus directly on successful system delivery. Others focus on project team performance, in some cases by offering coaching, as a means to ensure successful system delivery.
  
===Activities of the Process===
One example of the latter approach is the performance enhancement service of the US National Aeronautics and Space Administration (NASA) Academy of Program/Project & Engineering Leadership (APPEL), which assesses team performance and then offers developmental interventions with coaching (NASA 2010).
  
Major activities and tasks performed during this process include

Organizations pursue multiple paths towards developing the capability to deliver excellent systems, including
*developing the competency of individuals;
*developing the competency of the organization through processes (Davidz and Maier 2007); and
*putting measures in place to verify the efficacy of the selected methods.
  
*'''Establishing the integration plan''' (this activity is carried out concurrently to the design activity of the system) that defines:
===Individual Competency===
** The optimized integration strategy: order of aggregates assembly using appropriate integration techniques.
 
** The V&V actions to be processed for the purpose of integration.
 
** The configurations of the aggregates to be assembled and verified.
 
** The integration means and verification means (dedicated enabling products) that may include {{Term|Assembly Procedure (glossary)|assembly procedures}}, {{Term|Assembly Tool (glossary)|assembly tools}} (harness, specific tools), V&V tools (simulators, stubs/caps, launchers, test benches, devices for measuring, etc.), and {{Term|Verification and Validation Procedure (glossary)|V&V procedures}}.
 
* '''Obtain the integration means''' and verification means as defined in the integration plan; the means can be acquired in various ways, such as procurement, development, reuse, and sub-contracting; usually the acquisition of the complete set of means is a mix of these methods.
 
* '''Take delivery''' of each implemented element:
 
** Unpack and reassemble the implemented element with its accessories.
 
** Check the delivered configuration, conformance of implemented elements, compatibility of interfaces, and ensure the presence of mandatory documentation.
 
* '''Assemble the implemented elements''' into aggregates:
 
** Gather the implemented elements to be assembled, the integration means (assembly tools, assembly procedures), and the verification means (V&V tools and procedures).
 
** Connect the implemented elements to each other to constitute aggregates in the order prescribed by the integration plan and in assembly procedures, using assembly tools.
 
** Add or connect the V&V tools to the aggregates as predefined.
 
** Carry out any necessary finishing operations such as welding, gluing, drilling, tapping, adjusting, tuning, painting, parameter setting, etc.
 
* '''Verify each aggregate''':
 
** Check the aggregate is correctly assembled according to established procedures.
 
** Perform the verification process, which uses verification and validation procedures, and check that the aggregate exhibits the specified design properties/requirements.
 
** Record integration results/reports and potential issue reports, change requests, etc.
 
  
===Artifacts and Ontology Elements===
An organization may choose a combination of methods to develop individual systems competency. General Electric’s Edison Engineering Development Program (GE 2010) and Lockheed Martin’s Leadership Development Programs (Lockheed Martin 2010) are examples among the many combination programs offered within companies. 
  
This process may create several artifacts such as
Whether or not the program is specifically oriented to develop systems skills, the breadth of technical training and experience, coupled with business training, can produce a rich understanding of systems for the participant. Furthermore, new combination programs can be designed to develop specific systems-oriented skills for an organization. 
  
* an integrated system
Methods for developing individual competency include
* assembly tools
 
* assembly procedures
 
* integration plans
 
* integration reports
 
* issue/anomaly/trouble reports
 
* change requests (about design)
 
  
This process utilizes the ontology elements discussed in Table 2.

*'''classroom or online training courses''', a traditional choice for knowledge transfer and skill acquisition. Here, an instructor directs a classroom of participants. The method of instruction may vary from a lecture format to case study work to hands-on exercises. The impact and effectiveness of this method varies considerably based on the skill of the instructor, the effort of the participants, the presentation of the material, the course content, the quality of the course design process, and the matching of the course material to organizational needs. These types of interventions may also be given online. Squires (2011) investigates the relationship between online pedagogy and student perceived learning of SE competencies.
*'''job rotation''', where a participant rotates through a series of work assignments that cut across different aspects of the organization to gain broad experience in a relatively short time.
*'''mentoring''', where a more experienced individual is paired with a protégé in a developmental relationship. Many organizations use mentoring, whose impact and effectiveness vary considerably. Success factors are the suitable pairing of individuals and the provision of adequate time for mentoring.
*'''hands-on experience''', where organizations provide for their engineers to get hands-on experience that they would otherwise lack. A research study by Davidz on enablers and barriers to the development of systems thinking showed that systems thinking is developed primarily by experiential learning (Davidz 2006; Davidz and Nightingale 2008, 1-14). As an example, some individuals found that working in a job that dealt with the full system, such as working in an integration and test environment, enabled development of systems thinking.
*'''selecting individuals''' who appear to have high potential and focusing on their development. Hand-selection may or may not be accompanied by the other identified methods.
*'''formal education''', such as a university degree program. A growing number of SE degree programs are offered worldwide (Lasfer and Pyster 2011). Companies have also worked with local universities to set up customized educational programs for their employees. The company benefits because it can tailor the educational program to the unique needs of its business. In a certificate program, individuals receive a certificate for taking a specific set of courses, either at a university or as provided by the company. There are a growing number of certificate programs for developing systems competency.
  
{|
====Individual Certification====
|+'''Table 2. Main Ontology Elements as Handled within System Integration.''' (SEBoK Original)
 
!Element
 
!Definition
 
----
 
Attributes
 
|-
 
|'''Aggregate'''
 
|An aggregate is a subset of the system made up of several system elements or systems on which a set of verification actions is applied.
 
----
 
Identifier, name, description
 
|-
 
|'''Assembly Procedure'''
 
|An assembly procedure groups a set of elementary assembly actions to build an aggregate of implemented system elements.
 
----
 
Identifier, name, description, duration, unit of time
 
|-
 
|'''Assembly Tool'''
 
|An assembly tool is a physical tool used to connect, assemble, or link several implemented system elements to build aggregates (specific tool, harness, etc.).
 
----
 
Identifier, name, description
 
|-
 
|'''Risk'''
 
|An event having a probability of occurrence and a degree of gravity in its consequences for the system mission or for other characteristics (used for technical risk in engineering). A risk is the combination of a vulnerability and of a danger or a threat.
 
----
 
Identifier, name, description, status
 
|-
 
|'''Rationale'''
 
|An argument that provides the justification for the selection of an engineering element.
 
----
 
Identifier, name, description (rationale, reasons for defining an aggregate, assembly procedure, assembly tool)
 
|}
 
  
Note: verification and validation ontology elements are described in the [[System Verification]] and [[System Validation]] topics.
Organizations may seek to boost individual systems competency through certification programs. These can combine work experience, educational background, and training classes. Certifications are offered by local, national, and international professional bodies.  
  
The main relationships between ontology elements are presented in Figure 2.
SE organizations may encourage employees to seek certification from the International Council on Systems Engineering (INCOSE 2011) or may use this type of certification as a filter (see '''Filters''', below). In addition, many companies have developed their own internal certification measures. For example, the Aerospace Corporation has an Aerospace Systems Architecting and Engineering Certificate Program (ASAECP) (Gardner 2007).
  
[[File:SEBoKv05_KA-SystRealiz_Integration_relationships.png|thumb|600px|center|'''Figure 2. Integration Elements Relationships with Other Engineering Elements.''' (SEBoK Original)]]
====Filters====
  
===Checking and Correctness of Integration===
Another approach to developing individual competency is to select employees for systems roles based on certain characteristics, or filters. Before using a list of characteristics for filtering, though, an organization should critically examine
#how the list of individual characteristics was determined, and
#how the characteristics identified enable the performance of a systems job.

Characteristics used as filters should
*enable one to perform a systems job,
*be viewed as important to perform a systems job, or
*be necessary to perform a systems job.

The main items to be checked during the integration process include the following:
* The integration plan respects its template.
* The expected assembly order (integration strategy) is realistic.
* No system element or physical interface set out in the system design document is forgotten.
* Every interface and interaction between implemented elements is verified.
* Assembly procedures and assembly tools are available and validated prior to beginning the assembly.
 
* V&V procedures and tools are available and validated prior to beginning the verification.
 
* Integration reports are recorded.
 
  
===Methods and Techniques===
A necessary characteristic is much stronger than an enabling one, and before filtering for certain traits, it is important to understand whether the characteristic is an enabler or a necessity.
Several approaches that may be used for integration are summarized in the [http://sebokwiki.org/1.0.1/index.php?title=System_Integration#Integration_Strategy Integration Strategy] section above, yet other approaches exist. In particular, important integration strategies for software-intensive systems include vertical integration, horizontal integration, and star integration.
 
  
====Coupling Matrix and N-squared Diagram====
Finally, it is important to understand the extent to which findings are generally applicable, since a list of characteristics that determine success in one organization may not be generalizable to another organization.
One of the most basic methods to define the aggregates and the order of integration is the use of N-Squared diagrams (Grady 1994, 190).
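The use of an N-squared/coupling matrix can be illustrated with a short sketch. The function below (a hypothetical helper, not a standard tool) counts the interfaces that cross aggregate boundaries for a given partition of the elements; regrouping strongly coupled elements into the same aggregate reduces that count, which is the effect of reorganizing the matrix. The matrix and partitions are invented for illustration.

```python
def inter_aggregate_interfaces(matrix, partition):
    """Count couplings whose two elements fall in different aggregates.

    matrix    -- symmetric N-squared matrix; matrix[i][j] != 0 marks an interface
    partition -- list of aggregates, each a list of element indices
    """
    group = {e: g for g, members in enumerate(partition) for e in members}
    count = 0
    n = len(matrix)
    for i in range(n):
        for j in range(i + 1, n):
            if matrix[i][j] and group[i] != group[j]:
                count += 1
    return count

# Hypothetical coupling matrix for 4 elements: interfaces (0,1), (0,3), (1,2).
m = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 0],
     [1, 0, 0, 0]]

bad  = inter_aggregate_interfaces(m, [[0, 2], [1, 3]])  # arbitrary grouping
good = inter_aggregate_interfaces(m, [[0, 3], [1, 2]])  # coupled elements grouped
```

Here the arbitrary grouping leaves 3 interfaces to verify between aggregates, while grouping the coupled elements together leaves only 1, which is the optimization the coupling-matrix reorganization aims at.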
 
  
In the integration context, the coupling matrices are useful for optimizing the aggregate definition and verification of interfaces:

===Organizational Capability===
Once an organization has determined which SE capabilities are mission critical (please see [[Deciding on Desired Systems Engineering Capabilities within Businesses and Enterprises]]), there are many different ways in which an organization can seek to develop or improve these capabilities. Some approaches seen in the literature include the following:
  
* The integration strategy is defined and optimized by reorganizing the coupling matrix in order to group the implemented elements in aggregates, thus minimizing the number of interfaces to be verified between aggregates (see Figure 3).
*Organizations may choose to develop organizational systems capability through processes. One method organizations may choose is to pursue process improvement using an established framework. An example is the Capability Maturity Model® Integration (CMMI) process improvement approach (SEI 2010, 1).
*Concept maps - graphical representations of engineering thought processes - have been shown to be an effective method of transferring knowledge from senior engineering personnel to junior engineering personnel (Kramer 2007, 26-29; Kramer 2005). These maps may provide a mechanism for increasing knowledge of the systems engineering population of an organization.
*An organization may also choose to develop organizational systems competencies by standardizing systems policies and procedures. An example from NASA is their ''NASA Systems Engineering Processes and Requirements'' (NASA 2007).
*Some organizations use a web portal to store and organize applicable systems engineering knowledge and processes, which assists in developing organizational systems competency. An example is the Mission Assurance Portal of the Aerospace Corporation (Roberts et al. 2007, 10-13).
*Another approach being considered in the community is the development of a rotating professor role, where the person would work at the company and then be at a university to strengthen the link between academia and industry.
*Another approach is to alter organizational design to foster and mature a desired competency. For example, an organization that identifies competency in the area of reliability as critical to its SE success may develop a reliability group, which will help foster growth and improvement in reliability competencies.
  
[[File:JS_Figure_9.png|thumb|600px|center|'''Figure 3. Initial Arrangement of Aggregates on the Left; Final Arrangement After Reorganization on the Right.''' (SEBoK Original)]]
====Organizational Certification====
  
* When verifying the interactions between aggregates, the matrix is an aid tool for fault detection. If by adding an implemented element to an aggregate an error is detected, the fault can be either related to the implemented element, to the aggregate, or to the interfaces. If the fault is related to the aggregate, it can relate to any implemented element or any interface between the implemented elements internal to the aggregate.
Certification at the organizational level also exists and can be a means of ensuring competency. ISO certification is one example (ISO 2010). Before taking this approach, the organization should verify that the capabilities required by the certification are indeed the systems capabilities it seeks. For more on determining appropriate organizational capabilities, see [[Deciding on Desired Systems Engineering Capabilities within Businesses and Enterprises]].
  
===Application to Product Systems, Service Systems, and Enterprise Systems===
+
====Repositioning the Product Life Cycle====
As the nature of implemented system elements and physical interfaces is different for these types of systems, the aggregates, the assembly tools, and the V&V tools are different. Some integration techniques are more appropriate to specific types of systems. Table 3 below provides some examples.
 
  
{|
+
An organization may also choose to reposition its product life cycle philosophy to maintain system competency. For example, NASA has done this with its APPEL program (APPEL 2009).  
|+'''Table 3. Different Integration Elements for Product, Service, and Enterprise Systems.''' (SEBoK Original)
 
!Element
 
!Product System
 
!Service System
 
!Enterprise System
 
|-
 
|'''System Element'''
 
|Hardware Parts (mechanics, electronics, electrical, plastic, chemical, etc.)
 
  
Operator Roles
+
Since the systems competencies of individuals are primarily developed through experiential learning, providing experiential learning opportunities is critical. Shortening the product life cycle is one way to ensure that individuals acquire the full range of desired competency sooner.
  
Software Pieces
+
==Maintaining Competency Plans==
|Processes, data bases, procedures, etc.
 
  
Operator Roles
+
An organization that has developed an SE competency plan should consider how to maintain it. How, and how often, will the competency plan be re-examined and updated? The maintenance process should account for the ongoing evolution of global contexts, business strategies, and the SEBoK. The process for assessing competencies and taking action to improve them must be part of the normal operations of the organization and should occur periodically.
  
Software Applications
+
==References==
|Corporate, direction, division, department, project, technical team, leader, etc.
 
  
IT components
+
===Works Cited===
|-
+
Academy of Program/Project & Engineering Leadership (APPEL). 2009. [[NASA's Systems Engineering Competencies]]. Washington, D.C.: U.S. National Aeronautics and Space Association. Accessed on September 15, 2011. Available at http://www.nasa.gov/offices/oce/appel/pm-development/pm_se_competency_framework.html.
|'''Physical Interface'''
 
|Hardware parts, protocols, procedures, etc.
 
|Protocols, documents, etc.
 
|Protocols, procedures, documents, etc.
 
|-
 
|'''Assembly Tools'''
 
|Harness, mechanical tools, specific tools
 
  
Software Linker
+
Cowper, D., S. Bennison, R. Allen-Shalless, K. Barnwell, S. Brown, A. El Fatatry, J. Hooper, S. Hudson, L. Oliver, and A. Smith. 2005. ''Systems Engineering Core Competencies Framework.'' Folkestone, UK: International Council on Systems Engineering (INCOSE) UK Advisory Board (UKAB).
|Documentation, learning course, etc.
 
|Documentation, learning, moving of office
 
|-
 
|'''Verification Tools'''
 
|Test bench, simulator, launchers, stub/cap
 
|Activity/scenario models, simulator, human roles rehearsal, computer, etc.
 
  
Skilled Experts
+
Davidz, H.L. and J. Martin. 2011. "[[Defining a Strategy for Development of Systems Capability in the Workforce]]". ''Systems Engineering''. 14(2): 141-143.
|Activity/scenario models, simulator, human roles rehearsal
 
|-
 
|'''Validation Tools'''
 
|Operational environment
 
|Operational environment
 
|Operational environment
 
|-
 
|'''Recommended Integration Techniques'''
 
|Top down integration technique
 
  
Bottom Up Integration technique
+
Davidz, H.L. and M.W. Maier. 2007.  "[[An Integrated Approach to Developing Systems Professionals]]."  Paper presented at the 17th Annual International Council on Systems Engineering (INCOSE) International Symposium, 24-28 June 2007. San Diego, CA, USA.
|Subsets integration technique (functional chains)
 
|Global integration technique
 
  
Incremental integration
+
Davidz, H.L., and D. Nightingale. 2008. "Enabling Systems Thinking to Accelerate the Development of Senior Systems Engineers." ''Systems Engineering''. 11(1): 1-14.
|}
 
  
===Practical Considerations===
+
Davidz, H.L. 2006. ''Enabling Systems Thinking to Accelerate the Development of Senior Systems Engineers.'' Dissertation. Massachusetts Institute of Technology (MIT), Cambridge, MA, USA.
Key pitfalls and good practices related to system integration are described in the next two sections.
 
  
===Pitfalls===
+
Gardner, B. 2007.  "A Corporate Approach to National Security Space Education."  ''Crosslink,''  the Aerospace Corporation Magazine of Advances in Aerospace Technology. 8(1) (Spring 2007):10-5.  Accessed April 23, 2013.  Available at: http://aerospace.wpengine.netdna-cdn.com/wp-content/uploads/crosslink/V8N1.pdf.
Some of the key pitfalls encountered in planning and performing SE Measurement are provided in Table 4.
 
  
{|
+
GE. 2010. ''Edison Engineering Development Program (EEDP) in General Electric.'' Accessed on September 15, 2011. Available at http://www.gecareers.com/GECAREERS/jsp/us/studentOpportunities/leadershipPrograms/eng_program_guide.jsp.
|+'''Table 4. Major Pitfalls with System Integration.''' (SEBoK Original)
 
!Pitfall
 
!Description
 
|-
 
|'''What is expected has delay'''
 
|The experience shows that the implemented elements always do not arrive in the expected order and the tests never proceed or result as foreseen; therefore, the integration strategy should allow a great flexibility.
 
|-
 
|'''Big-bang not appropriate'''
 
|The "big-bang" integration technique is not appropriate for a fast detection of faults. It is thus preferable to verify the interfaces progressively all along the integration.
 
|-
 
|'''Integration plan too late'''
 
|The preparation of the integration activities is planned too late in the project schedule, typically when first implemented elements are delivered.
 
|}
 
  
===Good Practices===
+
INCOSE. 2010. ''[[Systems Engineering Competencies Framework 2010-0205]].'' San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2010-003.
Some good practices, gathered from the references are provided in Table 5.
+
 +
INCOSE. 2011.  "Systems Engineering Professional Certification."  In ''International Council on Systems Engineering'' online.  Accessed April 13, 2015.  Available at:  http://www.incose.org/certification/.
  
{|
+
Kramer, M.J. 2007. "Can Concept Maps Bridge The Engineering Gap?" ''Crosslink'', the Aerospace Corporation Magazine of Advances in Aerospace Technology. 8(1) (Spring 2007): 26-9. Accessed April 23, 2013. Available at: http://aerospace.wpengine.netdna-cdn.com/wp-content/uploads/crosslink/V8N1.pdf.
|+'''Table 5. Proven Practices with System Integration.''' (SEBoK Original)
 
!Practice
 
!Description
 
|-
 
|'''Start earlier development of means'''
 
|The development of assembly tools and verification and validation tools can be as long as the system itself. It should be started as early as possible as soon as the preliminary design is nearly frozen.
 
|-
 
|'''Integration means seen as enabling systems'''
 
|The development of integration means (assembly tools, verification, and validation tools) can be seen as enabling systems, using system definition and system realization processes as described in this SEBoK, and managed as projects. These projects can be led by the project of the corresponding system-of-interest, but assigned to specific system blocks, or can be subcontracted as separate projects.
 
|-
 
|'''Use coupling matrix'''
 
|A good practice consists in gradually integrating aggregates in order to detect faults more easily. The use of the coupling matrix applies for all strategies and especially for the bottom up integration strategy.
 
|-
 
|'''Flexible integration plan and schedule'''
 
|The integration process of complex systems cannot be easily foreseeable and its progress control difficult to observe. This is why it is recommended to plan integration with specific margins, using flexible techniques, and integrating sets by similar technologies.
 
|-
 
|'''Integration and design teams'''
 
|The integration responsible should be part of the design team.
 
|}
 
  
==References==
+
Kramer, M.J. 2005. ''Using Concept Maps for Knowledge Acquisition in Satellite Design: Translating 'Statement of Requirements on Orbit' to 'Design Requirements.'' Dissertation. Ft. Lauderdale, FL, USA: Graduate School of Computer and Information Sciences, Nova Southeastern University.
  
===Works Cited===
+
Lasfer, K. and A. Pyster. 2011. "The Growth of Systems Engineering Graduate Programs in the United States."  Paper presented at Conference on Systems Engineering Research, 15-16 April 2011. Los Angeles, CA, USA.
DAU. February 19, 2010. ''Defense Acquisition Guidebook (DAG)''. Ft. Belvoir, VA, USA: Defense Acquisition University (DAU)/U.S. Department of Defense (DoD).  
+
 +
Lockheed Martin. 2010. ''Training and Leadership Development Programs for College Applicants in Lockheed Martin Corporation.'' Bethesda, MD, USA. Accessed on August 30, 2012. Available at http://www.lockheedmartinjobs.com/leadership-development-program.asp.
  
Faisandier, A. 2012. ''Systems Architecture and Design''. Belberaud, France: Sinergy'Com.
+
NASA. 2010. ''Academy of Program/Project & engineering leadership (APPEL): Project life cycle support in U.S. National Aeronautics and Space Administration (NASA).'' Washington, DC, USA: U.S. National Air and Space Administration (NASA). Accessed on September 15, 2011. Available at http://www.nasa.gov/offices/oce/appel/performance/lifecycle/161.html.
  
ISO/IEC/IEEE. 2015.''[[ISO/IEC/IEEE 15288|Systems and Software Engineering - System Life Cycle Processes]].''Geneva, Switzerland: International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), Institute of Electrical and Electronics Engineers.[[ISO/IEC/IEEE 15288]]:2015.
+
NASA. 2007. ''NASA Procedural Requirements: NASA Systems Engineering Processes and Requirements''. Washington, DC, USA: U.S. National Aeronautic and Space Administration (NASA). NPR 7123.1A.
  
===Primary References===
+
Roberts, J., B. Simpson, and S. Guarro. 2007. "A Mission Assurance Toolbox." ''Crosslink'', the Aerospace Corporation Magazine of Advances in Aerospace Technology. 8(2) (Fall 2007): 10-13.
INCOSE. 2010. ''[[INCOSE Systems Engineering Handbook|Systems Engineering Handbook]]: A Guide for Systems Life Cycle Processes and Activities''. Version 3.2. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2003-002-03.2.  
 
  
NASA. 2007. ''[[NASA Systems Engineering Handbook|Systems Engineering Handbook]].'' Washington, D.C.: National Aeronautics and Space Administration (NASA), NASA/SP-2007-6105.
+
SEI. 2007. ''Capability Maturity Model Integrated (CMMI) for Development'', version 1.2, Measurement and Analysis Process Area. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU).
  
===Additional References===
+
Squires, A. 2011. ''Investigating the Relationship between Online Pedagogy and Student Perceived Learning of Systems Engineering Competencies''. Dissertation. Stevens Institute of Technology, Hoboken, NJ, USA.
Buede, D.M. 2009. ''The Engineering Design of Systems: Models and Methods.'' 2nd ed. Hoboken, NJ, USA: John Wiley & Sons Inc.  
 
  
DAU. 2010. ''Defense Acquisition Guidebook (DAG)''. Ft. Belvoir, VA, USA: Defense Acquisition University (DAU)/U.S. Department of Defense. February 19, 2010.
+
===Primary References===
 +
Academy of Program/Project & Engineering Leadership (APPEL). 2009. ''[[NASA's Systems Engineering Competencies]]''. Washington, DC, USA: U.S. National Aeronautics and Space Administration (NASA). Accessed on May 2, 2014. Available at http://appel.nasa.gov/career-resources/project-management-and-systems-engineering-competency-model/.
  
Gold-Bernstein, B. and W.A. Ruh. 2004. ''Enterprise integration: The essential guide to integration solutions''. Boston, MA, USA: Addison Wesley Professional.  
+
DAU. 2013. ''[[ENG Competency Model]]'', 12 June 2013 version. in Defense Acquisition University (DAU)/U.S. Department of Defense Database Online. Accessed on September 23, 2014. Available at https://acc.dau.mil/CommunityBrowser.aspx?id=657526&lang=en-US
  
Grady, J.O. 1994. ''System integration''. Boca Raton, FL, USA: CRC Press, Inc.  
+
Davidz, H.L. and J. Martin. 2011. "[[Defining a Strategy for Development of Systems Capability in the Workforce]]". ''Systems Engineering.'' 14(2): 141-143.  
  
Hitchins, D. 2009. "What are the General Principles Applicable to Systems?" INCOSE ''Insight'' 12(4):59-63.  
+
INCOSE. 2010. ''[[Systems Engineering Competencies Framework 2010-0205]]''. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2010-003.
  
Jackson, S. 2010. ''Architecting Resilient Systems: Accident Avoidance and Survival and Recovery from Disruptions''. Hoboken, NJ, USA: John Wiley & Sons.  
+
===Additional References===
 +
None.
  
Reason, J. 1997. ''Managing the Risks of Organizational Accidents.'' Aldershot, UK: Ashgate Publishing Limited.
 
 
----
 
----
<center>[[System Implementation|< Previous Article]] | [[System Realization|Parent Article]] | [[System Verification|Next Article >]]</center>
+
<center>[[Assessing Individuals|< Previous Article]] | [[Enabling Individuals|Parent Article]] | [[Ethical Behavior|Next Article >]]</center>
  
<center>'''SEBoK v. 2.0, released 1 June 2019'''</center>
+
<center>'''SEBoK v. 2.1, released 31 October 2019'''</center>
  
[[Category: Part 3]][[Category:Topic]]
+
[[Category: Part 5]][[Category:Topic]]
[[Category:System Realization]]
+
[[Category:Enabling Individuals]]

Revision as of 04:58, 19 October 2019

Developing each individual’s systems engineering (SE) competencies is a key aspect of enabling individuals. The goal may be to develop a broad range of SE competencies or a single aspect of SE, and it is important to know exactly which competencies are desired. This article describes strategies for developing SE competencies in individuals.

Closing Competency Gaps

Delivering excellent systems that fulfill customer needs is the primary goal of the organization. Developing the capability to deliver such systems is a secondary goal, and while necessary, is not sufficient. To attain both of these goals, the organization must assess itself and effect a strategy to identify and close competency gaps.

To identify competency gaps, an organization may take two basic steps:

  1. Listing desired competencies, as discussed in Roles and Competencies; and
  2. Assessing the competencies of individual systems engineers, as discussed in Assessing Individuals.
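The two steps above amount to a simple gap computation: compare desired proficiency against assessed proficiency for each competency. The sketch below illustrates this, assuming hypothetical competency names and a 1–5 proficiency scale that are not taken from any specific framework:

```python
# Hypothetical sketch of competency gap identification.
# Step 1: list desired competencies with target proficiency levels (1-5 scale assumed).
# Step 2: compare an individual's assessed levels against the targets.
DESIRED = {"requirements": 4, "architecture": 4, "verification": 3}

def competency_gaps(assessed, desired=DESIRED):
    """Return {competency: shortfall} for every competency where the
    assessed level falls below the desired level (missing scores count as 0)."""
    return {
        name: level - assessed.get(name, 0)
        for name, level in desired.items()
        if assessed.get(name, 0) < level
    }

engineer = {"requirements": 4, "architecture": 2}  # no verification score yet
print(competency_gaps(engineer))
# {'architecture': 2, 'verification': 3}
```

An organization would substitute its own competency model and assessment scale; the point is only that gaps fall out mechanically once steps 1 and 2 are done.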

Models useful for listing competencies include the International Council on Systems Engineering (INCOSE) United Kingdom Advisory Board model (Cowper et al. 2005; INCOSE 2010), the ENG Competency Model (DAU 2013), and the Academy of Program/Project & Engineering Leadership (APPEL 2009) model (Menrad and Lawson 2008).

Once the organization knows the SE competencies it needs to develop to close the competency gaps it has identified, it may choose from the several methods (Davidz and Martin 2011) outlined in the table below.

Table 1. SE Competency Development Framework. (SEBoK Original)

PRIMARY GOAL = Delivery of excellent systems to fulfill customer needs
  Objective: Focus on successful performance outcome
    Method: Corporate initiatives
  Objective: Focus on performance of project team
    Method: Team coaching of project team for performance enhancement

SECONDARY GOAL = Competency to deliver excellent systems to fulfill customer needs
  Objective: Develop individual competency
    Methods: Training courses; job rotation; mentoring; hands-on experience;
    developing a few hand-picked individuals; university educational degree
    program; customized educational program; combination program (education,
    training, job rotation, mentoring, hands-on experience); course
    certificate program
  Objective: Ensure individual competency through certification
    Method: Certification program
  Objective: Filter those working in systems roles
    Method: Use individual characteristics to select employees for systems roles
  Objective: Ensure organizational competency through certification
    Method: ISO 9000
  Objective: Develop organizational systems competency through processes
    Methods: Process improvement using an established framework; concept maps
    to identify the thought processes of senior systems engineers; standardized
    systems policies and procedures for consistency; systems engineering web
    portal; systems knowledge management repository; on-call organizational
    experts; rotating professor who works at the company part-time and is at a
    university part-time

System Delivery

Some organizations mount initiatives which focus directly on successful system delivery. Others focus on project team performance, in some cases by offering coaching, as a means to ensure successful system delivery.

One example of the latter approach is the performance enhancement service of the US National Aeronautics and Space Administration (NASA) Academy of Program/Project & Engineering Leadership (APPEL), which assesses team performance and then offers developmental interventions with coaching (NASA 2010).

Organizations pursue multiple paths towards developing the capability to deliver excellent systems, including

  • developing the competency of individuals;
  • developing the competency of the organization through processes (Davidz and Maier 2007); and
  • putting measures in place to verify the efficacy of the selected methods.

Individual Competency

An organization may choose a combination of methods to develop individual systems competency. General Electric’s Edison Engineering Development Program (GE 2010) and Lockheed Martin’s Leadership Development Programs (Lockheed Martin 2010) are examples among the many combination programs offered within companies.

Whether or not the program is specifically oriented to develop systems skills, the breadth of technical training and experience, coupled with business training, can produce a rich understanding of systems for the participant. Furthermore, new combination programs can be designed to develop specific systems-oriented skills for an organization.

Methods for developing individual competency include

  • classroom or online training courses, a traditional choice for knowledge transfer and skill acquisition. Here, an instructor directs a classroom of participants. The method of instruction may vary from a lecture format to case study work to hands-on exercises. The impact and effectiveness of this method varies considerably based on the skill of the instructor, the effort of the participants, the presentation of the material, the course content, the quality of the course design process, and the matching of the course material to organizational needs. These types of interventions may also be given online. Squires (2011) investigates the relationship between online pedagogy and student perceived learning of SE competencies.
  • job rotation, where a participant rotates through a series of work assignments that cut across different aspects of the organization to gain broad experience in a relatively short time.
  • mentoring, where a more experienced individual is paired with a protégé in a developmental relationship. Many organizations use mentoring, whose impact and effectiveness vary considerably. Success factors are the tenable pairing of individuals, and the provision of adequate time for mentoring.
  • hands-on experience, where organizations provide for their engineers to get hands-on experience that they would otherwise lack. A research study by Davidz on enablers and barriers to the development of systems thinking showed that systems thinking is developed primarily by experiential learning (Davidz 2006; Davidz and Nightingale 2008, 1-14). As an example, some individuals found that working in a job that dealt with the full system, such as working in an integration and test environment, enabled development of systems thinking.
  • selecting individuals who appear to have high potential and focusing on their development. Hand-selection may or may not be accompanied by the other identified methods.
  • formal education, such as a university degree program. A growing number of SE degree programs are offered worldwide (Lasfer and Pyster 2011). Companies have also worked with local universities to set up customized educational programs for their employees. The company benefits because it can tailor the educational program to the unique needs of its business. In a certificate program, individuals receive a certificate for taking a specific set of courses, either at a university or as provided by the company. There are a growing number of certificate programs for developing systems competency.

Individual Certification

Organizations may seek to boost individual systems competency through certification programs. These can combine work experience, educational background, and training classes. Certifications are offered by local, national, and international professional bodies.

SE organizations may encourage employees to seek certification from the International Council on Systems Engineering (INCOSE 2011) or may use this type of certification as a filter (see Filters, below). In addition, many companies have developed their own internal certification measures. For example, the Aerospace Corporation has an Aerospace Systems Architecting and Engineering Certificate Program (ASAECP) (Gardner 2007).

Filters

Another approach to developing individual competency is to select employees for systems roles based on certain characteristics, or filters. Before using a list of characteristics for filtering, though, an organization should critically examine

  1. how the list of individual characteristics was determined, and
  2. how the characteristics identified enable the performance of a systems job.

Characteristics used as filters should

  • enable one to perform a systems job
  • be viewed as important to perform a systems job, or
  • be necessary to perform a systems job.

A necessary characteristic is much stronger than an enabling one, and before filtering for certain traits, it is important to understand whether the characteristic is an enabler or a necessity.
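The enabler/necessity distinction can be sketched as a small illustrative filter. The characteristic names below are hypothetical, chosen only to show the logic: a candidate lacking any necessary characteristic is excluded outright, while enabling characteristics only affect ranking among those who pass:

```python
# Hypothetical sketch: filtering candidates for systems roles while keeping
# "necessary" and "enabling" characteristics distinct. The trait names are
# illustrative assumptions, not drawn from any validated characteristics list.
NECESSARY = {"communicates across disciplines"}
ENABLING = {"breadth of domain experience", "tolerance for ambiguity"}

def passes_filter(traits):
    """Return None if any necessary characteristic is missing (candidate is
    filtered out); otherwise return a ranking score counting how many
    enabling characteristics the candidate has."""
    if not NECESSARY <= traits:
        return None  # a necessary characteristic is absent
    return len(traits & ENABLING)
```

Treating an enabler as if it were a necessity would wrongly exclude candidates, which is why the text stresses understanding which kind of characteristic each filter criterion is.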

Finally, it is important to understand the extent to which findings are generally applicable, since a list of characteristics that determine success in one organization may not be generalizable to another organization.

Organizational Capability

Once an organization has determined which SE capabilities are mission critical (please see Deciding on Desired Systems Engineering Capabilities within Businesses and Enterprises), there are many different ways in which an organization can seek to develop or improve these capabilities. Some approaches seen in the literature include the following:

  • Organizations may choose to develop organizational systems capability through processes. One method is to pursue process improvement using an established framework, such as the Capability Maturity Model® Integration (CMMI) process improvement approach (SEI 2007).
  • Concept maps - graphical representations of engineering thought processes - have been shown to be an effective method of transferring knowledge from senior engineering personnel to junior engineering personnel (Kramer 2007, 26-29; Kramer 2005). These maps may provide a mechanism for increasing knowledge of the systems engineering population of an organization.
  • An organization may also choose to develop organizational systems competencies by standardizing systems policies and procedures. An example from NASA is their NASA Systems Engineering Processes and Requirement (NASA 2007).
  • Some organizations use a web portal to store and organize applicable systems engineering knowledge and processes, which assists in developing organizational systems competency. An example is the Mission Assurance Portal for the Aerospace Corporation (Roberts et al. 2007, 10-13).
  • Another approach being considered in the community is the development of a rotating professor role, where the person would work at the company and then be at a university to strengthen the link between academia and industry.
  • Another approach is to alter organizational design to foster and mature a desired competency. For example, an organization that identifies competency in the area of reliability as critical to its SE success may develop a reliability group, which will help foster growth and improvement in reliability competencies.

Organizational Certification

Certification also exists at the organizational level and can be a means of ensuring competency; ISO certification is one example (ISO 2010). Before taking this approach, the organization should verify that the capabilities required by the certification are indeed the systems capabilities it seeks. For more on determining appropriate organizational capabilities, see Deciding on Desired Systems Engineering Capabilities within Businesses and Enterprises.

Repositioning the Product Life Cycle

An organization may also choose to reposition its product life cycle philosophy to maintain system competency. For example, NASA has done this with its APPEL program (APPEL 2009).

Since the systems competencies of individuals are primarily developed through experiential learning, providing experiential learning opportunities is critical. Shortening the product life cycle is one way to ensure that individuals acquire the full range of desired competency sooner.

Maintaining Competency Plans

An organization that has developed an SE competency plan should consider how to maintain it. How, and how often, will the competency plan be re-examined and updated? The maintenance process should account for the ongoing evolution of global contexts, business strategies, and the SEBoK. The process for assessing competencies and taking action to improve them must be part of the normal operations of the organization and should occur periodically.

References

Works Cited

Academy of Program/Project & Engineering Leadership (APPEL). 2009. NASA's Systems Engineering Competencies. Washington, DC, USA: U.S. National Aeronautics and Space Administration (NASA). Accessed on September 15, 2011. Available at http://www.nasa.gov/offices/oce/appel/pm-development/pm_se_competency_framework.html.

Cowper, D., S. Bennison, R. Allen-Shalless, K. Barnwell, S. Brown, A. El Fatatry, J. Hooper, S. Hudson, L. Oliver, and A. Smith. 2005. Systems Engineering Core Competencies Framework. Folkestone, UK: International Council on Systems Engineering (INCOSE) UK Advisory Board (UKAB).

Davidz, H.L. and J. Martin. 2011. "Defining a Strategy for Development of Systems Capability in the Workforce". Systems Engineering. 14(2): 141-143.

Davidz, H.L. and M.W. Maier. 2007. "An Integrated Approach to Developing Systems Professionals." Paper presented at the 17th Annual International Council on Systems Engineering (INCOSE) International Symposium, 24-28 June 2007. San Diego, CA, USA.

Davidz, H.L., and D. Nightingale. 2008. "Enabling Systems Thinking to Accelerate the Development of Senior Systems Engineers." Systems Engineering. 11(1): 1-14.

Davidz, H.L. 2006. Enabling Systems Thinking to Accelerate the Development of Senior Systems Engineers. Dissertation. Massachusetts Institute of Technology (MIT), Cambridge, MA, USA.

Gardner, B. 2007. "A Corporate Approach to National Security Space Education." Crosslink, the Aerospace Corporation Magazine of Advances in Aerospace Technology. 8(1) (Spring 2007):10-5. Accessed April 23, 2013. Available at: http://aerospace.wpengine.netdna-cdn.com/wp-content/uploads/crosslink/V8N1.pdf.

GE. 2010. Edison Engineering Development Program (EEDP) in General Electric. Accessed on September 15, 2011. Available at http://www.gecareers.com/GECAREERS/jsp/us/studentOpportunities/leadershipPrograms/eng_program_guide.jsp.

INCOSE. 2010. Systems Engineering Competencies Framework 2010-0205. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2010-003.

INCOSE. 2011. "Systems Engineering Professional Certification." In International Council on Systems Engineering online. Accessed April 13, 2015. Available at: http://www.incose.org/certification/.

Kramer, M.J. 2007. "Can Concept Maps Bridge The Engineering Gap?" Crosslink, the Aerospace Corporation Magazine of Advances in Aerospace Technology. 8(1) (Spring 2007): 26-9. Accessed April 23, 2013. Available at: http://aerospace.wpengine.netdna-cdn.com/wp-content/uploads/crosslink/V8N1.pdf.

Kramer, M.J. 2005. Using Concept Maps for Knowledge Acquisition in Satellite Design: Translating 'Statement of Requirements on Orbit' to 'Design Requirements'. Dissertation. Ft. Lauderdale, FL, USA: Graduate School of Computer and Information Sciences, Nova Southeastern University.

Lasfer, K. and A. Pyster. 2011. "The Growth of Systems Engineering Graduate Programs in the United States." Paper presented at Conference on Systems Engineering Research, 15-16 April 2011. Los Angeles, CA, USA.

Lockheed Martin. 2010. Training and Leadership Development Programs for College Applicants in Lockheed Martin Corporation. Bethesda, MD, USA. Accessed on August 30, 2012. Available at http://www.lockheedmartinjobs.com/leadership-development-program.asp.

NASA. 2010. Academy of Program/Project & Engineering Leadership (APPEL): Project Life Cycle Support in U.S. National Aeronautics and Space Administration (NASA). Washington, DC, USA: U.S. National Aeronautics and Space Administration (NASA). Accessed on September 15, 2011. Available at http://www.nasa.gov/offices/oce/appel/performance/lifecycle/161.html.

NASA. 2007. NASA Procedural Requirements: NASA Systems Engineering Processes and Requirements. Washington, DC, USA: U.S. National Aeronautics and Space Administration (NASA). NPR 7123.1A.

Roberts, J., B. Simpson, and S. Guarro. 2007. "A Mission Assurance Toolbox." Crosslink, the Aerospace Corporation Magazine of Advances in Aerospace Technology. 8(2) (Fall 2007): 10-13.

SEI. 2007. Capability Maturity Model Integrated (CMMI) for Development, version 1.2, Measurement and Analysis Process Area. Pittsburgh, PA, USA: Software Engineering Institute (SEI)/Carnegie Mellon University (CMU).

Squires, A. 2011. Investigating the Relationship between Online Pedagogy and Student Perceived Learning of Systems Engineering Competencies. Dissertation. Stevens Institute of Technology, Hoboken, NJ, USA.

Primary References

Academy of Program/Project & Engineering Leadership (APPEL). 2009. NASA's Systems Engineering Competencies. Washington, DC, USA: U.S. National Aeronautics and Space Administration (NASA). Accessed on May 2, 2014. Available at http://appel.nasa.gov/career-resources/project-management-and-systems-engineering-competency-model/.

DAU. 2013. ENG Competency Model, 12 June 2013 version. In Defense Acquisition University (DAU)/U.S. Department of Defense database online. Accessed on September 23, 2014. Available at https://acc.dau.mil/CommunityBrowser.aspx?id=657526&lang=en-US.

Davidz, H.L. and J. Martin. 2011. "Defining a Strategy for Development of Systems Capability in the Workforce". Systems Engineering. 14(2): 141-143.

INCOSE. 2010. Systems Engineering Competencies Framework 2010-0205. San Diego, CA, USA: International Council on Systems Engineering (INCOSE), INCOSE-TP-2010-003.

Additional References

None.


< Previous Article | Parent Article | Next Article >
SEBoK v. 2.1, released 31 October 2019