Structured Design by Yourdon and Constantine

Selected quotes from
Edward Yourdon and Larry L. Constantine.
Structured Design: Fundamentals of a Discipline of Computer Program and System Design
Prentice-Hall, 1979 (Facsimile edition 1986).
ISBN 0-13-854471-9
compiled by Tom Verhoeff in July 2000. Bold face emphasis is mine.

Table of Contents

Section I CONCEPT 1
1 Toward Program Engineering 3
2 Basic Concepts of Structured Design 17
3 The Structure of Computer Programs 30
4 Structure and Procedure 48
5 Human Information Processing and Program Simplicity 67
6 Coupling 84
7 Cohesion 105
8 The Morphology of Simple Systems 145
9 Design Heuristics 165
10 Transform Analysis 187
11 Transaction Analysis 235
12 Alternative Design Strategies 245
13 Communication in Modular Systems 259
14 Packaging 276
15 Optimization of Modular Systems 290
Section V EXTENSIONS 305
16 A Typology of Systems Components 307
17 Recursive Structures 318
18 Homologous and Incremental Structures 331
19 Structure and Program Quality 353
20 Implementation of Modular Systems 375
21 The Management Milieu 395
A. Structure Charts: A Guide 409
B. Summary of Standard Graphics for Program Structure Charts 437
Glossary 445
Index 485


p. 1
For most of the computer systems ever developed, the structure was not methodically laid out in advance --- it just happened. The total collection of pieces and their interfaces with each other typically have not been planned systematically. Structured design, therefore, answers questions that have never been raised in many data processing organizations.

1. Toward Program Engineering

p. 7
[T]he systems designer accomplishes what some organizations call ``general systems design'': designing the major elements of the data base, the major components of the system, and the interfaces between them. Ideally, the final product of the systems designer is a document describing this structural design; ... we will introduce the notion of structure charts as a convenient means of documenting the structural design of a program or a system.

p. 8
Structured design is the art of designing the components of a system and the interrelationship between those components in the best possible way.

Structured design is the process of deciding which components interconnected in which way will solve some well-specified problem.

``Design'' means to plan or mark out the form and method of a solution.

p. 13
[O]ur overall technical objective function is made up of varying amounts of (emphasis on) efficiency, maintainability, modifiability, generality, flexibility, and utility. ... For them to be usable in an engineering sense by the designer, we will have to develop some objective measure or characterization of each and to show just how each is influenced by various isolated design decisions and overall design practices.

2. Basic Concepts of Structured Design

p. 17
[I]t is insufficient, in most cases, for the designer to consider a solution, a design. He should evaluate several alternate designs and choose the best --- best in the sense of maximizing such technical objectives as efficiency, reliability, and maintainability while satisfying such design constraints as memory size and response time.

p. 18
Successful design is based on a principle known since the days of Julius Caesar: Divide and conquer.

p. 20
Implementation, maintenance, and modification [costs] generally will be minimized when each piece of the system corresponds to exactly one small, well-defined piece of the problem, and each relationship between a system's pieces corresponds only to a relationship between pieces of the problem.

p. 21
[G]ood design is an exercise in partitioning and organizing the pieces of a system.

By partitioning we mean the division of the problem into smaller subproblems, so that each subproblem will eventually correspond to a piece of the system. The questions are: Where and how should we divide the problem? Which aspects of the problem belong in the same part of the system, and which aspects belong in different parts? Structured design answers these questions with two basic principles:


The other major aspect of structured design is organization of the system. That is, we must decide how to interrelate the parts of the system, and we must decide which parts belong where in relation to each other.

p. 22
The concept of a black box is a very powerful one, both in engineering and in software design. A black box is a system (or equivalently, a component) with known inputs, known outputs, and, generally, a known transform, but with unknown (or irrelevant) contents. The box is black --- we cannot see inside it.

Black boxes have numerous interesting properties, the most important of which is that we can use them. A true black box is a system which can be fully exploited [by knowing the interface and] without knowledge of what is inside it.

p. 24
How [black boxes] may be used [in design] is best understood in terms of a ``rule of black boxes,'' which may be thought of as a general-purpose design heuristic:
Whenever a function or capability is seen as being required during the design of a system, define it as a black box and make use of it in the system without concern for its structural or methodological realization.
Eventually, of course, each such invoked black box must in turn be designed, a process which may give rise to more black boxes and so on.
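The rule of black boxes can be sketched in modern code. This is my own minimal illustration, not the book's; the function names are invented. The point is that `parse_order` is used through its interface alone while its realization is deferred:

```python
# Sketch of the "rule of black boxes": when a needed capability surfaces
# during design, name it, fix its inputs and outputs, and use it at once,
# deferring its internal design. All names here are illustrative.

def parse_order(raw: str) -> dict:
    """Black box: known input (raw text), known output (an order dict).
    The transform is specified; its realization is designed later."""
    item, qty = raw.split(",")          # placeholder realization
    return {"item": item.strip(), "qty": int(qty)}

def process(raw: str) -> str:
    # The caller exploits parse_order purely through its interface,
    # without knowledge of what is inside it.
    order = parse_order(raw)
    return f"{order['qty']} x {order['item']}"

print(process("widget, 3"))
```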

3. The Structure of Computer Programs

p. 30
A computer program is a system. We noted ... that structure --- components and interrelationships among components --- is an essential, often neglected property of computer programs. But just what are the components of computer programs and how are they related?

p. 31
At the most elementary (and safest) level, we observe that computer programs are, by definition, composed of statements. These statements are arranged (another way of saying structured) in a sequence.

p. 33
The term monolithic refers to any system or portion of a system which consists of pieces so highly interrelated that they behave like a single piece.


[I]t is all but impossible to simplify significantly the structure of an existing program or system through after-the-fact modularization. Once reduced to code, the structural complexity of a system is essentially fixed. It is thus clear that simple structures must be designed that way from the beginning.

p. 37
A module is a lexically contiguous sequence of program statements, bounded by boundary elements, having an aggregate identifier. Another way of saying this is that a module is a bounded, contiguous group of statements having a single name by which it can be referred to as a unit.

p. 41
It is important to keep in mind that the existence of a reference does not necessarily mean that a referent will be accessed.

p. 43
A number of design principles and strategies in this book will require us to study the flow of data through the program or system. Hence, we need a method of restating the problem itself (i.e. ``functional requirements'' or ``system specifications'') in a manner that emphasizes the data flow and de-emphasizes ... the procedural aspects of the problem. ...

The data-oriented technique that we will use is called data flow graph. The same model is also known as a data flow diagram [DFD], or a program graph or even a ``bubble chart.'' The elements of the data flow graph are called transforms and are represented graphically by small circles (or ``bubbles'' ...). ... [T]he transforms represent transformations of data ... from one form to another form. The data elements are represented by labeled arrows connecting one transform bubble to another.
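A data flow graph can be held as plain data: transform bubbles as nodes, labeled arrows as (source, data element, target) triples. This small sketch is mine, with invented transform names, just to make the notation concrete:

```python
# Hypothetical data flow graph: transform "bubbles" are nodes, and each
# labeled arrow carries a named data element between two transforms.
transforms = {"READ", "VALIDATE", "FORMAT", "WRITE"}
flows = [
    ("READ", "raw record", "VALIDATE"),
    ("VALIDATE", "valid record", "FORMAT"),
    ("FORMAT", "report line", "WRITE"),
]

def inputs_of(transform):
    # The data elements that flow into a given transform.
    return [data for (src, data, dst) in flows if dst == transform]

print(inputs_of("FORMAT"))
```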

4. Structure and Procedure

p. 48
Neophytes and veterans alike often find it difficult to comprehend the difference between procedure and structure in computer programs and systems. ... [N.B. The term `procedure' is not defined, and it neither appears in the glossary nor in the index.]

Every computer system has structure --- that is, it is made up of components that are interconnected. ... Regardless of how a system was developed, whether its structure was designed or determined by accident, we can document the modular structure.

p. 64
[A] flowchart is a model of the procedural flow of a program, whereas a structure chart is a time-independent model of the hierarchical relationships of modules within a program or system.


p. 65
Our approach to structured design is based on a formal, though not (as yet) mathematical, theory of the complexity of computer systems and programs. In our view, the cost of systems development is a function of problem and program complexity as measured in terms of human error. For a given problem, the human error production and, therefore, the cost of coding, debugging, maintenance, and modification are minimized when the problem is subdivided into the smallest functional units that can be treated independently.

5. Human Information Processing and Program Simplicity

p. 67
An understanding of the basic economic structure of the systems development process is essential in developing better, more efficient methods of systems production --- as well as better, more efficient systems.

p. 68
[T]esting and debugging account for most of the cost of systems development; the common estimate is that 50 percent of a data processing project is devoted to these activities. ... [T]he true cost of debugging is the cost of everything the programmer / analyst does in the development of a system beyond what would be necessary if he made no mistakes; ...

That most of the cost of systems development today is due to errors is not something to be denied, but rather an insight to be traded upon. Indeed, this is so vital that no theory of programming or programs, no technique or practice for programming or systems design, which does not give central recognition to the role of bugs and debugging, can be of much value in the practical amelioration of the pains in the field.

p. 69
[W]e find that we can increase the complexity of the problem from very trivial to trivial to not-quite-so-trivial with a correspondingly small increase in the number of errors --- but sooner or later, the errors begin to increase more rapidly. ...

The psychologist-mathematician George Miller, in a summary of a very large body of research, first described the human information processing limitations that give rise to this effect. It appears that people can mentally juggle, deal with, or keep track of only about seven objects, entities, or concepts at a time. In effect, the immediate recirculating memory needed for problem-solving with multiple elements has a capacity of about 7±2 entities. Above that number, errors in the process increase disproportionately.

p. 70
The Fundamental Theorem of Software Engineering ... [b]asically ... says that we can win if we divide any task into independent subtasks.

p. 72--73
[W]e ... emphasize the following:

A final word of caution is in order: Whenever we talk of improvements in design, or potential savings in costs, there will always be an implied qualification. We assume equal quality of implementation. It is possible to do a sufficiently poor job of implementing a plan or design so as to exceed any arbitrary limit in cost, time, or any measure of dysfunctionality of the solution. That is to say, there is always some programmer bad enough to screw up even the best design!

p. 81
The human limits in processing nested information are even sharper than in dealing with linear, sequential information. ... the human ``push-down stack'' can get overloaded at only two or three levels of nesting.

6. Coupling

p. 85
The key question is: How much of one module must be known in order to understand another module? The more that we must know of module B in order to understand module A, the more closely connected A is to B. The fact that we must know something about another module is a priori evidence of some degree of interconnection even if the form of the interconnection is not known.


The measure that we are seeking is known as coupling; it is a measure of the strength of interconnection. ... Obviously, what we are striving for is loosely coupled systems --- that is, systems in which one can study (or debug, or maintain) any one module without having to know very much about any other modules in the system.

Coupling as an abstract concept --- the degree of interdependence between modules --- may be operationalized as the probability that in coding, debugging, or modifying one module, a programmer will have to take into account something about another module.

p. 86
Four major aspects of computer systems can increase or decrease intermodular coupling. In order of estimated magnitude of their effect on coupling, these are

p. 99
The concept of coupling invites the development of a reciprocal concept: decoupling. Decoupling is any systematic method or technique by which modules can be made more independent.
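Decoupling often amounts to moving a hidden dependence into an explicit interface. The following contrast is my own sketch, not the book's; the names are invented:

```python
# Illustrative sketch of decoupling: a shared global is replaced by an
# explicit argument, so each module can be read and tested on its own.

# Tightly coupled: to understand apply_discount_coupled you must also
# know that some other module sets RATE before the call.
RATE = 0.10

def apply_discount_coupled(price):
    return price * (1 - RATE)

# Decoupled: everything the module depends on appears in its interface.
def apply_discount(price, rate):
    return price * (1 - rate)

print(apply_discount(100.0, 0.10))
```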

7. Cohesion

p. 105
We already have seen that the choice of modules in a system is not arbitrary. The manner in which we physically divide a system into pieces (particularly in relation to the problem structure) can affect significantly the structural complexity of the resulting system, as well as the total number of intermodular references. Adapting the system's design to the problem structure (or ``application structure'') is an extremely important design philosophy; we generally find that problematically related processing elements translate into highly interconnected code. Even if this were not true, structures that tend to group together highly interrelated elements (from the viewpoint of the problem, once again) tend to be more effectively modular.

p. 106
[T]he most effectively modular system is the one for which the sum of functional relatedness between pairs of elements not in the same module is minimized; among other things, this tends to minimize the required number of intermodular connections and the amount of intermodular coupling.

``Intramodular functional relatedness'' is a clumsy term. What we are considering is the cohesion of each module in isolation --- how tightly bound or related its internal elements are to one another.


Clearly, cohesion and coupling are interrelated. The greater the cohesion of individual modules in the system, the lower the coupling between modules will be. ...

Both coupling and cohesion are powerful tools in the design of modular structures, but of the two, cohesion emerges from extensive practice as more important.
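The difference between low and high cohesion can be made concrete with a toy example of my own (the functions and constants are invented):

```python
# Sketch: a low-cohesion module bundles elements related only by accident;
# splitting it yields two cohesive modules, each doing one defined task.

# Low cohesion: tax arithmetic and year formatting share one module
# for no reason rooted in the problem.
def misc(kind, value):
    if kind == "tax":
        return round(value * 0.2, 2)
    if kind == "date":
        return f"{value:04d}"       # zero-pad a year

# Higher cohesion: one well-defined function per well-defined task.
def tax(value):
    return round(value * 0.2, 2)

def pad_year(year):
    return f"{year:04d}"

print(tax(50), pad_year(33))
```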

8. The Morphology of Simple Systems

p. 155
One of the most obvious morphological features is depth --- that is, the number of levels in the hierarchy.

p. 157
Rather than dealing with such primitive measures as depth and width, we might consider the overall morphology of the system. Based on observations of a large number of systems during the past several years, we find that most well-designed systems have a shape of ... a mosque.

... Note that the mosque shape characteristically has a higher fan-out in the high-level modules, and higher fan-in in the bottom-level modules.

p. 162
Of the morphological factors relating to systems simplicity, the morphology known as transform-centered organization is the most important. The transform-centered model ... is highly factored, hence, quite deep for the number of atomic modules. Afferent and efferent branches are somewhat balanced; hence, the model is neither input-driven nor output-driven. ... In the fully factored form, this structure involves at each level a single transformation or set of alternative transformations performed by subordinate transform modules, whose inputs are supplied by the last subordinate afferent modules (on the afferent side), or whose outputs are fed to the next subordinate efferent modules (on the efferent side). ...

... It was derived empirically from a careful review of the morphology of systems, comparing systems that had proven to be cheap to implement, to maintain, and to modify with ones that had been expensive. ... The study ... produced what came to be called the transform-centered model. Most of the cheap systems had it; none of the costly systems did! Since then, of course, support for highly factored transform-centered design has become widespread, and is based on both experience and numerous studies.

9. Design Heuristics

p. 166
Module size: For most purposes ... modules much larger than one hundred statements are outside the optimal range with respect to the economy of error commission and correction.

p. 171
Fan-out, or span of control, is the number of immediate subordinates of a module... As with module size, very high or very low spans are possible indicators of poor design.

p. 172
Whenever possible, we wish to maximize fan-in during the design process. Fan-in is the raison d'être of modularity: Each instance of multiple fan-in means that some duplicate code has been avoided.

p. 175
The scope of effect of a decision is the collection of all modules containing any processing that is conditional upon that decision. If even a tiny part of the processing in a module is influenced by the decision, then the entire module is included in the scope of effect.

p. 178
The scope of control of a module is the module itself and all of its subordinates. Scope of control is a purely structural parameter independent of the module's functions.

Now we can state a design heuristic that involves both scope of control and scope of effect:

For any given decision, the scope of effect should be a subset of the scope of control of the module in which the decision is located.
In other words, all of the modules that are affected, or influenced, by a decision should be subordinate ultimately to the module that makes the decision.
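The heuristic can be illustrated with a small sketch of my own devising. The decision and both modules it affects live inside one module's scope of control, so no flag has to be reported upward:

```python
# Sketch: the decision "is the record valid?" is made in update, and the
# two modules its outcome affects (post, reject) are update's subordinates,
# so the scope of effect stays inside the scope of control. Names invented.

def is_valid(record):
    # Supplies a fact; makes no control decision of its own.
    return "id" in record

def post(record):       # affected by the decision
    return f"posted {record['id']}"

def reject(record):     # affected by the decision
    return "rejected"

def update(record):
    # The decision lives here, above everything it influences.
    return post(record) if is_valid(record) else reject(record)

print(update({"id": 7}))
```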

p. 185
It must be emphasized that this chapter has discussed heuristics, not religious rules. Heuristics such as module size, span of control, and scope of effect / scope of control can be extremely valuable if properly used, but actually can lead to poor design if interpreted too literally.

10. Transform Analysis

p. 187
[S]ystems whose morphology --- or overall shape --- [is] transform-centered tend to be associated with low development costs, low maintenance costs, and low modification costs. ... [S]uch low-cost systems tend to be highly factored; that is, the high-level modules make most of the decisions, and the low-level modules accomplish most of the detailed work.

Transform analysis, or transform-centered design, is a strategy for deriving initial structural designs that are quite good (with respect to modularity) and generally require only a modest restructuring to arrive at a final design.

p. 188
Overall, the purpose of the strategy is to identify the primary processing functions of the system, the high-level inputs to those functions, and the high-level outputs. ... [T]ransform analysis is an information flow model rather than a procedural model.

The transform analysis strategy consists of the following four major steps:

  1. restating the problem as a data flow graph

  2. identifying the afferent and efferent data elements

  3. first-level factoring

  4. factoring of afferent, efferent, and transform branches
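The outcome of the four steps can be sketched in miniature. This is my own toy example (all module names invented): a main module coordinates one afferent, one transform, and one efferent subordinate derived from the data flow graph:

```python
# Sketch of first-level factoring: the main module makes the decisions;
# afferent, transform, and efferent subordinates do the detailed work.

def get_valid_record(raw):
    # Afferent branch: delivers the high-level input.
    return raw.strip()

def compute_summary(record):
    # Central transform: turns the afferent element into the efferent one.
    return record.upper()

def put_report(summary):
    # Efferent branch: disposes of the high-level output.
    return f"REPORT: {summary}"

def main(raw):
    record = get_valid_record(raw)
    summary = compute_summary(record)
    return put_report(summary)

print(main("  monthly totals "))
```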

p. 192
We ... define afferent data elements as follows:
Afferent data elements are those high-level elements of data that are furthest removed from physical input, yet still constitute inputs to the system.

Thus, afferent data elements are the highest level of abstraction to the term ``input to the system.''

p. 194
Beginning at the other end with the physical outputs, we try to identify the efferent data elements ... those furthest removed from the physical output which may still be regarded as outgoing. Such elements might be regarded as ``logical output data'' that have just been produced by the ``main processing'' part of the system ...

p. 195
Three distinct substrategies are used to factor the three types of subordinate modules (afferent, efferent, and transform) into lower-level subordinates. ... It is not necessary to completely factor one branch down to the lowest level of detail before working on another branch, but it is important to identify all of the immediate subordinates of any given module before turning to any other module.

p. 222
The transform-centered strategy still requires judgement and common sense on the part of the designer; it does not reduce design to a series of mechanical steps.

11. Transaction Analysis

p. 223
[T]ransform analysis will be the guiding influence on the designer for most systems. However, ... additional strategies can be used to supplement ... transform analysis.

Transaction analysis is suggested by data flow graphs ... where a transform splits an input data stream into several discrete output substreams.

p. 224
A great deal of the usefulness of transaction analysis depends on how we define transaction. In the most general sense,
A transaction is any element of data, control, signal, event, or change of state that causes, triggers, or initiates some action or sequence of actions.
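A transaction center in this sense is essentially a dispatcher. The following sketch is mine, with invented transaction types, showing one action module triggered per incoming transaction type:

```python
# Hypothetical transaction center: determine the type of each incoming
# transaction and dispatch it to the module that handles that type.

def handle_deposit(amount):
    return f"deposit {amount}"

def handle_withdraw(amount):
    return f"withdraw {amount}"

DISPATCH = {"D": handle_deposit, "W": handle_withdraw}

def transaction_center(kind, amount):
    handler = DISPATCH.get(kind)
    if handler is None:
        return "unknown transaction"
    return handler(amount)

print(transaction_center("D", 100))
```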

p. 225
A transaction center of a system must be able to

12. Alternative Design Strategies

p. 245
Depending upon the nature of the application, transform-centered design or transform-centered design plus transaction-centered design usually will yield a design with highly cohesive, loosely coupled modules.

However, these two strategies are not the only way of deriving good designs in a systematic manner.

p. 246
Data-structure design method: One of the popular design strategies is based on the analysis of data structure, rather than data flow ...:

  1. Define structures for the data that is to be processed.

  2. Form a program structure based on the data structures.

  3. Define the task to be performed in terms of the elementary operations available, and allocate each of those operations to suitable components of the program structure.

Implicit in the data-structure approach is the fact that most ... applications deal with hierarchies of data --- e.g., fields within records within files. Thus, this approach develops a hierarchy of modules that, in some sense, is a mirror image of the hierarchy of data associated with the problem.

p. 251
Another interesting modular design approach is described by Parnas as a set of rules for the decomposition of systems into subsystems.

p. 252
The Parnas decomposition criteria may be paraphrased as follows:

  1. Decomposition is not to be based on flowcharts or procedures.

  2. Each design interface is to contain (require) as little information as possible to correctly specify it.

  3. Each design unit is to ``hide'' an assumption about the solution that is likely to change.

  4. A design unit is to be specified to other design units (or to the programmers of other design units) with neither too much nor too little detail.
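Criterion 3 is the germ of what is now called information hiding. A minimal sketch, assuming a symbol table as the example (my choice, not Parnas's): the storage layout is the assumption likely to change, so it is hidden behind a narrow interface:

```python
# Sketch of "hide an assumption likely to change": callers use define and
# lookup only, so the internal storage could change from dict to list
# (or to a file) without touching any caller. Names are illustrative.

class SymbolTable:
    def __init__(self):
        self._entries = {}          # the hidden, changeable assumption

    def define(self, name, value):
        self._entries[name] = value

    def lookup(self, name):
        return self._entries.get(name)

table = SymbolTable()
table.define("x", 42)
print(table.lookup("x"))
```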

13. Communication in Modular Systems

p. 259
We do not intend to portray ... pathological connections as an evil that must be avoided at all cost, but we will suggest some steps that the designer should go through in order to justify anything other than normal connections.

p. 260
[A] pathological connection is a reference [to] an identifier or any entity inside a module [originating from outside the module].

p. 263
In general, the use of pathological connections reduces the ability of the programmer to treat modules as black boxes.

p. 270
Much of the reluctance to use normal communication between modules may stem from the coding required for the interfaces. Programmers complain that it takes a great deal of extra work to code the passing of parameters, and to check fatal error flags and all the other encumbrances of normal communication.


Since the highly modular approach of structured design has been proved to reduce programming, it is extremely unlikely that a central feature --- namely normal subroutine calling --- should increase programming overall.

p. 271
Perhaps the strongest, certainly the most commonly heard, argument in favor of pathological communication is that of efficiency. Still, the designer should ask himself whether the cost of normal communication is truly unbearable compared to the total cost of pathological communication. If so, and if efficiency is an important issue in the system, then pathological communication may well be justified.

We should point out, however, that the cost of normal communication is not so terribly excessive in most high-level programming languages.

p. 273
Perhaps the strongest argument against pathological connections is that they make future modification of the system more difficult.


If the designer feels that every minute of coding time is precious, that nary a microsecond of CPU time can be wasted, and that future modifications to the system are unlikely, so be it! We are concerned only with the fact that many pathological communications are designed unconsciously or casually --- or they result from a long-standing prejudice that all normal communications are bad because they require too much CPU time.

14. Packaging

p. 276
[W]e must make a system fit into the available physical memory ..., and we must implement the various input-output processes of the system on actual devices. Both of these steps are concerned with the physical realization of a modular system on an actual computer.

The term packaging refers to the assignment of the modules of a total system into sections handled as distinct physical units for execution on a machine. ... For some systems, programs are the load units; in others, we see the terms ``overlays,'' ``load module,'' ``job step,'' and so forth.

p. 289
[W]e cannot overemphasize that packaging should be done as a last step in the structural design --- not as the first step!

15. Optimization of Modular Systems

p. 290
Optimization is something that should be considered after the system has been designed, and should not be an influence on the design process itself. It is demonstrably cheaper to develop a simple working system, and speed it up, than to design a fast system and then try to make it work. The savings possible by delaying optimization are even greater when the design is highly factored and uncoupled. ... Many systems do have to be optimized to reduce their use of CPU time, memory, use of peripheral devices, or other limited resources.

p. 291
An interesting paradox comes to light in most discussions about the optimization of modular systems: Many programmer / analysts are convinced that the techniques discussed in this book contribute significantly to the inefficiency of their systems, yet they have no idea how much. A few crude experiments suggest that a highly modular system usually requires 5--10 percent more memory and CPU time than do systems implemented in the traditional fashion; on the other hand, there have been occasions when modular systems have been considerably more efficient than classical systems.


[O]ptimization should be discussed from a rational point of view: Not every microsecond of computer time has to be optimized! The following [15.1.1 -- 15.1.6] philosophies are important to keep in mind ...

p. 291--292
15.1.1 The efficiency of a system depends on the competence of the designer

There is not much point in talking about efficient systems or optimization if the system is being designed and/or programmed by people of only mediocre talent. Of course, this is a rather sensitive issue. One's ego makes it difficult to deal with one's own mediocrity, and one's manners make it difficult to accuse colleagues of mediocrity. Nevertheless, it is a fact that should be faced squarely: A surprisingly large number of analyst / designers design stupid systems, and an even larger number of programmers write horribly stupid code.

These are blunt words, to be sure. However, a classic study by Sackman et al. pointed out that, among experienced programmers, we can find a 25:1 difference in design time and debugging time. Equally disturbing is the fact that the resulting code can vary in speed and size by a factor of ten. The most depressing fact of all was that Sackman's study indicated that there was no correlation between programming performance and scores on programming aptitude tests. H. L. Mencken observed that nobody ever went broke underestimating the intelligence of the American public. After visiting programming organizations around the country, the authors have concluded, somewhat sadly, that a similar statement could be made about programmers and designers.

Our point is simple: There is no substitute for competence. If you want a system designed and implemented efficiently, make sure it is done by people who know what they are doing --- which, by the way, has very little to do with the number of years they have been working in the computer field!

p. 292
15.1.2 In many cases, the simple way is the efficient way

p. 293
15.1.3 Only a small part of a typical system has any impact on overall efficiency

Knuth's classic study indicates that approximately 5 percent of the code in a typical program consumes approximately 50 percent of its execution time. [cf. Amdahl's Law]
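The practical consequence of Knuth's figure is: measure first, then optimize only the hot fraction. A sketch using Python's standard-library profiler (the function names are invented):

```python
# Sketch: profile before optimizing. Per Knuth's figures, only a small
# fraction of the code is worth touching; the profiler finds it.

import cProfile
import pstats

def hot():
    # The small fraction of code that dominates execution time.
    return sum(i * i for i in range(10_000))

def program():
    total = 0
    for _ in range(50):
        total += hot()
    return total

profiler = cProfile.Profile()
profiler.enable()
result = program()
profiler.disable()

# Rank entries by cumulative time; optimize only the top of this list.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(3)
```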

15.1.4 Simple modular systems can be optimized easily

p. 294
15.1.5 Overemphasis on optimization tends to detract from other design goals

15.1.6 Optimization is irrelevant if the program doesn't work

p. 295--296
If our system needs to be optimized, there are two ways of approaching the problem: optimizing the code within modules, or changing the overall structure of the system to improve performance.

The specific techniques for optimizing code within a module are largely outside the scope of this book. We know that optimizing compilers are becoming increasingly significant. ...

However, we can suggest an organized plan of attack ... for optimizing the code within modules of a large system. ...:

  1. Determine the execution time for each module or load unit.

  2. Examine each module to estimate potential improvement.

  3. Estimate cost involved in making the improvement.

  4. Establish priorities for making improvements to modules.

  5. Optimize the modules with the highest priority.

p. 296--297
In a small number of cases, optimization within module boundaries may not be sufficient to achieve the desired level of efficiency. It then may be necessary to modify the structure of the system. Before indulging in this kind of optimization, it is important that the designer identify the source of the inefficiency [i.e. gather statistics].


Fortunately there is only a small set of structural modifications that make noticeable improvements in execution speed (and perhaps, in memory).

p. 301
A great deal of the overhead in intermodule transitions is involved in the passing of data: consequently, some of the most popular optimization techniques involve minimizing the passing of such data.

p. 303
After each [real or imaginary] manipulation of the structure, a careful analysis should be performed to see what actual, demonstrable gains in efficiency and what probable sacrifices in modularity have been made.

16. A Typology of Systems Components

p. 307
When modularity is first introduced, one frequently hears, ``Oh, you mean using subroutines.'' It is true that the subroutine is the most ubiquitous type of module within computer systems; fortunately, it is not the only type!

p. 308
A three-dimensional characterization of activation and control has proven useful. This involves three (possibly interdependent) factors: time, mechanism of activation, and pattern of control flow.

17. Recursive Structures

p. 318
Nothing in this book so far ... precludes structures ..., in which the graph for the structure chart has cycles in it. This type of structure is known as recursive. ... Although it is traditional to discuss recursion from a procedural or algorithmic viewpoint ..., it is obvious that recursion is also a structural phenomenon and, therefore, may be explored in terms of structural issues, including structural design.

p. 319
The necessary (but, alas, not always sufficient) conditions for termination of a recursive module are two. First, at least one call in the cycle of subordinates must be conditional ... Second, the exact values of input arguments may not repeat within recursive calls.
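The two conditions can be seen in a minimal sketch (the function and its names are illustrative, not from the book): the recursive call is guarded by a condition, and the argument strictly decreases, so the same input value never recurs.

```python
def summarize(n: int) -> int:
    """Sum the integers n, n-1, ..., 1 recursively."""
    if n <= 0:                       # condition 1: the recursive call is conditional
        return 0
    return n + summarize(n - 1)      # condition 2: argument strictly decreases,
                                     # so input values never repeat
```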

p. 324--325
It is always possible to transform a recursive process into a nonrecursive or iterative process that uses only loops rather than recursive calls. In most cases, this amounts to simulating the recursion by having the procedure do its own explicit stacking and unstacking of the data.
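A hedged sketch of that transformation, assuming trees represented as `(value, [children])` tuples: the recursive version uses the call stack implicitly, while the iterative version does its own explicit stacking of pending work, as the text describes.

```python
def tree_sum_recursive(node):
    """Recursive form: the language's call stack holds pending work."""
    value, children = node
    return value + sum(tree_sum_recursive(c) for c in children)

def tree_sum_iterative(node):
    """Iterative form: an explicit stack simulates the recursion."""
    total = 0
    stack = [node]                   # explicit stack replaces the call stack
    while stack:
        value, children = stack.pop()
        total += value
        stack.extend(children)       # defer children instead of recursing
    return total
```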

18. Homologous and Incremental Structures

p. 332
The questions of which module is in charge and which is the subordinate can be avoided altogether by employing structures that are not hierarchical.

Homologous, or non-hierarchical, systems arise from any control relationship that does not define a hierarchy of control responsibility.

p. 333
In the abstract, a coroutine is a module whose point of activation is always the next sequential statement following the last point at which the module deactivated itself by activating another coroutine.

19. Structure and Program Quality

p. 353
Throughout this book we have emphasized that we are seeking designs that are minimum-cost: inexpensive to implement, inexpensive to test, and inexpensive to maintain and modify.

However, there are additional qualities that usually are associated with good systems; among the common ones are generality, flexibility, and reliability.

p. 354
We could informally define a general-purpose system ... as one that is widely used or usable, solves a broad case of a class problem [sic], is readily adaptable to many variations, and will function in many different environments.


The most persistent myth of generality and generalization is that more general systems --- by religious principle --- cost more to design and more to build.

p. 362
Reliable operation is a major design goal in most systems development processes. In hard-systems engineering, reliability often is so important that formal, systematic strategies are used to increase reliability, including so-called statistical reliability theory.

The techniques for developing reliable computer hardware --- redundancy, self-checking data and computations, majority voting logic, duplicated systems, fall-back and switchover, and the like --- are such that hardware can now be made arbitrarily reliable. Similar concepts in software have been almost totally absent until recently ...

p. 369
The probabilistic behavior of software failures arises not from an intrinsic decay process of the components, but from the data that the software is called upon to process.


No software system of any realistic size is ever completely ... error-free.

p. 371--372
The difference between hardware redundancy and software redundancy can be appreciated by considering a duplicated hardware system. Performance of the same operation by two machines ... provides a dependable method of increasing reliability. There is a very low probability that both systems will fail simultaneously and in identical fashion. Thus, agreement in results generally can be taken as an absence of failure. ...

Consider what happens when we execute two copies of the same program (or the same program twice). If the results disagree, it is indicative of a hardware failure, not a software failure. ... Thus, it is clear that software redundancy must be achieved through non-identical components, implying a comparatively larger development cost.
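The point about non-identical components can be sketched as follows; both implementations here are assumptions invented for the illustration. Two independently written routines compute the same quantity, and disagreement signals a fault:

```python
def mean_by_sum(xs):
    """First, straightforward implementation."""
    return sum(xs) / len(xs)

def mean_by_running_average(xs):
    """Second, independently derived implementation (incremental mean)."""
    avg = 0.0
    for i, x in enumerate(xs, start=1):
        avg += (x - avg) / i
    return avg

def redundant_mean(xs, tolerance=1e-9):
    """Software redundancy: compare two non-identical implementations."""
    a = mean_by_sum(xs)
    b = mean_by_running_average(xs)
    if abs(a - b) > tolerance:
        raise RuntimeError("redundant computations disagree")
    return a
```

Note the cost the text predicts: reliability here was bought by developing the same function twice.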

p. 373
In prototype, a fault-handling process has four elements. The existence of a fault must be detected by some process for a program to be cognizant of it. Immediate action must be taken to process, bypass, or otherwise deal with the fault. Finally, provision may be made for ultimate correction of the cause of the fault, or recovery from its consequences.

It is an almost universal rule of thumb that faults should be detected as early as possible --- that is, close to the source at some interface.


The design philosophy that yields greater reliability is the one that requires every function to protect itself, validating its own data.
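A small sketch of that philosophy, with an invented function and parameter names: the module validates its own inputs at its interface rather than trusting any caller to have done so.

```python
def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    """Compute a level loan payment; validates its own data first."""
    if principal <= 0:
        raise ValueError("principal must be positive")
    if not 0 <= annual_rate < 1:
        raise ValueError("annual_rate must be a fraction in [0, 1)")
    if months <= 0:
        raise ValueError("months must be positive")
    r = annual_rate / 12
    if r == 0:
        return principal / months
    return principal * r / (1 - (1 + r) ** -months)
```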

p. 374
Systems programs and their components --- especially operating systems and real-time / time-sharing systems --- must never assume correctness of data. The same is probably valid for all parts of vital systems. The rub is that the very system requiring maximum reliability often is the one with the most stringent speed requirement.


The preferred way to build a general-purpose system is not to build one computer program that will do all things for all people. Instead, what one should do is build a large number of small, single-purpose modules that are flexible and that have extremely clean interfaces. The generality comes from the almost infinite number of combinations of such modules ...

20. Implementation of Modular Systems

p. 375
Most of the emphasis throughout this book has been on the design of highly modular systems. We have made passing references to implementation, testing, debugging, installation, and other such terms, but we have given no details on the methods and strategies to be followed once the design work is done.

p. 377--378
The phased approach to implementation could be described in the following (slightly tongue-in-cheek) manner:

  1. Design, code, and test each module by itself (this is commonly known as unit test).

  2. Throw all the modules into a large bag.

  3. Shake the bag very hard (this is commonly known as systems integration).

  4. Cross your fingers and hope that it all works (this is commonly known as field test).


In contrast, ... an incremental approach can be paraphrased in the following manner:

  1. Design, code and test one module by itself.

  2. Add another module.

  3. Test and debug the combination.

  4. Repeat steps 2 and 3.

p. 378
For many years, bottom-up testing has been practiced ... [I]t was simply the best-known series of steps in which testing was done:

  1. Unit testing (sometimes known as module testing, single-thread testing, or program testing)

  2. Subsystem testing (also known as run testing, or multi-thread testing)

  3. Systems testing (sometimes known as volume testing)

  4. Acceptance testing (also known as field testing, or user testing)

p. 379
In most cases, bottom-up development requires the presence of so-called drivers --- also known as ``test harness,'' ``test monitors,'' ``test drivers,'' and various other terms. A test driver has to ``exercise'' the module under test, in what is basically a primitive simulation of what the superordinate module would do if it were available.


The concept of a dummy module, or stub, is an important aspect of top-down implementation.
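The driver/stub distinction can be sketched in a few lines; all names here are illustrative assumptions. In bottom-up testing a driver exercises the module from above; in top-down testing a stub stands in below for an unfinished subordinate.

```python
def format_report(fetch_totals):
    """Superordinate module under top-down test."""
    totals = fetch_totals()
    return ", ".join(f"{k}={v}" for k, v in sorted(totals.items()))

def fetch_totals_stub():
    """Stub: a canned answer standing in for the real subordinate."""
    return {"east": 10, "west": 20}

def driver_for_format_report():
    """Driver: a primitive simulation of the absent superordinate."""
    return format_report(fetch_totals_stub)
```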

21. The Management Milieu

p. 396
The basic questions to be asked in this chapter are: Should management understand anything about the concepts of structural design? How does structural design help the manager accomplish his job more effectively? ... [I]t will be necessary to discard the myth that the technical and the managerial aspects of systems development are separable. In reality, they are not.

p. 397
[U]nder-budgeting of design increases total system's cost.

p. 400
Conway's Law:
The structure of a system reflects the structure of the organization that built it.
Conway's Law has been stated even more strongly:
The structure of any system designed by an organization is isomorphic to the structure of the organization.

p. 402
21.2 Management benefits of prior structural design

[W]e ... suggest, for the sake of management, that a complete (or nearly complete) structural design be accomplished first, using all of the principles of coupling, cohesion, transform-centered design, and others previously discussed. When this has been accomplished, we suggest that coding and testing be accomplished in an incremental fashion...

21.2.1 Reliable cost estimating

p. 403
21.2.2 Improved scheduling, improved planning

21.2.3 Parallel development of acceptance and validation criteria

p. 404
21.2.4 Better project monitoring and control

p. 406
It has been our experience that careful attention to the principles of structured design makes it far easier for the manager to develop accurate schedules and budgets --- for precisely the same reason that structured design enables the technician to develop, test, and maintain his system more easily.


p. 446
black box: a system (or, equivalently, a component) with known inputs, known outputs, and generally a known transform, but with unknown (or irrelevant) contents.

p. 447
cohesion: the degree of functional relatedness of processing elements within a single module.

p. 449
conroutine: a nonincremental module activated by a bifurcated transfer. Also known as a task.

coroutine: a module whose point of activation is always the next sequential statement following the last point at which the module deactivated itself by activating another coroutine.

coupling: a measure of the strength of interconnection between one module and another.

p. 451
driver: a primitive simulation of a superordinate module, used in the bottom-up testing of a subordinate module.

p. 452
factoring: a process of decomposing a system into a hierarchy of modules.

p. 454
interface: the point in a module or segment elsewhere referenced by an identifier at which control or data is received or transmitted. [???]

p. 462
stub: a primitive implementation of a subordinate module; normally used in the top-down testing of a superordinate module.

Page contents by Tom Verhoeff
Feedback on this page is welcome.