Exam 2IP25 (Software Engineering)
25 June 2012

Grading scheme
* Each question is worth at most 3 points, for a total of 45 points. The criteria for scoring are:
  3 good enough (not necessarily perfect)
  2 sufficient, but missing something essential
  1 insufficient, but containing something significant
  0 lacks significant contributions
* A global deduction or bonus is possible in case something cannot be attributed to questions in isolation (e.g. English writing).
* Original final grade algorithm: divide the total score by 4.5 and round to the nearest integer.
  Adjusted final grade algorithm: drop the question with the lowest score, divide the total score for the remaining questions by 4.2, and round to the nearest integer.

Below are some hints and key ingredients for answering the questions. These are not fully worked out answers. Without further explanation, just mentioning some names or concepts does not suffice. Furthermore, the maturity of the formulations is also weighed in the score: usage of appropriate and commonly accepted terminology matters. At the exam you need to show that you studied and understood the relevant material, and that you can decide how to deal with these open questions. There is no such thing as /the/ (one and only) correct answer. It is not necessary to bring in all aspects hinted at below to obtain a full score. Often, there is a clear distinction between aspects that are of primary importance and those that are secondary. In many cases, there is some overlap between the items mentioned.

Vague terminology is not acceptable. E.g. "development" is not an activity or phase in a software life-cycle/process; it is a term that refers to a large collection of activities. Vague "explanations" are not acceptable either: "The goal of Configuration Management is to manage the configuration items (documents and code)" needs more precision on what "to manage" involves.

1.
SE essentials (Ch.1, p.6-8)
* Intangible product
* Large, complex product
* External customer and users, different from the developers
* Externally determined quality requirements
* Part of a larger system (hardware, an organization)
* Developed and maintained by a group of persons (rather than one or two)
* Limited budget and limited calendar time to deliver
* In a commercial setting (with competition)
* Continually evolving, long-lived

2. Prototyping (Ch.3, p.56, ...)
* Requirements: to help elicit, validate, refine
* Architecture: to assist in making architectural decisions
* Design: to assist in making (lower-level) design decisions
* Coding (agile): prototype = (intermediate) deliverable
* Also cf. the spiral model; throw-away versus evolutionary prototypes
Scoring: mentioning only one phase and one purpose gets at most 1 point; for 2 points, at least two phases or two purposes are needed; for 3 points, multiple phases and multiple purposes per phase are needed.

3. Characteristics of agile, XP (Ch.3, p.55, 66-68)
* Agile manifesto (four principles)
* XP: pair programming, customer involvement, code always works, etc.

4. Configuration management (Ch.4; CM slides, p.4)
Goals:
* CI identification and storage
* Control the release and change of CIs
* Record and report the status of CIs and change requests
* Verify completeness and consistency of CIs
* Guarantee availability of CIs
How achieved:
* CM planning and procedures
* Configuration Management System (tools)
* Change control board

5. Quality categories in ISO 9126 (Ch.6, p.125...)
* Functionality
* Reliability
* Efficiency
* Usability
* Maintainability
* Portability
Scoring: 5 or 6 of (or near) these, with >= 5 examples, are enough for 3 points; 4 (with >= 3 examples) gets 2 points; 2 or 3 (with >= 2 examples) get 1 point. When giving the characteristics of quality-in-use according to ISO 9126 (effectiveness, productivity, safety, and satisfaction), at most 2 points are given (when done clearly and correctly).

6.
Compare two algorithmic models for cost estimation (Ch.7)
* Two explicit indications (preferably names) of such models
* It must be made clear what types of inputs and outputs these models involve
* The nature of the relationship (e.g. Effort = a + b * Size ^ c)

7. Verification (Ch.3; Ch.9, p.253; Ch.11, p.317; Ch.12, p.393; Ch.13, p.410)
All steps/phases in engineering work (should) involve verification: checking that the work done fulfills pre-set expectations. Note, however, that claiming verification for a testing phase needs a convincing explanation (since testing is itself a verification activity). Requirements engineering, modeling, architectural design, detailed design, and coding all need verification; so does adherence to defined process standards.
Kinds of verification to distinguish:
* reviews (non-execution-based) versus testing (execution-based)
* product- versus process-related verification
* less significant: validation ("do we build the right system?") versus verification ("do we build the system right?")

8. Requirements management (Ch.9, p.247...)
In the narrower sense, according to the book (p.248), it breaks down into:
* req. identification
* req. change management (change control; cf. config. mgt.)
* req. tracing
A broader interpretation could be:
* Elicitation (incl. domain engineering/analysis, prototyping)
* Specification (documentation, and management as above)
* Verification
* Negotiation

9. Use case/test case (Ch.10, p.286; Ch.13; RE slides)
Differences:
* Use cases are generic; solely focus on user-system interaction (not on the detailed i/o relationship or internal operation); concern a primary/comprehensive user task (typically covering multiple atomic requirements); concern the system as a whole; can be understood by non-specialists; do not require an executable product to be used; are defined in the requirements phase.
* Test cases are specific/concrete; check operational details (e.g. correctness of the i/o relationship); focus on requirements in isolation; can concern modules within a larger system; serve to detect defects; require (part of) a product to be run; are defined (in detail) in the design/coding phase.
Common: both concern observable functionality (though testing could address other qualities as well, esp. efficiency); both serve to define quality expectations; both involve a sequence of steps; both can serve to drive design (cf. TDD) and do not require an executable product to be useful (viz. for design).
Difference (in addition to the above): test cases can be derived from use cases, but not the other way round.

10. UML diagrams (Ch.10, pp.274...)
* Use Case Diagram in Req. Eng.
* Class Diagram for the domain/conceptual model in Req. Eng.
* Component, Deployment, and Package Diagrams in Architecture
* Class, Sequence, and State Machine Diagrams in Arch. and/or Detailed Design
* Etc., etc.
N.B. There are 13 diagram types in UML 2 (see the book; 14 in UML 2.4).
Scoring: just mentioning Use Case and Class Diagrams is not sufficient (<= 1 point); just mentioning Req. Eng. and "development" or "implementation" is not enough; >= 5 diagram types applied in >= 3 different activities gives 3 points.

11. Architectural viewpoints (Ch.11, pp.298..., esp. 302)
E.g. Kruchten 4+1: Logical, Implementation, Process, Deployment, plus (architecturally relevant) Use Cases.
Alternatives: module viewpoints, component-and-connector viewpoints, security viewpoint, etc.
Scoring: 3 diverse or 4 (possibly closely related) viewpoints receive 2 points; 1 or 2 (well-described) viewpoints receive 1 point.

12. Pipes-and-filters architectural style (Ch.11, p.313)
A description needs to be organized according to the template presented in the book for describing architectural styles: problem, context, solution (with system model, components, connectors, control structure), and optionally variants and examples.

13. Functional decomposition (Ch.12, pp.353...)
Design technique (classical), focusing on functionality.
* A form of divide & conquer: decompose a function into sub-functions, each addressing a subproblem, and integrate/compose the sub-functions into higher-level functions
* Hierarchic (recursive)
Disadvantages:
* Modern software does not offer one top-level function to decompose
* Forces important (architectural) decisions to be based on functionality, rather than on other qualities
* Forces top-level decisions before enough details are known
* The resulting architecture is not stable, making it harder to evolve the software without major changes to the architecture
* Top-down decomposition: the danger is lack of cohesion, because similar subproblems may recur in different decomposition branches
* Bottom-up composition: the danger is high coupling
A better basis: data (as opposed to functionality/operations), or processes; in OO this results in classes.

14. TDD (Ch.13, pp.421..., esp. p.422)
* Design test cases first (these fail); they are an operational proxy for the requirements
* Incremental: design a(nother) function/method, test it, repeat
* Automated
* Designing the test cases forces one to study the requirements and makes the developer aware of issues that need to be addressed in the design of the code
* There is immediate feedback when you are done coding a function; without TDD you need to interrupt the process for the specification/design of test cases
* Regression testing is always possible when making (later) changes

15. Types of maintenance activities (Ch.1, p.15; Ch.14, p.468)
* Corrective: correct defects
* Adaptive: adapt to a changed environment
* Perfective: incorporate new/changed requirements
* Preventive: improve maintainability without affecting other qualities
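Appendix: the two final-grade algorithms from the grading scheme at the top can be sketched as follows. Python is used for illustration; rounding ties upward (half-up) is an assumption, since the scheme only says "round to the nearest integer".

```python
import math


def round_half_up(x):
    # "Round to the nearest integer"; rounding ties upward is an
    # assumption, since the scheme states no tie-breaking rule.
    return math.floor(x + 0.5)


def original_grade(scores):
    # Original algorithm: divide the total score by 4.5 and round.
    return round_half_up(sum(scores) / 4.5)


def adjusted_grade(scores):
    # Adjusted algorithm: drop the question with the lowest score,
    # divide the remaining total by 4.2, and round.
    remaining = sorted(scores)[1:]
    return round_half_up(sum(remaining) / 4.2)
```

A perfect exam, `[3] * 15`, yields grade 10 under both algorithms: 45 / 4.5 = 10, and (after dropping one question) 42 / 4.2 = 10.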
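The algorithmic cost-estimation models of question 6 share the general shape Effort = a + b * Size ^ c. A minimal sketch; the coefficient values below are illustrative placeholders, not values calibrated on project data:

```python
def estimate_effort(size, a=0.0, b=3.0, c=1.12):
    # Algorithmic model of the form Effort = a + b * Size ^ c.
    # Input: an estimated size (e.g. in KLOC or function points);
    # output: an effort estimate (e.g. in person-months).
    # a, b, c are placeholders here; real models (e.g. COCOMO) fit
    # them to historical project data.
    return a + b * size ** c
```

With c > 1 the model is superlinear: doubling the size more than doubles the estimated effort. Comparing two such models then amounts to comparing their inputs (size measure), outputs (effort unit), and how their coefficients are calibrated.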
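The TDD cycle of question 14 (design the test first, watch it fail, then code until it passes) can be illustrated with a toy example; the `leap_year` function is invented for this sketch and does not come from the book:

```python
# Step 1 (red): write the test first. At this point it fails,
# because leap_year does not exist yet.
def test_leap_year():
    assert leap_year(2012) is True    # divisible by 4
    assert leap_year(1900) is False   # century years are not leap years...
    assert leap_year(2000) is True    # ...unless divisible by 400
    assert leap_year(2011) is False

# Step 2 (green): write just enough code to make the test pass.
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 3: run the test; it now passes, and it stays available for
# regression testing after later changes.
test_leap_year()
```

Note how writing the test first forces a decision about the century rule before any code exists, which is exactly the "operational proxy for the requirements" point above.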