Exercise Solutions (Unrestricted)
Both kinds of engineering began with ad-hoc practices that, as things scaled up and as people's willingness to pay went down, eventually had to become standardized; and with ad-hoc materials that, for the same reasons, eventually had to become standardized and componentized.
A good bridge, building or DVD player will have arisen from a well-understood process that involved requirements elicitation, analysis of something that had prior existence, and technical design. Many of us continue to believe that similar elements are required in a software process.
However, the "soft" of software also makes it rather different. We can, indeed seemingly must, use an iterative or spiral development process rather than a linear or "waterfall" one. We can, and should, wherever possible use incremental development and delivery rather than "big bang" delivery. And we must build today something that can be changed tomorrow without mishap.
One very significant difference is our inability to use our senses on software. It is much more difficult to understand what we must write, or what we have written; we have misconceptions because forming a conception is so difficult. In the machine there is a pattern of billions of bits that we must get a handle on, yet we can't see them, hear them, feel them or smell them.
Many of the systems in other engineering disciplines are continuous: a small change has a proportionate result. Changing one bit of a software system, however, can have an enormous and difficult-to-predict result. Software systems are non-continuous systems.
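The non-continuity shows up even at the smallest scale. As a toy Java illustration of my own (nothing here comes from the book): flipping a single bit of the 64-bit representation of the number 1.0 doesn't nudge the value slightly, it turns it into infinity.

```java
// A toy illustration of software's non-continuity: flipping one bit
// of the IEEE 754 representation of the double 1.0 turns it into
// positive infinity -- the smallest possible change to the stored
// pattern, with a wildly disproportionate result.
public class OneBit {
    public static void main(String[] args) {
        double x = 1.0;
        long bits = Double.doubleToLongBits(x);                // 0x3FF0000000000000
        double y = Double.longBitsToDouble(bits ^ (1L << 62)); // flip one exponent bit
        System.out.println(x + " -> " + y);                    // prints "1.0 -> Infinity"
    }
}
```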
Perhaps it is that almost unique characteristic of software -- that it can't be sensed -- that is responsible for another difference: unwillingness to pay what is required for a successful product. Now, of course, skimping and underfunding occur in many areas of engineering, but they seem to be particularly bad in software engineering. Just because almost anyone can write a program, software project managers seem to think that software should be cheap and that the only reason it isn't is that, so far, they've failed to find the silver bullet that will make it so.
Software engineering is a less mature discipline than most other kinds of engineering. We have lots of amateurs, rather than professionals, producing software systems. Estimating "blue books" are not available. Standards for technology and documentation are still evolving rapidly, and although we might expect the documentation standards to stabilize we can't expect it of the technology (and, quite possibly, the rapidly changing technology means that the documentation standards will never be able to settle down).
In software, the classic interface metaphor is the desktop. The book mentioned sound recording and mixing software that mimics a mixing desk and an effects rack. There have been database systems that mimic record cards. The spreadsheet mimics the accountant's squared paper, pencil, eraser and calculator. Would the drag/drop mechanism count as a metaphor? I think so.
Outside of software, chemical plant and transport control systems have layouts that can easily be related to (and respect the topology of) the routes of the pipes, the tracks, the roads, etc.; and, for example, valve controls look like valves and are presented in position, on the representation of the pipe in question.
In an aircraft the attitude and turn-and-bank indicators all present fairly direct representations of the aircraft, the horizon, etc.
[Screen widgets like thermometers, spinners and dials are, perhaps, meta-metaphors: metaphors for abstractions or for other metaphors. A real speedometer, in a car say, is an abstraction of the notion of speed (and speed is a pretty abstract notion -- most people turn out to have a better grasp of calculus than they think they do [and I love the observation that President Nixon is said to have used a third-order derivative once: the rate of increase of inflation is slowing down]) and interface designers find it interesting to ponder why a needle-and-dial speedometer turned out to be vastly preferable to a digital speedometer in a car.]
A miniature doesn't necessarily use an alternative medium as a model typically tends to do. A bonsai tree is a tree; it's just unusually small. A miniature train is simply a smaller train. A model train, on the other hand, isn't really a train. A model makes the communication and exploration of something more convenient and cost-effective by representing just those things that are considered relevant [remember: abstraction is ignoring as much as you possibly can] and representing them in a cheaper, more tractable, etc., medium than the original. A toddler's wooden train is a very abstract representation of a train, but it's fun, cheap and, to the toddler's imagination, an entirely adequate representation of a train. A program listing or a UML diagram isn't just a smaller program or a smaller subject matter; instead they are more convenient abstractions and representations.
A model contains no inaccuracies or falsehoods (hopefully). A prototype [and, because of the political and social problems with that word in software circles, the book recommended that you use the term "mock-up" instead] almost certainly has parts that are necessary for it to function and fulfil its purpose, but that should have no place in a solution; or parts that represent nothing in the subject or solution domains, and that aren't there directly to help communication, reasoning, etc.
I thought of a wind-tunnel model and a crash-test dummy. A vehicle model suitable for aerodynamic tests in a wind tunnel wouldn't represent the engine very faithfully at all, nor would it represent the interior design; it would, however, faithfully (very accurately in fact) represent the external shape. A crash-test dummy wouldn't have correspondence with the coloring, the chemistry or the electro-neural systems of a real animal, but it would faithfully represent shape, mobility and stiffness, mass and density distribution.
Some older approaches considered that not only did subject matter objects (or entities, as the book terms them) have properties or attributes -- which is fairly incontrovertible -- but also that those attributes guided developers to the data that the counterpart solution objects would hold (instance variables (data members)) -- which can't really be correct. This is because, in today's approaches, the generally held belief is that solution object instance data will be private and should not be directly predicted by anything the analysis discovers; rather than instance variables, the analysis entities' attributes will guide us to object instance query services -- methods that make a return and that don't change the state of the responding object instance. If a query service happens to be implemented by reading a stored instance variable, well, that's coincidence in version v of the class, and it might well be implemented differently in version v+1 of the class.
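That coincidence can be sketched in Java. The Account entity and its names below are my own invention, not from the book: the balance() query service happens to read a stored instance variable in version 1 and is computed from a transaction history in version 2, and the clients of balance() can't tell the difference.

```java
import java.util.ArrayList;
import java.util.List;

// Version 1 of a hypothetical Account: the balance() query service
// happens to be implemented by reading a stored instance variable.
class AccountV1 {
    private long balancePence;                  // private instance data

    AccountV1(long openingPence) { balancePence = openingPence; }

    long balance() { return balancePence; }     // query: returns a value, changes no state

    void deposit(long pence) { balancePence += pence; }
}

// Version 2: identical interface, but balance() is now computed from
// a stored transaction history. Clients are entirely unaffected.
class AccountV2 {
    private final List<Long> transactions = new ArrayList<>();

    AccountV2(long openingPence) { transactions.add(openingPence); }

    long balance() {                            // same query, different implementation
        long sum = 0;
        for (long t : transactions) sum += t;
        return sum;
    }

    void deposit(long pence) { transactions.add(pence); }
}
```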
Developers might assume that analysis entities give us guidance as to what the classes will be like, rather than guiding us as to what the instances ought to be like. Class choices are heavily influenced by purely technical issues. We should establish the nature of the needed object instances and then let that, plus the technical issues, guide us as to the classes required.
Developers might limit themselves to only reasoning with, and only documenting and specifying with, structural diagrams ("class" diagrams). A system development in which no instance diagrams (e.g. sequence diagrams, or object interaction diagrams) were done would be unlikely to be a successful system development.
Designers with a superficial knowledge of object technology might be led to design and specify the concrete classes or the abstract base classes (abstract superclasses) before designing the types (the interfaces or the purely abstract base classes). In outside-in design, as the book terms the approach it believes to be the correct approach, we care about the types the objects manifest before we care about designing the objects' internal mechanisms.
Programmers might be tempted to use concrete classes as types (types of variables, types of parameter, types of returns) which isn't really going to get the best from object-orientation. Types should tend towards being more abstract than concrete classes.
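A small Java sketch of that ordering, with names invented by me throughout: the type (a pure interface) is designed first, the concrete classes afterwards, and variables, parameters and returns are typed with the interface, never with a concrete class.

```java
import java.util.List;

// Outside-in: the type comes first -- a pure interface describing
// only what clients will rely on.
interface Valuable {
    long valueInPence();                        // query service
}

// Concrete classes are designed afterwards, to manifest the type.
class Coin implements Valuable {
    private final long pence;
    Coin(long pence) { this.pence = pence; }
    public long valueInPence() { return pence; }
}

class Voucher implements Valuable {
    private final long pounds;
    Voucher(long pounds) { this.pounds = pounds; }
    public long valueInPence() { return pounds * 100; }
}

// Clients use the abstract type, not Coin or Voucher, so new kinds
// of Valuable can be added without touching this code.
class Till {
    static long total(List<Valuable> items) {
        long sum = 0;
        for (Valuable v : items) sum += v.valueInPence();
        return sum;
    }
}
```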
Form a judgment as to whether those classes were designed from the inside out -- where version 1’s instance variables drove the design of the interface -- or designed from the outside in -- where the interface required by the clients drove the methods and variables of version 1’s implementation.
I'm thinking of a couple of classes I have seen, like an Invoice class and a Student class. The Invoice class stored number, date, net amount, tax and gross amount; and the interface had a "get" and a "set" method (member function) for each of these. This was OK (although my students are divided fifty-fifty on whether or not the names of the methods should actually include the prefixes "get" and "set"). The services that an invoice should deliver are indeed remembering and divulging its number, remembering and divulging its amounts, and remembering and divulging its date. And of course it's quite reasonable to implement those abilities by storing those items of information. In other words the invoice object was, righteously, designed from the outside in.
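The righteous Invoice might be sketched in Java like this (the names and types are my assumptions, not the actual class I saw): an object that stores, coincidentally, exactly the items it must remember and divulge.

```java
import java.time.LocalDate;

// Hypothetical version 1 of the Invoice class described above: each
// piece of information the invoice must remember and divulge happens
// to be stored in a private instance variable.
class InvoiceV1 {
    private String number;
    private LocalDate date;
    private long netPence, taxPence, grossPence;

    // "get"/"set" pairs: the remembering and divulging services.
    String getNumber() { return number; }
    void setNumber(String n) { number = n; }

    LocalDate getDate() { return date; }
    void setDate(LocalDate d) { date = d; }

    long getNet() { return netPence; }
    void setNet(long p) { netPence = p; }

    long getTax() { return taxPence; }
    void setTax(long p) { taxPence = p; }

    long getGross() { return grossPence; }
    void setGross(long p) { grossPence = p; }
}
```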
The Student class had many more instance variables, each one of which was "given out" via "get"s and "set"s, for example, getMySqlMigrationDate() and setMySqlMigrationDate(). Given that there was no meaning whatsoever for the second of these, I concluded that the design was inside-out and thus poor. The version 1 date had been the only driver of the object's interface design -- not good.
Although one can be suspicious of classes with large numbers of get/set method pairs, one cannot say whether they are poorly designed or not without knowing more.
The key test of righteousness is that a technical change to the stored data -- the instance variables (data members) -- doesn't necessarily imply a change to the interface. If we decided, for some bizarre reason, to store the day, month and year of the date in separate instance variables in the next version of the Invoice class, we would probably still keep the original two "date" methods in the interface. A developer who automatically gets rid of the original two methods and adds six new methods like getDay(), setDay(), getMonth(), etc. is really pining for Pascal or C, and is just producing thinly-disguised, overly-complicated data structures.
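That stability can be sketched in Java (names invented by me, and java.time.LocalDate assumed as the date type): in this "bizarre" next version the storage changes to three separate variables, yet the original two date methods survive unchanged, so clients are untouched.

```java
import java.time.LocalDate;

// Hypothetical next version of an Invoice: the date is now stored as
// three separate instance variables, but the interface is unchanged
// -- the two original "date" methods remain, and no getDay(),
// setMonth(), etc. are exposed.
class InvoiceV2 {
    private int day, month, year;   // changed storage: private, nobody's business

    LocalDate getDate() { return LocalDate.of(year, month, day); }

    void setDate(LocalDate d) {
        day = d.getDayOfMonth();
        month = d.getMonthValue();
        year = d.getYear();
    }
}
```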
Then there are database systems that have decided to present their contents via objects (whether home-grown or via something like EJB). Here, inevitably, one will find a much greater degree of match between the interface and the stored data.
The examples that I had in mind involve Eli Whitney, Sir Joseph Whitworth and William Sellers. Eli Whitney had claimed to manufacture standardized musket parts, as a unique selling point for one of the very first government arms contracts, but had probably faked the whole thing, thus beginning a grand tradition of the military-industrial complex that continues to this day. Joseph Whitworth described a method for standardizing screw threads in an 1841 paper titled "A uniform system of screw-threads". An American, William Sellers, was convinced that standardized screw threads were a good thing but that Whitworth's standard was technically inferior to his own, and convinced American manufacturing to start using his modification of the Englishman's standard.
Although questioned at the time, standards quickly became important to all industry. Quickfit glassware, track gauges and plumbing fittings are just some of the countless examples of standards we rely on today. Interestingly, in the 1980s, the computer industry was not convinced that standards were a good thing -- and open standards were viewed even more suspiciously. Standardization did, however, become accepted.
A standard should be widely accepted. As long as it's widely accepted, it can survive as a de facto standard, although it typically does help if eventually it becomes a de jure standard. A standard must be easily adhered to or it won't be adopted and will be a standard in name only. A standard should be as simple as possible (loose coupling), restrict itself to describing what others will rely on and should avoid describing any implementation detail (information hiding).
With standards we can encourage componentization, increase reuse, increase pluggability, decrease costs, ease maintenance and ease division of effort.
Almost any answer is possible for this question. You may have had a spiral process that worked fine, while all the while management thought it was a successful use of the waterfall model. Or they might have had insight and realized that they were using a very coarse-grained linear control model while the development was following an unmanaged iterative model. You may have worked on a project where someone had become enthused with the spiral model and insisted on seven managed macro-spirals, and perhaps on managing the micro-spirals as well; and probably found that that degree of control strangled the project.
I suspect that functionally-driven development processes for object-oriented software systems can produce fine products, but that those products don't make it past version 3 or 4. I suspect that a successful and long-lived object-oriented software system will inevitably have a good, strong object architecture.