#187115 - We are careful
04/12/12 17:01 - responding to Richard Erlacher's previous message
Richard Erlacher said:
> How you record your efforts and what you call it is up to you. Nevertheless, we're discussing a major piece of work (Linux) here, involving multiple participants, and, in order to get everyone onto the same page, people have to have it written on the same page.

So, have you visited http://tldp.org/ ? Maybe you could be a bit more specific: what exactly is not documented for Linux but is documented for Windows or for OS X?

> Well, perhaps you can explain how you can "design" something before you define it. I admit that if the effort is performed by one or a very few participants, you can "get by" with meetings and iron out the wrinkles that result from such practice through further discussion, but once you have multiple participants, in multiple locations, at multiple times, all attempting to produce a single work-product, things have to be formalized in writing.

Yes, but projects run that way - attempting what you suggest, with all documentation finished before coding starts and zero redesign during the project because every design issue is already documented down to the level a developer needs - have already been shown either to fail or to end in huge costs and delays. What people actually do is drill down, subdividing the problem into smaller issues, and let the different geographical resources work on the internal specifics of their own piece. This continues until the pieces have reached a size where they can be properly managed. But the individual pieces are still often non-trivial, i.e. they require individual research by the person or team responsible for them. So that person or team has to interleave coding and documentation as they work towards a solution that fulfils the basic requirements. The starting documentation states what problem to solve - not how it should be solved.

> I have difficulty following your rationale. How can you code something that hasn't been completely defined, the requirements for which haven't been analyzed and decomposed into their elements, and for which clear criteria haven't been devised?

If I know the number of objects to sort, the available computing resources and so on, I do not need a document that explicitly prescribes a quick-sort or a heap-sort algorithm. It is enough that the documentation states that up to 10 million entries of about 1 kB each have to be sorted, with the result delivered within x milliseconds of real time, y milliseconds of CPU time, z megabytes of RAM and w disk accesses. The goal is to find the bounding boxes of the different problems - not to micro-manage everything to a crawl.

> How can you rigorously demonstrate that your end-product "works" if you haven't devised a test plan that not only demonstrates that all the requirements are met, but how far they can be pushed before failure occurs?

You don't need a billion pages of blueprints for a car to set up a test plan for it. You do not need to know how the ignition system works to check that the RPM limiter kicks in at the red line, or to check what happens if the motor is run at maximum RPM for extended periods. And we already know that the requirement for a sports car is that it can run at maximum RPM on the track - but will overheat the engine if run at maximum RPM standing still in a parking lot. Next thing - a car manufacturer doesn't need every detail of a car engine to build a car around it. In real work, just as in software engineering, the focus is on information hiding. Encapsulation. Locality of reference. Solve what needs to be solved - don't worry about how the car engine will behave if you try to run it in space or under water.
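To make the encapsulation point concrete, here is a minimal sketch in C - all the names (Engine, engine_set_throttle, ENGINE_REDLINE_RPM) and the 7500 RPM red line are invented for illustration, not taken from any real product. The car code and the check are written against a small header only; the ignition internals stay private and can be redesigned without touching the callers, yet the limiter behaviour can still be verified as a black box.

```c
/* Sketch of information hiding in C: the car (and the test) see only the
 * "header" section; the engine internals below it are free to change.
 * All names and figures here are hypothetical. */
#include <stdio.h>
#include <stdlib.h>

/* ---- engine.h : everything the rest of the car is allowed to know ---- */
typedef struct Engine Engine;               /* opaque type, internals hidden */

Engine *engine_create(void);
void    engine_destroy(Engine *e);
void    engine_set_throttle(Engine *e, double fraction);   /* 0.0 .. 1.0 */
double  engine_rpm(const Engine *e);

#define ENGINE_REDLINE_RPM 7500.0           /* part of the published contract */

/* ---- engine.c : one possible implementation, free to be redesigned ---- */
struct Engine { double rpm; };

Engine *engine_create(void)         { return calloc(1, sizeof(Engine)); }
void    engine_destroy(Engine *e)   { free(e); }
double  engine_rpm(const Engine *e) { return e->rpm; }

void engine_set_throttle(Engine *e, double fraction)
{
    double demand = fraction * 9000.0;              /* raw, unlimited demand */
    e->rpm = demand > ENGINE_REDLINE_RPM            /* rev limiter: the only */
           ? ENGINE_REDLINE_RPM : demand;           /* behaviour promised    */
}

/* ---- car.c : a black-box check written against the header alone ---- */
int main(void)
{
    Engine *e = engine_create();
    engine_set_throttle(e, 1.0);                    /* floor it */
    double rpm = engine_rpm(e);
    printf("rpm at full throttle: %.0f\n", rpm);

    int ok = rpm <= ENGINE_REDLINE_RPM;
    printf(ok ? "OK: limiter holds at the red line\n"
              : "FAIL: rev limiter did not kick in\n");
    engine_destroy(e);
    return ok ? 0 : 1;
}
```

The header is the whole contract: swap in a completely different engine design and neither the car nor the check has to change.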
> How can you predict what will happen to the system when a failure does occur?

Interesting question. Lots of hardware guys do build prototypes and abuse them until they fail. But in your view we shouldn't build any prototypes in software, since we aren't allowed to write a single line of code until we have written the full and complete specification - one that even includes the results of this abuse.

> In fact, how can you define a failure before you completely define what "works" means?

Interesting question, but it has nothing to do with your original statement that no line of code may be written until there is so much documentation that there is zero need for re-engineering. On that view, every system in the world would have to have all its failures predicted and simulated by brain power alone. For some reason, not many companies follow that methodology - I wonder why.
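And to close with the sorting example from above, here is a rough sketch, again in C, of a requirement expressed and checked purely as a bounding box. The figures (one million records and a 2-second CPU budget) and the names (sort_records, N_RECORDS, BUDGET_SEC) are invented stand-ins for the "10 million entries within x milliseconds" style of requirement; the check neither knows nor cares whether the implementation underneath is a quick sort or a heap sort.

```c
/* Black-box check of a "bounding box" requirement: sort N records and stay
 * inside the CPU-time budget.  The algorithm inside sort_records() is
 * deliberately not part of the requirement.  All figures are hypothetical. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N_RECORDS  1000000u      /* documented upper bound on input size */
#define BUDGET_SEC 2.0           /* documented CPU-time budget           */

typedef struct { unsigned key; } Record;  /* payload omitted to keep it small */

static int cmp_record(const void *a, const void *b)
{
    unsigned ka = ((const Record *)a)->key;
    unsigned kb = ((const Record *)b)->key;
    return (ka > kb) - (ka < kb);
}

/* The requirement pins down the observable result and the resource
 * envelope - not whether this is a quick sort, a heap sort or anything else. */
static void sort_records(Record *r, size_t n)
{
    qsort(r, n, sizeof *r, cmp_record);
}

int main(void)
{
    Record *r = malloc(N_RECORDS * sizeof *r);
    if (!r) { fprintf(stderr, "out of memory\n"); return 1; }

    for (size_t i = 0; i < N_RECORDS; i++)
        r[i].key = (unsigned)rand();

    clock_t t0 = clock();
    sort_records(r, N_RECORDS);
    double cpu = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* black-box checks: output ordered, and inside the documented budget */
    int ok = 1;
    for (size_t i = 1; i < N_RECORDS; i++)
        if (r[i - 1].key > r[i].key) { ok = 0; break; }
    if (cpu > BUDGET_SEC) ok = 0;

    printf("%u records, %.3f s CPU (budget %.1f s): %s\n",
           N_RECORDS, cpu, BUDGET_SEC, ok ? "PASS" : "FAIL");
    free(r);
    return ok ? 0 : 1;
}
```

Swap qsort() for a hand-written heap sort and this check stays exactly the same - which is the point: the documentation fixes the envelope, not the algorithm.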