??? 07/16/09 16:36
#167430 - Complexity - not size - affects development time/cost
Responding to: ???'s previous message
You still live in a world where the time to implement something is proportional to the number of lines of code, or possibly to the number of kB of code space.
In real life, you can have a block of code that is just a couple of thousand lines, requiring 10-20 kB of code space, but that contains one or more man-years of know-how. And you can have 100 lines of code that take two weeks to perfect.

From your posts, you are a coder. You just follow a list: print x, set y, wait until z. A lot of people are not coders but developers, and there is far more to developing than just coding. You may need to translate a seemingly trivial requirement into a very complex algorithm, spending weeks just optimizing parameters.

The next thing here is that testing all branches of a 1000-line C program, including full-range analysis of variables, is way easier than performing the same operation on the corresponding assembler program. That is one of the reasons why many HLLs put such focus on locality of reference, encapsulation/namespaces and data types. Your programs seem to contain tables of data to just send out. But what if every single line is real code, making real computations or decisions?

If I produce 10k units of a product, and can save $1 by selecting a smaller processor, then I will only gain something if I can prove that I will not reach $10k in extra development time, maintenance, support and warranties. If you count $100/hour, these $10k are only 100 hours (the arithmetic is worked out in the first sketch below). Exactly how many lines of assembler code do you manage to document in 3 months, if we ignore the coding? How do you document the testing?

I took a look at some projects:

- 9k source lines of C, >1000 conditionals, 270 loops. Resulted in 60 kB of flash on a PIC. About 6.5 bytes/source line.
- 16k source lines of C, >2600 conditionals, 600 loops. Resulted in 80 kB of flash on an ARM. About 5 bytes/source line.
- 15k source lines of C, >1900 conditionals, 625 loops. Resulted in 97 kB of flash on an ARM. About 6.5 bytes/source line.
- 4800 source lines of assembler, about 350 conditionals/loops. Resulted in 4760 bytes of flash on an AVR. About 1 byte/source line.

If I should evaluate the complexity of the above projects, I would say that the comparative complexity/functionality scales quite linearly with the number of source lines. The first project in the list is about twice as complex (amount of functionality, documentation, ...) as the last. All of the above took 4-6 months each, with the majority of the time spent documenting, cross-correlating requirements with code, modifying code and documentation based on customer feedback, creating test reports, ...

What they have in common is a quite high percentage of conditionals. Of 15k C lines, about 2600 lines contain "if", "else" or "case", and there are about 600 loops. Each and every one of them represents a corner case that has to be explicitly tested and documented to catch off-by-one errors, worst-case running times, ... (the second sketch below shows what I mean).

The interesting thing here is that the 5k of assembler for the AVR chip has less than 30% of the features of one of the C projects mentioned above, despite having similar development time/costs. Even more interesting? The ARM chip was cheaper. And the C code (which has about three times the functionality) has had maybe 40% of it reused by a totally different product, and has been spun off to a different platform with other hardware. The AVR assembler solution has only consumed 60% of the flash and 70% of the RAM. But no one is interested in spending the time and money trying to teach that product some of the tricks the newer product can do.
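To make the break-even reasoning above explicit, here is a trivial C sketch that just redoes the arithmetic. The figures are the ones I quoted; the program itself is only an illustration.

#include <stdio.h>

/* Worked version of the break-even argument above. All figures are
   the ones quoted in the post; the program is purely illustrative. */
int main(void)
{
    const double units           = 10000.0;  /* production volume        */
    const double saving_per_unit = 1.0;      /* $ saved on smaller chip  */
    const double hourly_rate     = 100.0;    /* $ per engineering hour   */

    double total_saving = units * saving_per_unit;    /* $10,000 */
    double breakeven_h  = total_saving / hourly_rate; /* 100 h   */

    printf("Total BOM saving : $%.0f\n", total_saving);
    printf("Break-even budget: %.0f engineering hours\n", breakeven_h);
    return 0;
}

So any extra development, documentation or maintenance effort beyond 100 hours eats the whole hardware saving.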
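And to show what I mean by every conditional and loop being a corner case, here is a made-up C fragment. The function names and limits are invented for the example - they are not from any of the projects above.

#include <stdint.h>

/* Hypothetical function: even a trivial range clamp carries four
   boundary cases that must be explicitly tested and documented. */
uint8_t clamp_to_byte(int16_t sample)
{
    if (sample < 0)       /* test at sample == -1 and sample == 0    */
        return 0;
    if (sample > 255)     /* test at sample == 255 and sample == 256 */
        return 255;
    return (uint8_t)sample;
}

/* A loop bound is another corner case: exercise it with len == 0,
   len == 1 and the maximum expected len to catch off-by-one errors
   and worst-case running time. */
uint16_t checksum8(const uint8_t *buf, uint16_t len)
{
    uint16_t sum = 0;
    for (uint16_t i = 0; i < len; i++)  /* "i <= len" would overrun */
        sum += buf[i];
    return sum;
}

Two tiny functions, and already half a dozen boundary values to put in the test report. Now multiply by 2600 conditionals and 600 loops.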
The ARM implementation is at 30% flash, which means that it can store two complete images (allowing safe over-the-air updates) and still let the application grow by over 60% (a rough sketch of such a dual-image layout ends this post).

But the total cost for a product can't just be measured as the number of kB of flash consumed. On one hand, the price of a chip does not follow the size of its flash or RAM, unless you compare two chips within the same family. On the other hand, the maintenance cost of a product can represent several times the cost of developing the initial software.

Don't say the maintenance cost is because of bad programmers producing buggy code. In real life, you often get requests to modify a product: extend it with new features, create a hardware variant with a different number of I/O pins, or add a new interface. Maybe a move to a newer processor to reduce power consumption, or an upgrade of a 10 Mbit Ethernet interface to 100 Mbit. A lamp timer may be possible to produce in 100k units without ever touching the software, but a lot of products require constant updates because of unknown external factors.

A large number of products can't just be seen as commodity products, where all manufacturers have identical offerings and where price, warranty or the color of the box is the deciding factor. Quite a number of products are technological races - who is first with GPRS? EDGE? 3G? HSUPA? Who is best at responding to specific customer requirements, releasing adaptations? Who has the best availability on installed products? Who can best show that every dollar spent buying a product will result in x dollars of savings each year?

If a customer can save $1k each year, they will not care whether the product they have to buy costs $400 or $450. But they will care about the delivery date - how soon they can get their 1k units adapted to their specific requirements, since the savings will not start ticking until the new equipment is installed. And they will care about reputation. And about reference customers.
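For anyone wondering what storing two complete images buys you, here is a rough sketch of how a dual-slot boot selection could look. The addresses, the header layout and the crc_ok() stub are pure assumptions for the example - not the actual product's layout.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical dual-image flash layout for safe over-the-air updates.
   A new image is written to the inactive slot; only if it verifies is
   it booted, so a failed update can never brick the unit. */
#define SLOT_A_BASE  0x08010000u   /* assumed flash address of image A */
#define SLOT_B_BASE  0x08048000u   /* assumed flash address of image B */
#define IMAGE_MAGIC  0xB007AB1Eu   /* invented valid-image marker      */

struct image_header {
    uint32_t magic;     /* valid-image marker       */
    uint32_t version;   /* monotonically increasing */
    uint32_t crc32;     /* checked before booting   */
};

/* Placeholder for a real integrity check over the image body. */
static bool crc_ok(const struct image_header *h) { (void)h; return true; }

/* Pick the newest slot whose header is valid; fall back to the other. */
const struct image_header *select_boot_image(void)
{
    const struct image_header *a = (const struct image_header *)SLOT_A_BASE;
    const struct image_header *b = (const struct image_header *)SLOT_B_BASE;

    bool a_valid = (a->magic == IMAGE_MAGIC) && crc_ok(a);
    bool b_valid = (b->magic == IMAGE_MAGIC) && crc_ok(b);

    if (a_valid && b_valid)
        return (a->version >= b->version) ? a : b;
    if (a_valid) return a;
    if (b_valid) return b;
    return a;  /* nothing valid: a recovery path would go here */
}

That kind of safety margin is exactly what you give up when a design is squeezed into the smallest possible flash.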