#179768 - There's a lot of analysis involved - about money
Richard said:
[...] I can believe quite easily that it comes largely from lack of focus on simple measures of efficiency.

Most of the reasons are:

- The tool vendors want us to use their new favourite interpreted, byte-coded or JIT-compiled language.

- The tool vendors or OS vendors try to get us to use their new favourite object-based, super-know-everything, general-purpose framework (which also happens to be written in the above-mentioned favourite language).

- The academic world is busy telling us how dangerous malloc()/free() or new/delete is, so they want us to use a language with automatic garbage collection. Lots of dirty old memory blocks pile up until the garbage man comes.

- Since the academic world likes small, trivial, general rules, we get languages that can't create combined data structures without sub-allocating every non-native data type. So you get a situation where you can't create an array of 100 character arrays, but instead need an array of 100 pointers to 100 allocated strings. And you can't get a struct containing a string or another struct or similar - just integers, floats or pointers to structs or strings. A huge number of memory allocations. But don't worry - the garbage team will some day come and collect... (see the first code sketch below).

- Since memory is cheap and disks are slow, we add a GUI and read all our large full-color background bitmaps into memory. Oops - did we need to swap them out to the swap partition? All following the common academic suggestions - you don't care about loading anything. You just give the name of that huge bitmap to the constructor of a magic object. No one saw any memory allocation. So no one wondered why a 1024x768 32-bit image required 3MB of memory after decompressing the JPEG/PNG/... original. But who cares anyway, when we start out with a 1920x1200 32-bit desktop background consuming 9MB (the second sketch below does the arithmetic).

- Liking nice, regular interfaces to everything, that little list of 1000 customers isn't displayed in a user-drawn region of the screen (remember that in GUI mode we really have to think about paint regions). No, that nice GUI framework has a class for a list view, which is a superset of a view. And there is a table-view superset that adds support for columnar data. So now 1000 customers times 5 columns of data means the program calls a nice function to add the line for each customer and then four nice functions to add the four columns of extra data. Suddenly, the program has dynamically allocated 1000 line objects containing 5000 column objects, and each column object contains an allocated text string and potentially an allocated struct with properties to use when displaying the information. Whoops - 5-10MB of data to show the 1000 customers you have in your order stock (the third sketch below counts the allocations).

The problem isn't lazy developers. The problem is that the academic world is good at math. They are good at noticing the price and size of memory, and at noticing that when the clock frequency isn't increasing, the core count is instead. And then we get nice printed books that people read. And if they don't already have a flashy GUI toolkit implemented in a language where practically everything is dynamically allocated, they will get excited about the nice ideas they pick up from the books. So we suddenly get a couple more companies with OSes or frameworks or languages that practically perform a dynamic allocation for every character you write.
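To make the sub-allocation point concrete, here is a minimal C sketch (struct names and sizes are invented for illustration) of data embedded directly in a struct versus every non-native field living in its own heap allocation:

/* One flat block versus "every field is a separate object". */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CUSTOMERS 100

/* Style A: the string lives inside the struct - an array of 100 of these
 * is a single block of memory. */
struct customer_flat {
    int  id;
    char name[32];
};

/* Style B: what an "everything is a reference" language forces on you -
 * each name is a separate allocation reached through a pointer. */
struct customer_boxed {
    int   id;
    char *name;
};

int main(void)
{
    struct customer_flat  *flat  = malloc(CUSTOMERS * sizeof *flat);   /* 1 allocation */
    struct customer_boxed *boxed = malloc(CUSTOMERS * sizeof *boxed);  /* 1 allocation */

    for (int i = 0; i < CUSTOMERS; i++)
        boxed[i].name = malloc(32);                                    /* +100 allocations */

    strcpy(flat[0].name,  "First customer");
    strcpy(boxed[0].name, "First customer");

    printf("flat: 1 allocation, boxed: %d allocations\n", CUSTOMERS + 1);

    for (int i = 0; i < CUSTOMERS; i++)
        free(boxed[i].name);
    free(boxed);
    free(flat);
    return 0;
}

One hundred records cost a single allocation in the first layout and 101 in the second - plus a pointer per record and worse cache locality.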
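The bitmap numbers, as a quick back-of-the-envelope check (plain C, nothing more than the multiplication spelled out):

/* A decompressed 32-bit image needs width * height * 4 bytes,
 * no matter how small the JPEG/PNG file on disk was. */
#include <stdio.h>

static void frame_size(const char *what, long w, long h)
{
    long bytes = w * h * 4;                 /* 32 bits = 4 bytes per pixel */
    printf("%-20s %ldx%ld -> %.1f MB\n", what, w, h, bytes / (1024.0 * 1024.0));
}

int main(void)
{
    frame_size("background bitmap", 1024,  768);   /* ~3.0 MB */
    frame_size("desktop background", 1920, 1200);  /* ~8.8 MB, the "9MB" above */
    return 0;
}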
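And a rough count of how many separate heap objects the 1000-customer table view ends up creating (the per-cell breakdown is my guess at a typical framework, not any specific toolkit):

/* Counting the allocations behind a "nice" column-based list view. */
#include <stdio.h>

#define ROWS 1000   /* customers */
#define COLS 5      /* columns per customer */

int main(void)
{
    long line_objects   = ROWS;          /* one per visible line              */
    long column_objects = ROWS * COLS;   /* one per cell                      */
    long cell_strings   = ROWS * COLS;   /* one allocated text per cell       */
    long cell_props     = ROWS * COLS;   /* display-property structs, worst case */
    long total = line_objects + column_objects + cell_strings + cell_props;

    printf("heap allocations just to show the list: %ld\n", total);  /* 16000 */

    /* Each of those objects carries framework bookkeeping plus allocator
     * overhead, which is how a few kilobytes of actual customer text can
     * balloon into the megabytes mentioned above. */
    return 0;
}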
And since the framework vendors think it's important that programming be trivial, and they realize it's hard to keep track of all the events an OS - or a framework object - may generate, the easiest choice is to flood-fill your framework with events. For many years, you could identify programs written with the Microsoft Foundation Classes (MFC) by looking at the CPU meter while moving the mouse over the application. If the CPU load spiked (often up to 100% on the older single-core, non-threaded machines where a single application really could consume 100% of the total capacity), then the probability was high that it was an MFC application. And when Microsoft sent out a program where the CPU load didn't spike, you could be pretty sure that the original source code was so old that they hadn't bothered to rewrite it to use MFC. (The last sketch at the end of this post is a toy model of this.)

Events and objects and dynamic memory make it very quick to program. Sometimes too quick. Especially since the managers know that the company must use the coolest products. And especially since no one wonders where the processor capacity or the memory went - after all, all programs behave the same.

But back to square one. How do you compete if your competitor uses tools that let them "rapid-prototype" the program in 3 months, while your tools need 18 months to produce an application that does the same at a tenth of the memory consumption? How much market share did you gain during the 15 months you didn't have a product on the market? And how much market share do you get when your lean and mean application finally reaches the market, but the competitor has used some of the money from those 15 months of sales to add a couple of new "killer" features? Especially since 40% of the customer base has upgraded their computers during this time, so the extra memory consumption doesn't really matter to them (until they upgrade that application one or two generations further - which is a completely different issue).

The commercial world is about making money. Not about making the "best" product after the market has already been swamped by competitors. A private citizen can have the goal of making the "best" product. The people with the money will normally make sure that all "real" companies instead focus on the route that gives the best return on their investments. So a consultant or an employee just has to optimize for corporate economic gain - not for the cheapest or most resource-constrained product. Or said consultant or employee will suddenly be "available".
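Here is the toy model of the event-flood pattern mentioned above (not real MFC code - the event and widget counts are invented, just to show how per-event work multiplies):

/* Pretend the OS delivers one mouse-move event per pixel of movement and
 * compare a handler that ignores it with one that reworks the whole view. */
#include <stdio.h>

#define MOVE_EVENTS 5000          /* sweeping the mouse across a window */
#define WIDGETS     2000          /* objects the "view" recomputes      */

static long work_done;

static void lazy_handler(void)  { /* cursor moved - nothing to do */ }

static void eager_handler(void)
{
    /* Recompute layout / repaint everything on every single event. */
    for (int w = 0; w < WIDGETS; w++)
        work_done++;
}

int main(void)
{
    for (int e = 0; e < MOVE_EVENTS; e++)
        lazy_handler();
    printf("lazy handler:  %ld units of work\n", work_done);   /* 0 */

    for (int e = 0; e < MOVE_EVENTS; e++)
        eager_handler();
    printf("eager handler: %ld units of work\n", work_done);   /* 10,000,000 */
    return 0;
}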