??? 02/24/11 14:54 Read: times
#181297 - I don't ever build on target hardware unless target is a PC Responding to: ???'s previous message
I have been working for a number of years with a Linux-based platform.
But I do 100% of my development on a PC running Linux - or on a standard Linux installed in a VMware virtual machine. A large percentage of the code can be debugged on the PC, since a large percentage of Linux applications can be compiled and run on a large set of Linux-supporting platforms. Linux has drivers for most peripherals, in which case I don't need to debug any drivers - I just have the application use similar devices on the PC's Linux. I sometimes create virtual devices on the PC so I can simulate input from the hardware of the real target - for example, a GUI showing all digital inputs/outputs as graphical LEDs and switches, with knobs or sliders for analog values. The GNU debugger gdb can also debug over the network, in which case you can run the debugger on a PC (Windows or Linux) and debug a Linux application running on the Linux target.

Doing the source code management, building, most of the debugging etc. on a PC is way faster, easier, more streamlined, ... A PC can potentially stream 50MB of log data per second from an application while still running the application at the real-time speed required for the intended target hardware. Target hardware doesn't have huge excess capacity, unless I force the customer to buy a device that is seriously overspecced for the intended job.

But all of the above is for developing software for embedded use on a Linux target. Lots of the development is for non-Linux targets, i.e. normal ARM7- or Cortex-class processors. In that case, a typical target may have 32kB of RAM and 256kB of flash, so the programs are built on the PC (the target hardware isn't even capable of running a real tool chain). And the upper-layer logic can be debugged as modules on any PC - Windows or Linux doesn't matter, as long as the code can be compiled for that platform. uint32_t is a 32-bit integer type supported by all standard-compliant C compilers, so code making use of it can be tested anywhere.
The lower-level code that makes use of interrupt handlers, FIFOs, magic SFRs etc. has to be debugged on the target hardware. That is easy to do with a JTAG interface. And once the interrupt-driven, FIFO-using UART code works, I don't need to worry about it anymore. One day to get support for UART0, UART1, UART2, UART3 working and into my "library" for the specific processor. One or more days to add master-side SPI support, or ADC support, etc.

The big difference between an ARM and an 8051 is that the ARM instruction set works so much better with high-level languages. So I can write C code for all devices and implement them as some form of driver layer, and then add the business logic on top. And the mapping between C and the actual hardware will manage to produce a combined binary that will still run (most of the time) only some percent slower than a fully hand-written assembler module where the business logic is glued into the hw-access code for maximum optimization.

So Linux targets are only used when the product needs that type of functionality. Maybe because it is expected to run with multiple Ethernet interfaces and perform firewalling. Maybe it is expected to be able to remotely download new software applications and learn new tricks years after the equipment was installed out in the field, using a GPRS, HSDPA, HSUPA, CDMA, ... modem.

Most of the time, I see ARM chips as building blocks similar to standard 8051 chips, costing maybe $5 and down to less than $1. Which ARM gets used depends on the number of pins, the specific peripherals implemented, the amount of RAM + flash etc. in relation to the specific project. When I can, I prefer to stay in the same family, since that allows me to keep the low-level code almost unchanged.

As a footnote - even for 8051 code, I always look into what functions I can compile and test with a PC compiler, allowing the build machine to perform automatic code testing besides the system testing on the real hardware.
The real hardware is of course (as I see it) the only practical way to exercise the specific timing restrictions.