??? 08/12/11 20:07
#183342 - I could, but don't feel it's a good idea
Responding to: ???'s previous message
In some situations, I could. With a full Linux platform, I could mount a remote disk with the compiler, debugger, source code and everything else I might want.
But I don't feel it's a good idea. It's better to cross-compile the source on a PC. And it's better to just send any debugging information over the network to the PC, leaving almost 100% of the embedded CPU capacity (and RAM) to the real embedded application when testing on the real hardware. That matters because the main reason for testing on the real hardware is to see the specific speed limitations, RAM limitations and response times, instead of the almost infinite resources the PC has.

Besides, the embedded systems I work with normally don't have a graphics card. So what interface should be abused for gdb to present its results? It's much easier to just send the request for a memory dump over the network and have the PC convert it to hex and present it (using a graphical gdb front-end) on a high-resolution monitor.

And since an embedded system running Linux is so similar to a PC running Linux, I can do maybe 90 to 99.9% of the debugging on a real PC, if I want or need to. It's trivial to create a test program on the PC that fakes reception of CAN data, or GPS data. And the PC can tunnel serial data for a GPRS/3G/CDMA/... modem to the real hardware, if I want the PC to have the same bandwidth limitations and run the identical code for handling restart of the modem, restart of the session etc.

Richard's approach means he must have a custom device with extra capabilities compared to the tens of thousands of units you may want to ship to customers. Either that, or all shipped units must have the over-capacity to not only run the service but also concurrently run a compiler, debugger, ... Debugging with printouts, or with a network-connected gdb on the PC, means that any customer terminal can be used for testing a program. That is quite important if the equipment is physically installed in a car, or in a control cabinet for traffic-light control or similar.
Being able to use any terminal for debugging means that real external stimuli (including the results of kA loads being connected/disconnected) can be present when testing why a real installation behaves differently from the lab-bench setup.

When you get to networked Linux-class platforms, there are also many more things that can be done remotely: remote updates of programs, remote reconfiguration, the possibility to turn on data logging and then request retrieval of what a system has seen or done, even during times when the system has been in low-power mode or in radio shadow.

And using standard "PC-class" tools, and trying to write as large a percentage of the code as possible to behave as if run on a standard Linux-based PC, means that a larger percentage of the system can be moved directly to other hardware if the need arises. Maybe PPC -> ARM. Maybe a terminal with an LVDS interface for LCD panels. Maybe a terminal with touch-screen support. Maybe a terminal with more CAN interfaces. Maybe a terminal with different networking layers.

Moving the development tools onto the platform may give a bit of tunnel vision, with everything circling around that specific hardware. And that despite it being well known that real hardware tends to be replaced quite often, while many more complex systems have software that must live through many hardware generations, for the simple reason that the full concept represents many years of work by multiple developers.