#179275 - who is "you"?
Justin Fontes said:
The sad part is that if you did use the other method of defining like the Keil example code, that would not help if "i" was defined as just char instead of unsigned char.

Who is "you"?

For a << loop, signed and unsigned variables work equally well. For a >> loop, a lot of interesting things can happen, which is also something I mentioned in my earlier post.

A signed 8-bit value 0x80 is negative on a platform with two's-complement arithmetic. So with a compiler that always follows the standard and promotes the operand to int size, the value gets promoted to 0xFF80 before the shift. That means the low 8 bits will shift in a new sign bit and become 0xC0 after the shift, instead of the intended 0x40.

Note here that the Keil compiler is a bit special, since it supports a deviation from standard C by not performing any integer promotion when operating on 8-bit numbers. That saves a lot of time and code size, but it does give different results for some expressions. Without any integer promotion, a signed value 0x80 can be shifted to either 0x40 or 0xC0 depending on choices made by the compiler vendor. Since we have already left the standard by skipping integer promotion, we can't rely on the standard for the decisions the compiler then makes. That is why it is always important to think twice before using >> on signed data, regardless of whether the compiler performs standard integer promotion, and regardless of whether the processor uses two's-complement integers. There is a small sketch at the end of this post showing the difference.

But generally, people should always think twice before using the native data types when programming. stdint.h is a very nice file to take a closer look at. Its data types can then be used as building blocks for creating your own data types.

The type "char" should only be used for a character, an array of characters, or a pointer to character data. And if you are writing PC software, char isn't even good for that, since the program should probably have been written to handle Unicode data instead: the world has more languages than English. But char, short and int are not really good for loop variables, storing state information, etc. They don't have well-defined sizes, and char is sometimes signed and sometimes unsigned. A char may just as well be 9 bits or something else, in which case sizeof(short) may still be 2, meaning that the type "short" holds 18 bits.

For an 8051, it's easier to make assumptions about the data types: a 9-bit char would not be very compatible with an 8-bit port or an 8-bit accumulator. But even when we do know a bit about the target hardware, we can't always know whether some of the source code will need to be ported to other hardware. So we should always try to keep down the number of hidden assumptions.

If we use the data type uint8_t, we have not hidden the assumption that we have an 8-bit unsigned entity. If we use uint_fast16_t, we haven't hidden that the code expects at least a 16-bit unsigned integer, but is fine with a 32-bit or 64-bit integer if the target platform works more efficiently with larger data types. With int_least16_t, we know that we have a signed integer holding at least 16 bits, but that speed isn't important, so it's fine if the compiler picks the closest available size even if it takes extra instructions to convert to and from 32-bit or 64-bit integers while performing computations. Several coding standards don't even accept code based on the standard C data types, because of these hidden assumptions.
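Here is the promised sketch of the promotion trap, assuming a two's-complement machine and a compiler that performs the standard integer promotions. Note that the result of >> on a negative value is implementation-defined even in standard C, so the 0xC0 shown is the typical arithmetic-shift outcome, not a guarantee:

    #include <stdio.h>

    int main(void)
    {
        signed char   s = (signed char)0x80;  /* -128 on a two's-complement machine */
        unsigned char u = 0x80;

        /* The standard promotes both operands to int before the shift.
           s becomes -128 (0xFF80 with a 16-bit int), and an arithmetic
           right shift drags the sign bit in, so the low 8 bits end up
           as 0xC0 rather than the 0x40 a logical shift would give. */
        printf("signed   0x80 >> 1 -> 0x%02X\n", (unsigned)(unsigned char)(s >> 1));

        /* u promotes to a non-negative int, so the result is well defined. */
        printf("unsigned 0x80 >> 1 -> 0x%02X\n", (unsigned)(unsigned char)(u >> 1));

        return 0;
    }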
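And on the char point: whether plain char is signed is implementation-defined, but limits.h lets you ask what choice the compiler made. A small probe:

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* CHAR_MIN is negative exactly when plain char is signed. */
        printf("plain char is %s here\n", (CHAR_MIN < 0) ? "signed" : "unsigned");
        printf("char is %d bits wide\n", CHAR_BIT);
        return 0;
    }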
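Finally, a sketch of the stdint.h types mentioned above. The routine and its names (sum_samples and so on) are made up for illustration, and remember that uint8_t only exists on platforms that actually have an exact 8-bit type - which the 8051 does:

    #include <stdint.h>

    /* Sum n samples. Every width assumption is visible in the types. */
    uint_fast16_t sum_samples(const uint8_t *samples, uint_fast16_t n)
    {
        uint_fast16_t sum = 0;   /* at least 16 unsigned bits, fastest width */
        uint_fast16_t i;

        for (i = 0; i < n; i++)
            sum += samples[i];   /* uint8_t: exactly 8 unsigned bits per sample */

        return sum;
    }

    /* int_least16_t suits a value where at least 16 signed bits matter
       more than speed, e.g. a temperature reading. */
    int_least16_t last_temperature;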