Thanks to the internet, here's a nice practical example.
A signed integer is one with either a plus or minus sign
in front; that is, it can be either positive or negative.
An unsigned integer is assumed to be non-negative.
This is important in computing because the numbers are
(usually) stored in a fixed number of binary digits. For
a signed integer, one bit is used to indicate the sign:
1 for negative, 0 for positive. Thus a 16-bit signed
integer has only 15 bits for the magnitude, whereas a 16-bit
unsigned integer has all 16 bits available. This means an
unsigned integer can hold a maximum value roughly twice as
high as a signed integer of the same width (but only
non-negative values). On 16-bit computers this was
significant, since it is the difference between a maximum
value of 32,767 and one of 65,535. On 32-bit computers it is
far less significant, since we get roughly 2 billion versus
4 billion. And on 64-bit computers it becomes largely of
academic interest.
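As a minimal sketch of the ranges described above (not part of the original post), the following C program prints the maximum values of the fixed-width signed and unsigned types from <stdint.h>; it assumes a standard C compiler and the exact-width types being available:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* Largest representable values at each width, from <stdint.h>.
           The signed maximum is roughly half the unsigned maximum
           because one bit is effectively spent on the sign. */
        printf("16-bit: signed max %lld, unsigned max %llu\n",
               (long long)INT16_MAX, (unsigned long long)UINT16_MAX);
        printf("32-bit: signed max %lld, unsigned max %llu\n",
               (long long)INT32_MAX, (unsigned long long)UINT32_MAX);
        printf("64-bit: signed max %lld, unsigned max %llu\n",
               (long long)INT64_MAX, (unsigned long long)UINT64_MAX);
        return 0;
    }

Compiled and run, this prints 32,767 versus 65,535 at 16 bits, about 2.1 billion versus 4.3 billion at 32 bits, and the correspondingly enormous 64-bit limits.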