There are 4 main types of symbols. A symbol is just a fancy term for anything the compiler knows by name.
We have:
- Functions
- Variables
- Definitions
- Classes
$HL,1$//This is a variable
I64 my_var;
//This is another variable of a different $UL,1$type$UL,0$.
U8 my_u8;
$HL,0$
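The example above covers variables. Here is a minimal sketch of the other three symbol kinds; the names MY_DEF, AddNums, and CMyPoint are made up for illustration:
$HL,1$//This is a definition
#define MY_DEF 10

//This is a function
I64 AddNums(I64 a, I64 b)
{
  return a + b;
}

//This is a class
class CMyPoint
{
  I64 x, y;
};
$HL,0$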
Variables are declared as certain $UL,1$types$UL,0$. There are types for numbers. You just saw two of them above: I64 and U8.
The compiler gives us 8 types for numbers. These types differ in $UL,1$size$UL,0$ and in whether they are $UL,1$signed$UL,0$ or $UL,1$unsigned$UL,0$.
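All 8 of them, declared with throwaway one-letter names and annotated with their sizes:
$HL,1$I8  a; //signed,    8 bits (1 byte)
I16 b; //signed,   16 bits (2 bytes)
I32 c; //signed,   32 bits (4 bytes)
I64 d; //signed,   64 bits (8 bytes)
U8  e; //unsigned,  8 bits (1 byte)
U16 f; //unsigned, 16 bits (2 bytes)
U32 g; //unsigned, 32 bits (4 bytes)
U64 h; //unsigned, 64 bits (8 bytes)
$HL,0$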
A $UL,1$byte$UL,0$ is 8 bits. The range of numbers you can store in an unsigned byte is 0 to 255. In a signed byte, that range is -128 to 127.
* Unsigned types cannot hold a negative value.
* Only signed types can store negative numbers.
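A quick sketch to see those limits; my_u8 and my_i8 are throwaway names:
$HL,1$U8 my_u8 = 255;  //the largest value a U8 can hold
I8 my_i8 = -128; //the smallest value an I8 can hold
Print("%d %d\n", my_u8, my_i8); //prints: 255 -128
$HL,0$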
Now you might be thinking: how does the computer know if a number is negative or not? After all, a number is just a series of bits!
The way we store a signed number is to reserve a single bit in the number to tell us whether or not it is negative.
If we have a number that is 8 bits wide, then we have to use one of those bits as the $UL,1$sign bit$UL,0$.
The sign bit is always the highest bit available.
In a signed byte, bits #0-6 will be used to store the actual value of the number.
Bit #7 will be the sign bit. If the sign bit is 1, then it is a negative number.
#  7 6 5 4 3 2 1 0
$FG,4$0b 0 0 0 0 0 0 1 1$FG$ => $FG,1$3$FG$
   |           |_|
 sign    the number "3" in binary
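You can test bit #7 yourself with a bitmask. A minimal sketch (num is a throwaway name; 0x80 is the mask with only bit #7 set):
$HL,1$I8 num = -3;
if (num & 0x80) //is the sign bit set?
  Print("negative\n");
else
  Print("not negative\n");
$HL,0$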
If a number is signed, that means it cannot use all of its bits to represent a value.
Therefore, the range of numbers it can represent is split between the negative side and the positive side. (The exact split, -128 to 127 rather than -127 to 127, comes from the two's complement encoding real CPUs use for signed numbers.)
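To see the split in action, store the same bit pattern in an unsigned byte and a signed byte; a minimal sketch:
$HL,1$U8 u = 0xFF; //all 8 bits set => 255
I8 s = 0xFF; //same bits, but bit #7 is the sign bit => -1
Print("%d %d\n", u, s); //prints: 255 -1
$HL,0$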