The basics of the CosmiC language. Requirements: Knowledge of basic integer mathematics and number sense.

There are 4 main types of symbols. A symbol is just a fancy name for anything in your program that has a name.

We have:
    - Functions
    - Variables
    - Definitions
    - Classes

//This is a variable
I64 my_var;

//This is another variable of a different type.
U8 my_u8;

Variables are declared with a type. There are types for numbers; you just saw two of them above, I64 and U8.

The compiler gives us 8 types for numbers. They differ in two ways: how many bits wide they are, and whether they are signed or unsigned.

A byte is 8 bits, which gives 2^8 = 256 distinct bit patterns. The range of numbers you can store in an unsigned byte is 0 to 255. In a signed byte that range is from -128 to 127.

* Unsigned types cannot be a negative value.
* Only signed types can store negative numbers.
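
To get a feel for this, here is a small sketch you can try (it assumes assignment and the %d format code behave as in the examples later in this section):

//The extremes of the 1-byte types.
U8 u = 255;   //Largest value a U8 can hold.
I8 s = -128;  //Smallest value an I8 can hold.
"%d %d\n", u, s; //Should print: 255 -128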

Now you might be thinking, how does it know if a number is negative or not? After all, a number is just a series of bits!

The way we store a signed number is we reserve a single bit in the number to tell us whether or not it is negative.

If we have a number that is 8 bits wide, then we have to use a single bit as a sign bit.
The sign bit is always the highest bit available.

In a signed byte, bits #0-6 will be used to store the actual value of the number.

Bit #7 will be the sign bit. If the sign bit is 1, then it is a negative number.

#   7   6   5   4   3   2   1   0

0b  0   0   0   0   0   0   1   1   =>  3
    |                       |   |
  sign   the number "3" in binary                   


If a number is signed, it cannot use all of its bits to represent a magnitude.
Therefore, the range of numbers it can represent is split between the negative side and the positive side. For a byte, the 256 patterns split into 128 negative values (-128 to -1) and 128 non-negative values (0 to 127).

                                      Unsigned byte range

                                |----------------------------|
                       Signed byte range
                  |--------------------------|
-255            -128            0           127             255
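
You can decode a signed byte by hand: take the low 7 bits as they are, then subtract 128 if the sign bit is set. Here is a minimal sketch (the helper name SignedByteVal is mine, for illustration only):

I64 SignedByteVal(U8 raw)
{
  I64 val = raw & 0x7F; //Bits #0-6 hold the magnitude part.
  if (raw & 0x80)       //Bit #7 is the sign bit.
    val -= 128;         //Sign bit set: count up from -128.
  return val;
}

Feed it 0x83 and you get -125, a value you will meet again in the experiment below.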


Now, you can imagine how this plays out with numbers that are bigger than 1 byte.

For 2-byte numbers (16 bits, so 2^16 = 65536 patterns), the unsigned range becomes 0 to 65535, and the signed range becomes -32768 to 32767.
The sign bit is bit #15.

The number types:

    U8  -- 1 byte (8 bits) unsigned number
    I8  -- 1 byte (8 bits - 1 sign bit = 7 bits) signed number

    U16 -- 2 byte (16 bits) unsigned number
    I16 -- 2 byte (16 bits - 1 sign bit = 15 bits) signed number

    U32 -- 4 byte (32 bits) unsigned number
    I32 -- 4 byte (32 bits - 1 sign bit = 31 bits) signed number

    U64 -- 8 byte (64 bits) unsigned number
    I64 -- 8 byte (64 bits - 1 sign bit = 63 bits) signed number
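
You can check these sizes yourself. A quick sketch, assuming CosmiC has a C-style sizeof that reports a type's size in bytes:

"%d %d %d %d\n", sizeof(U8), sizeof(U16), sizeof(U32), sizeof(U64);
//Should print: 1 2 4 8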


Try it out on the command line:

Declare an I8 variable named 'x', assign(=) it the value 3, and press ENTER.

C:/Home>I8 x = 3;

Now type Bts(&x, 7); and press ENTER. This will set bit #7 to 1. Bts is "Bit test and set". Don't forget the &.

Now if you want to see the value of x, you can simply type x; and press ENTER.

C:/Home>x;
0.000007s ans=0xFFFFFFFFFFFFFF83=-125

Ignore the extra F's for now; they are just the sign bit copied upward to fill out all 64 bits. You are interested in this hexadecimal '83' value.

So what happened? When you flipped the sign bit, the number 3 became -125. Instead of counting up from 0, it started counting up from -128: -128 + 3 = -125.

You can type "%8tb\n", x; to print out the number in binary, if you wish.
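
The same experiment works at any width; only the position of the sign bit changes. For a 2-byte number the sign bit is bit #15, so the same steps would look like this:

C:/Home>I16 y = 3;
C:/Home>Bts(&y, 15);
C:/Home>y;

This time the value should come out as -32765, because counting starts from -32768: -32768 + 3 = -32765.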

For most purposes in programming, an I64 number will work fine.

You are currently reading this on a 64-bit operating system, running on a 64-bit computer.
We hear about the greatness of 64-bit machines all the time, but what exactly does that mean?

The CPU is designed to work with 64-bit numbers, natively.
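
Among other things, that means an I64 is the machine's natural word size, so numbers far beyond the 32-bit range fit with no extra work. A quick sketch:

I64 big = 5000000000; //Too big for any 32-bit type, no problem for 64 bits.
"%d\n", big * 2;      //Should print 10000000000.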