# Zero

Zero (0) is the number signifying the absence of a thing we count. Among the integers it precedes 1 and follows -1.

• It is even.
• It is neither positive nor negative; it lies exactly on the boundary between positive and negative numbers. However, in some computer number encodings (such as one's complement or IEEE 754 floating point) there exist two representations of zero, so we may hear about a positive and a negative zero, even though mathematically there is no such thing.
• It is a whole number, a natural number (at least under the convention that counts 0 among the naturals), a rational number, a real number and a complex number.
• It is NOT a prime number.
• It is the additive identity, i.e. adding 0 to anything has no effect; likewise subtracting 0 from anything has no effect.
• Multiplying anything by 0 gives 0.
• Its representation in all traditional numeral systems is the same: 0.
• 0^x (zero to the power of x) is always 0 for x greater than 0 (for negative x it is undefined, as it would mean dividing by zero).
• x^0 (x to the power of 0), for x not equal to 0, is always 1.
• 0^0 (0 to the power of 0) is generally left undefined! However, in some contexts (e.g. combinatorics or power series) it is convenient to define it as equal to 1.
• In programming we start counting from 0 (unlike in real life where we start with 1), so we may encounter the term zeroth item. We count from 0 because we normally express offsets from the first item, i.e. 0 means "0 places after the first item".
• It is, along with 1, one of the symbols used in binary logic and is normally interpreted as the "off"/"false"/"low" value.
• Its opposite is most often said to be infinity, though this depends on the point of view and on the kind of infinity we talk about. Other numbers may be seen as its opposite as well (e.g. 1 in the context of probability).
• As it is one of the most commonly used numbers in programming, computers sometimes deal with it in special ways; for example assembly languages often have special instructions for comparing against 0 (e.g. `BNEZ`, branch if not equal to zero), which can save memory and also be faster. So as a programmer you may be able to optimize your program by preferring comparisons against zero where possible.
• In C and many other languages 0 represents the false value; a function returning 0 often signifies an error during its execution. However, 0 also sometimes means success, e.g. as the return value of the main function. 0 is also often used to signify infinity, no limit, or the lack of a value (e.g. the NULL pointer normally points to address 0 and means "pointing nowhere").
• Historically the concept of the number zero seems to have appeared at least as early as 3000 BC and is thought to signify advanced abstract thinking; however, it was at first used only as a positional symbol for writing numbers and only later took on the meaning of a number signifying "nothing".

Dividing by zero is not defined; it is a forbidden operation mainly because it breaks equations (allowing division by zero would also let us make basically any equation hold, even those that normally don't). In programming, dividing by zero typically causes an error, a program crash or an exception; in some programming languages floating point division by zero results in infinity or NaN. When operating with limits, we can handle division by zero in a special way: find out what value an expression approaches as the divisor gets infinitely close to 0.