Algorithm (from the name of the Persian mathematician Muhammad ibn Musa al-Khwarizmi) is an exact step-by-step description of how to solve some type of problem. Algorithms are basically what programming is all about: we tell computers, in very exact ways (with programming languages), how to solve problems -- we write algorithms. But algorithms don't have to be just computer programs; they are simply instructions for solving problems.
Cooking recipes are commonly given as an example of a non-computer algorithm, though they rarely contain branching and loops, the key features of algorithms. The so-called wall follower is a simple algorithm to get out of any maze: you just pick either the left-hand or the right-hand wall and then keep following it. You may write a crazy algorithm for basically any kind of problem, e.g. for how to clean a room or how to pick up a girl, but it has to be precise so that anyone can execute the algorithm just by blindly following the steps; if there is any ambiguity, it is not considered an algorithm.
Interesting fact: contrary to intuition there are problems that are mathematically proven to be unsolvable by any algorithm (see undecidability), but for most practically encountered problems we can write an algorithm (though for some problems even our best algorithms can be unusably slow).
Algorithms are mostly (though possibly not always, depending on the exact definition of the term) written as a series of steps (or instructions); these steps may be specific actions (such as adding two numbers or drawing a pixel to the screen) or conditional jumps to other steps ("if condition X holds then jump to step N, otherwise continue"). At the lowest level (machine code, assembly) computers can do just that: execute instructions (expressed as 1s and 0s) and perform conditional jumps. These jumps can be used to create branches (in programming known as if-then-else) and loops. Branches and loops are together known as control structures -- they don't express a direct action but control which steps in the algorithm will follow. All in all, any algorithm can be written with only these three constructs:

- sequence: perform step A, then step B, then step C, ...
- selection (branching): if a condition holds, perform step A, otherwise perform step B (or nothing)
- iteration (loop): repeat some steps while a condition holds
Note: in a wider sense algorithms may be expressed in other ways than sequences of steps (non-imperative ways, see declarative languages); even mathematical equations are often called algorithms because they imply the steps towards solving a problem. But we'll stick to the common meaning of algorithm given above.
Additional constructs can be introduced to make programming more comfortable, e.g. subroutines/functions (kind of small subprograms that the main program uses for solving the problem), macros (shorthand commands that represent multiple commands) or switch statements (selection but with more than two branches). Loops are also commonly divided into several types, such as counted loops, loops with the condition at the beginning, loops with the condition at the end and infinite loops (for, while, do while and while (1) in C, respectively) -- in theory there can be only one generic type of loop, but for convenience programming languages normally offer different "templates" for commonly used loops. Similarly to mathematical equations, algorithms make use of variables, i.e. values which can change and which have a specific name (such as x or myVariable).
Practical programming is based on expressing algorithms via text, but visual programming is also possible: flowcharts are a way of visually expressing algorithms, you have probably seen some. Decision trees are special cases of algorithms that have no loops, you have probably seen some too. Even though some languages (mostly educational ones such as Snap) are visual and similar to flowcharts, it is not practical to create big algorithms this way -- serious programs are written as text in programming languages.
Let's write a simple algorithm that counts the number of divisors of a given number x and checks if the number is prime along the way. (Note that we'll do it in a naive, educational way -- it can be done better.) Let's start by writing the steps in plain English:

1. Read a number x from the input.
2. Set the divisor count to 0.
3. Set the currently checked number to 1.
4. While the currently checked number is lower than or equal to x:
  a. If the currently checked number divides x, increase the divisor count by 1.
  b. Increase the currently checked number by 1.
5. Print the divisor count.
6. If the divisor count is equal to 2, print "number is prime".
Notice that x, the divisor count and the currently checked number are variables. Step 4 is a loop (iteration) and steps 4.a and 6 are branches (selection). The flowchart of this algorithm is:
      START
        |
        V
      read x
        |
        V
set divisor count to 0
        |
        V
set checked number to 1
        |
 .----->|
 |      |
 |      V                    no
 |    checked number <= x ? --------------.
 |      |                                 |
 |      | yes                             |
 |      V                                 |
 |    checked number     no               |
 |    divides x ? --------.               |
 |      |                 |               |
 |      | yes             |               |
 |      V                 |               |
 |    increase divisor    |               |
 |    count by 1          |               |
 |      |                 |               |
 |      |<----------------'               |
 |      |                                 |
 |      V                                 V
 |    increase checked         print divisor count
 |    number by 1                         |
 |      |                                 |
 '------'                                 |
                                          V       no
                              divisor count = 2 ? -----.
                                          |            |
                                          | yes        |
                                          V            |
                            print "number is prime"    |
                                          |            |
                                          |<-----------'
                                          V
                                         END
This algorithm would be written in Python as:
x = int(input("enter a number: "))

divisors = 0

for i in range(1,x + 1):
  if x % i == 0: # i divides x?
    divisors = divisors + 1

print("divisors: " + str(divisors))

if divisors == 2:
  print("It is a prime!")
in C as:
#include <stdio.h>

int main(void)
{
  int x, divisors = 0;

  scanf("%d",&x); // read a number

  for (int i = 1; i <= x; ++i)
    if (x % i == 0) // i divides x?
      divisors = divisors + 1;

  printf("number of divisors: %d\n",divisors);

  if (divisors == 2)
    puts("It is a prime!");

  return 0;
}
and in comun as (for simplicity only works for numbers up to 9):
<- "0" -        # read X and convert to number
0               # divisor count
1               # checked number

@@
  $0 $3 > ?     # checked num. > x ?
    !@
  .

  $2 $1 % 0 = ? # checked num. divides x ?
    $1 ++ $:2   # increase divisor count
  .

  ++            # increase checked number
.

0 "divisors: " -->  # write divisor count
$1 "0" + -> 10 ->

$1 2 = ?
  0 "It is a prime" --> 10 ->
.
This algorithm is however not very efficient and could be optimized -- for example there is no need to check divisors greater than the square root of x (every divisor d greater than the square root pairs up with the divisor x / d that lies below the square root), so we could lower the number of loop iterations and make the algorithm finish faster.
Algorithms are the essence of computer science; there's a lot of theory and knowledge about them.
Turing machine, a kind of mathematical bare-minimum computer created by Alan Turing, is the traditional formal tool for studying algorithms, though many other models of computation exist. From theoretical computer science we know not all problems are computable, i.e. there are problems unsolvable by any algorithm (e.g. the halting problem). Computational complexity is a theoretical study of resource consumption by algorithms, i.e. how fast and memory efficient algorithms are (see e.g. P vs NP). Mathematical programming is concerned, among other things, with optimizing algorithms so that their time and/or space complexity is as low as possible, which gives rise to algorithm design methods such as dynamic programming (practical optimization is a more pragmatic approach to making algorithms more efficient). Formal verification is a field that tries to mathematically (and sometimes automatically) prove the correctness of algorithms (this is needed for critical software, e.g. in planes or medicine). Genetic programming and some other methods of artificial intelligence try to automatically create algorithms (algorithms that create algorithms). Quantum computing is concerned with creating new kinds of algorithms for quantum computers (a new type of still-in-research computers). Programming language design is the art and science of creating languages that express computer algorithms well.
Following are some common algorithms classified into groups.
All content available under CC0 1.0 (public domain). Send comments and corrections to drummyfish at disroot dot org.