by drummyfish, generated on 03/31/23, available under CC0 1.0 (public domain)


21st Century

21st century, known as the Age Of Shit, is already one of the worst centuries in history despite having been around only a short time.


3D Modeling

The topic of 3D modeling will be part of the article about 3D models.


3D Model


3D Modeling: Learning It And Doing It Right


Do you want to start 3D modeling? Or do you already know a bit about it and just want some advice to get better? Then let us share a few words of advice here.

Nowadays as a FOSS user you will most likely do 3D modeling with Blender -- we recommend it for starting to learn 3D modeling as it is powerful, free, gratis, has many tutorials etc. Do NOT use anything proprietary no matter what anyone tells you! Once you know a bit about the art, you may play around with alternative programs or approaches (such as writing programs that generate 3D models etc.). However as a beginner just start with Blender, which is the software we'll suppose you're using from now on in this article.

Start extremely simple and learn bottom-up, i.e. learn about fundamentals and low level concepts and start with very simple models (e.g. simple untextured low-poly shape of a house, box with a roof), keep creating more complex models by small steps. Do NOT fall into the trap of "quick and easy magic 3D modeling" such as sculpting or some "smart apps" without knowing what's going on at the low level, you'll end up creating extremely ugly, inefficient models in bad formats, like someone wanting to create space rockets without learning anything about math or physics first. Remember to practice, practice, practice -- eventually you learn by doing, so try to make small projects and share your results on sites such as opengameart to get feedback and some mental satisfaction and reward for your effort. The following is an outline of possible steps you may take towards becoming an alright 3D artist:

  1. Learn what a 3D model actually is, basic technical details about how a computer represents it and roughly how 3D rendering works. It is EXTREMELY important to have at least some idea about the fundamentals, i.e. you should learn at least the following:
  2. Manually create a few extremely simple low-poly untextured models, e.g. that of a simple house, laptop, hammer, bottle etc. Keep the vertex and triangle count very low (under 100), make the model by MANUALLY creating every vertex and triangle and focus only on learning this low level geometry manipulation well (how to create a vertex, how to split an edge, how to rotate a triangle, ...), making the model conform to good practice and get familiar with tools you're using, i.e. learn the key binds, locking movement direction to principal axes, learn manipulating your 3D view, setting up the free/side/front/top view with reference images etc. Make the model nice! I.e. make it have correctly facing triangles (turn backface culling on to check this), avoid intersecting triangles, unnecessary triangles and vertices, remove all duplicate vertices (don't have multiple vertices with the same position), connect all that should be connected, avoid badly shaped triangles (e.g. extremely acute/long ones) etc. Also learn about normals and make them nice! I.e. try automatic normal generation (fiddle e.g. with angle thresholds for sharp/smooth edges), see how they affect the model look, try manually marking some edges sharp, try out smoothing groups etc. Save your final models in OBJ format (one of the simplest and most common formats supporting all you need at this stage). All this will be a lot to learn, that's why you must not try to create a complex model at this stage. You can keep yourself "motivated" e.g. by aiming for creating a low-poly model collection you can share at opengameart or somewhere :)
  3. Learn texturing -- just take the models you have and try to put a simple texture on them by drawing a simple image, then unwrapping the UV coordinates and MANUALLY editing the UV map to fit on the model. Again the goal is to get familiar with the tools and concepts now; experiment with helpers such as unwrapping by "projecting from 3D view", using "smart" UV unwrap etc. Make the UV map nice! Just as model geometry, UV maps also have good practice -- e.g. you should utilize as many texture pixels as possible (otherwise you're wasting space in the image), watch out for color bleeding, the mapping should have kind of "uniform pixel density" (or possibly increased density on triangles where more detail is supposed to be), some pixels of the texture may be mapped to multiple triangles if possible (to efficiently utilize them) etc. Only make a simple diffuse texture (don't do PBR, material textures etc., that's too advanced now). Try out texture painting and manual texture creation in a 2D image program, get familiar with both.
  4. Learn modifiers and advanced tools. Modifiers help you e.g. with the creation of symmetric models: you only model one side and the other one gets mirrored. Subdivide modifier will automatically create a higher poly version of your model (but you need to help it by telling it which sides are sharp etc.). Boolean operations allow you to apply set operations like unification or subtraction of shapes (but usually create a messy geometry you have to repair!). There are many tools, experiment and learn about their pros and cons, try to incorporate them to your modeling.
  5. Learn retopology and possibly sculpting. Topology is an extremely important concept -- it says what the structure of triangles/polygons is, how they are distributed, how they are connected, which curves their edges follow etc. Good topology has certain rules (e.g. ideally only being composed of quads, being denser where the shape has more detail and sparser where it's flat, having edges so that animation won't deform the model badly etc.). Topology is important for efficiency (you utilize your polygon budget well), texturing and especially animation (nice deformation of the model). Creating more complex models is almost always done in the following two steps:
  6. Learn about materials and shaders. At this point you may learn about how to create custom shaders, how to create transparent materials, apply multiple textures, how to make realistic skin, PBR shaders etc. You should at least be aware of basic shading concepts and commonly encountered techniques such as Phong shading, subsurface scattering, screen space effects etc. because you'll encounter them in shader editors and you should e.g. know what performance penalties to expect.
  7. Learn animation. First learn about keyframes and interpolation and try to animate basic transformations of a model, e.g. animate a car driving through a city by keyframing its position and rotation. Then learn about animating the model's geometry -- first the simple, old way of morphing between different shapes (shape keys in Blender). Finally learn the hardest type of animation: skeletal animation. Learn about bones, armatures, rigging, inverse kinematics etc.
  8. Now you can go crazy and learn all the uber features such as hair, physics simulation, NURBS surfaces, boob physics etc.
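To see what you're aiming for in the early steps, here is a complete tiny model in the OBJ format mentioned above -- a hand-made square pyramid (an illustrative example, not taken from any existing collection). OBJ is plain text: lines starting with v define vertices, lines starting with f define triangles by indexing the vertices (counting from 1):

```
# square pyramid: 5 vertices, 6 triangles, wound counter-clockwise
# as seen from the outside (so backface culling works correctly)
v -1 0 -1
v  1 0 -1
v  1 0  1
v -1 0  1
v  0 1  0
f 2 1 5
f 3 2 5
f 4 3 5
f 1 4 5
f 1 2 3
f 1 3 4
```

You can save this as e.g. pyramid.obj and import it into Blender to inspect it.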

Don't forget to stick to LRS principles! This is important so that your models are friendly to good technology. I.e. even if "modern" desktops don't really care about polygon count anymore, still take the effort to optimize your model so as to not use more polygons than necessary! Your models may potentially be used on small, non-consumerist computers with software renderers and low amount of RAM. Low-poly is better than high-poly (you can still prepare your model for automatic subdivision so that obtaining a higher poly model from it automatically is possible). Don't use complex stuff such as PBR or skeletal animation unless necessary -- you should mostly be able to get away with a simple diffuse texture and simple keyframe morphing animation, just like in old games! If you do use complex stuff, make it optional (e.g. make a normal map but don't rely on it being used in the end).

Good luck with your modeling!


3D Rendering

In computer graphics 3D rendering is concerned with computing images that represent a projected view of 3D objects through a virtual camera.

There are many methods and algorithms for doing so differing in many aspects such as computation complexity, implementation complexity, realism of the result, representation of the 3D data, limitations of viewing and so on. If you are just interested in the realtime 3D rendering used in gaymes nowadays, you are probably interested in GPU-accelerated 3D rasterization with APIs such as OpenGL and Vulkan.

LRS has a 3D rendering library called small3dlib.


A table of some common 3D rendering methods follows, including the most simple, most advanced and some unconventional ones. Note that here we talk about methods and techniques rather than algorithms, i.e. general approaches that are often modified and combined into a specific rendering algorithm. For example the traditional triangle rasterization is sometimes combined with raytracing to add e.g. realistic reflections. The methods may also be further enriched with features such as texturing, antialiasing and so on. The table below should help you choose the base 3D rendering method for your specific program.

The methods may be tagged with the following:

  - IO (image order): image pixels are iterated over and for each one we determine its color
  - OO (object order): 3D objects are iterated over and drawn onto the screen
  - 2.5D: primitive 3D, limited in some aspect (e.g. no looking up/down)
  - off: offline method, typically too slow for realtime rendering

method                               notes
3D raycasting                        IO off, shoots rays from camera
2D raycasting                        IO 2.5D, e.g. Wolf3D
beamtracing                          IO off
billboarding                         OO
BSP rendering                        2.5D, e.g. Doom
conetracing                          IO off
"dungeon crawler"                    OO 2.5D, e.g. Eye of the Beholder
ellipsoid rasterization              OO, e.g. Ecstatica
flat-shaded 1 point perspective      OO 2.5D, e.g. Skyroads
reverse raytracing (photon tracing)  OO off, inefficient
image based rendering                generating inbetween views
mode 7                               IO 2.5D, e.g. F-Zero
parallax scrolling                   2.5D, very primitive
pathtracing                          IO off, Monte Carlo, high realism
portal rendering                     2.5D, e.g. Duke3D
prerendered view angles              2.5D, e.g. Iridion II (GBA)
raymarching                          IO off, e.g. with SDFs
raytracing                           IO off, recursive 3D raycasting
segmented road                       OO 2.5D, e.g. Outrun
shear warp rendering                 IO, volumetric
splatting                            OO, rendering with 2D blobs
triangle rasterization               OO, traditional in GPUs
voxel space rendering                OO 2.5D, e.g. Comanche
wireframe rendering                  OO, just lines
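To illustrate one entry of the table: raymarching with SDFs (signed distance functions) can be sketched in a few lines. The following Python sketch (the function names and the scene are made up for illustration) marches a ray forward by the distance the SDF reports, so called sphere tracing, until it hits a unit sphere placed 5 units in front of the camera:

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 5.0), radius=1.0):
    # signed distance from point p to the sphere's surface
    return math.dist(p, center) - radius

def raymarch(origin, direction, sdf, max_steps=100, eps=0.001, max_dist=100.0):
    # step along the ray by the SDF value (sphere tracing);
    # returns the distance to the surface, or None on a miss
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t
        t += d
        if t > max_dist:
            break
    return None

# a ray from the origin straight along +z hits the sphere at distance ~4
hit = raymarch((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sphere_sdf)
```

A full renderer would shoot one such ray per pixel and then shade the hit point.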

TODO: Rescue On Fractalus!

TODO: find out how build engine/slab6 voxel rendering worked and possibly add it here (from http://advsys.net/ken/voxlap.htm seems to be based on raycasting)

TODO: VoxelQuest has some innovative voxel rendering, check it out (https://www.voxelquest.com/news/how-does-voxel-quest-work-now-august-2015-update)

Mainstream Realtime 3D

You may have come here just to learn about the typical realtime 3D rendering used in today's games because aside from research and niche areas this kind of 3D is what we normally deal with in practice. This is what this section is about.

Nowadays this kind of 3D stands for a GPU accelerated 3D rasterization done with rendering APIs such as OpenGL, Vulkan, Direct3D or Metal (the last two being proprietary and therefore shit) and higher level engines above them, e.g. Godot, OpenSceneGraph etc. The methods seem to be evolving to some kind of rasterization/pathtracing hybrid, but rasterization is still the basis.

This mainstream rendering uses an object order approach (it blits 3D objects onto the screen rather than determining each pixel's color separately) and works on the principle of triangle rasterization, i.e. 3D models are composed of triangles (or higher polygons which are however eventually broken down into triangles) and these triangles are projected onto the screen according to the position of the virtual camera and laws of perspective. Projecting the triangles means finding the 2D screen coordinates of each of the triangle's three vertices -- once we have these coordinates, we draw (rasterize) the triangle to the screen just as a "normal" 2D triangle (well, with some asterisks).

Furthermore things such as z-buffering (for determining correct overlap of triangles) and double buffering are used, which makes this approach very memory (RAM/VRAM) expensive -- of course mainstream computers have more than enough memory but smaller computers (e.g. embedded) may suffer and be unable to handle this kind of rendering. Thankfully it is possible to adapt and imitate this kind of rendering even on "small" computers -- even those that don't have a GPU, i.e. with pure software rendering. For this we e.g. replace z-buffering with painter's algorithm (triangle sorting), drop features like perspective correction, MIP mapping etc. (of course quality of the output will go down).
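The painter's algorithm mentioned above is really just sorting: draw triangles back to front so that nearer ones overwrite farther ones. A minimal Python sketch (hypothetical data; depth taken as the average z of a triangle's vertices, camera looking along positive z):

```python
def painters_sort(triangles):
    # sort triangles back to front by average depth (z) of their vertices;
    # assumes the camera looks along +z, so larger z means farther away
    def avg_depth(tri):
        return sum(v[2] for v in tri) / len(tri)
    return sorted(triangles, key=avg_depth, reverse=True)

tris = [
    [(0, 0, 1), (1, 0, 1), (0, 1, 1)],   # near
    [(0, 0, 9), (1, 0, 9), (0, 1, 9)],   # far
    [(0, 0, 5), (1, 0, 5), (0, 1, 5)],   # middle
]

ordered = painters_sort(tris)  # far first, then middle, then near
```

Note that this simple sorting can fail for intersecting or cyclically overlapping triangles, which a z-buffer handles correctly.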

Additionally there's a lot of bloat added in, such as complex screen space shaders, pathtracing (popularly known as raytracing), megatexturing, shadow rendering, postprocessing, compute shaders etc. This may make it difficult to get into "modern" 3D rendering. Remember to keep it simple.

On PCs the whole rendering process is hardware-accelerated with a GPU (graphics card). GPU is a special hardware capable of performing many operations in parallel (as opposed to a CPU which mostly computes sequentially with low level of parallelism) -- this is great for graphics because we can for example perform mapping and drawing of many triangles at once, greatly increasing the speed of rendering (FPS). However this hugely increases the complexity of the whole rendering system, we have to have a special API and drivers for communication with the GPU and we have to upload data (3D models, textures, ...) to the GPU before we want to render them. Debugging gets a lot more difficult.

GPUs nowadays are kind of general devices that can be used for more than just 3D rendering (e.g. crypto mining) and can no longer perform 3D rendering by themselves -- for this they have to be programmed. I.e. if we want to use a GPU for rendering, not only do we need a GPU but also some extra code. This code is provided by "systems" such as OpenGL or Vulkan which consist of an API (an interface we use from a programming language) and the underlying implementation in the form of a driver (e.g. Mesa3D). Any such rendering system has its own architecture and details of how it works, so we have to study it a bit if we want to use it.

The important part of a system such as OpenGL is its rendering pipeline. The pipeline is the "path" the data takes through the rendering process. Each rendering system and potentially even each of its versions may have a slightly different pipeline (but generally all mainstream pipelines somehow achieve rasterizing triangles, the difference is in details of how they achieve it). The pipeline consists of stages that follow one after another (e.g. the mentioned mapping of vertices and drawing of triangles constitute separate stages). A very important fact is that some (not all) of these stages are programmable with so called shaders. A shader is a program written in a special language (e.g. GLSL for OpenGL) running on the GPU that processes the data in some stage of the pipeline (therefore we distinguish different types of shaders based on which part of the pipeline they reside in). In early GPUs stages were not programmable but they became so to give greater flexibility -- shaders allow us to implement all kinds of effects that would otherwise be impossible.

Let's see what a typical pipeline might look like, similarly to something we might see e.g. in OpenGL. We normally simulate such a pipeline also in software renderers. Note that the details such as the coordinate system handedness and presence, order, naming or programmability of different stages will differ in any particular pipeline, this is just one possible scenario:

  1. Vertex data (e.g. 3D model space coordinates of triangle vertices of a 3D model) are taken from a vertex buffer (a GPU memory to which the data have been uploaded).
  2. Stage: vertex shader: Each vertex is processed with a vertex shader, i.e. one vertex goes into the shader and one vertex (processed) goes out. Here the shader typically maps the vertex 3D coordinates to the screen 2D coordinates (or normalized device coordinates) by:
  3. Possible optional stages that follow are tessellation and geometry processing (tessellation shaders and geometry shader). These offer the possibility of advanced vertex processing (e.g. generation of extra vertices which vertex shaders are unable to do).
  4. Stage: vertex post processing: Usually not programmable (no shaders here). Here the GPU does things such as clipping (handling vertices outside the screen space), primitive assembly and perspective divide (transforming from homogeneous coordinates to traditional cartesian coordinates).
  5. Stage: rasterization: Usually not programmable, the GPU here turns triangles into actual pixels (or fragments), possibly applying backface culling, perspective correction and things like stencil test and depth test (even though if fragment shaders are allowed to modify depth, this may be postponed to later).
  6. Stage: pixel/fragment processing: Each pixel (fragment) produced by rasterization is processed here by a pixel/fragment shader. The shader is passed the pixel/fragment along with its coordinates, depth and possibly other attributes, and outputs a processed pixel/fragment with a specific color. Typically here we perform shading and texturing (pixel/fragment shaders can access texture data which are again stored in texture buffers on the GPU).
  7. Now the pixels are written to the output buffer which will be shown on screen. This can potentially be preceded by other operations such as depth tests, as mentioned above.
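As a simplified numeric illustration of the middle of such a pipeline (made-up values, focal length 1, no model/view transform), here is what happens to one vertex between camera space and the screen:

```python
def perspective_divide(v):
    # camera space -> normalized device coordinates (NDC), focal length 1:
    # things twice as far appear half as big, hence the division by z
    x, y, z = v
    return (x / z, y / z)

def to_screen(ndc, width=640, height=480):
    # map NDC in [-1,1] to pixel coordinates (screen y grows downwards)
    x, y = ndc
    return (int((x + 1) / 2 * (width - 1)),
            int((1 - y) / 2 * (height - 1)))

v = (2.0, 1.0, 4.0)           # a vertex 2 right, 1 up, 4 in front of camera
ndc = perspective_divide(v)   # -> (0.5, 0.25)
pixel = to_screen(ndc)        # -> (479, 179) on a 640x480 screen
```

A real pipeline does this with homogeneous coordinates and matrices, but the perspective divide at its heart is exactly this division by z.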

TODO: example of specific data going through the pipeline




42 is an even integer with prime factorization of 2 * 3 * 7. This number was made kind of famous (and later overused in pop culture to the point of completely destroying the joke) by Douglas Adams' book The Hitchhiker's Guide to the Galaxy in which it appears as the answer to the ultimate question of life, the Universe and everything (the point of the joke was that this number was the ultimate answer computed by a giant supercomputer over millions of years, but it was ultimately useless as no one knew the question to which this number was the answer).

If you make a 42 reference in front of a TBBT fan, he will shit himself.



4chan (https://4chan.org/) is the most famous image board. Like most image boards, 4chan has a nice, oldschool minimalist look, even though it contains shitty captchas for posting and the site's code is proprietary. The site tolerates a great amount of free speech up to the point of being regularly labeled "right-wing extremist site" (although bans for stupid reasons such as harmless pedo jokes are very common, speaking from experience). Being a "rightist paradise" it is commonly seen as a rival to reddit, aka the pseudoleftist paradise -- both forums hate each other to death. The discussion style is pretty nice, there are many nice stories and memes (e.g. the famous greentext stories) coming from 4chan but it can also be a hugely depressing place just due to the sheer number of retards with incorrect opinions.

The site consists of multiple boards, each with given discussion topic and rules. The most (in)famous board is random AKA /b/ which is just a shitton of meme shitposting, porn, toxicity, fun, trolling and retardedness.

For us the most important part of 4chan is the technology board known as /g/ (for technoloGEE). Browsing /g/ can bring all kinds of emotion, it's a place of relative freedom and somewhat beautiful chaos where all people from absolute retards to geniuses argue about important and unimportant things, brands, tech news and memes, and constantly advise each other to kill themselves. Sometimes the place is pretty toxic and not good for mental health, actually it is more of a rule than an exception.

As of 2022 /g/ became unreadable, ABANDON SHIP. The board became flooded with capitalists, cryptofascists, proprietary shills, productivity freaks and other uber retards, it's really not worth reading anymore. You can still read good old threads on archives such as https://desuarchive.org/g/page/280004/.


Aaron Swartz

"I think all censorship should be deplored." --Aaron Swartz




An acronym is an abbreviation of a multiple word term, usually formed by joining the starting letters of each word into a new word.

Here is a list of some acronyms:


Artificial Intelligence

Artificial intelligence (AI) is an area of computer science whose effort lies in making computers simulate thinking of humans and possibly other biologically living beings. This may include making computers play games such as chess, compose music, paint pictures, understand and process audio, images and text on a high level of abstraction (e.g. translation between natural languages), make predictions about complex systems such as stock market or weather or even exhibit a general human-like behavior. Even though today's focus in AI is on machine learning and especially neural networks, there are many other usable approaches and models such as "hand crafted" state tree searching algorithms that can simulate and even outperform the behavior of humans in certain specialized areas.

There's a concern, still a matter of discussion, about the dangers of developing a powerful AI, as that could possibly lead to a technological singularity in which a super intelligent AI might take control over the whole world without humans being able to seize the control back. Even though this is still likely far in the future and many people say the danger is not real, the question seems to be about when rather than if.

By about 2020, "AI" has become a capitalist buzzword. They try to put machine learning into everything just for that AI label -- and of course, for a bloat monopoly.

By 2023 neural network AI has become extremely advanced in processing visual, textual and audio information and is rapidly marching on. Networks such as stable diffusion are now able to generate images (or modify existing ones) with results mostly indistinguishable from real photos just from a short plain language textual description. Text to video AI is emerging and already giving nice results. AI is able to write computer programs from plain language text description. Chatbots, especially the proprietary chatGPT, are scarily human-like and can already carry on conversation mostly indistinguishable from real human conversation while showing extraordinary knowledge and intelligence -- the chatbot can for example correctly reason about advanced mathematical concepts on a level far above that of an average human. AI has become mainstream and is everywhere, normies are downloading "AI apps" on their phones that do funny stuff with their images while spying on them. In games such as chess or even strategy video games neural AI has already been for years far surpassing the best of humans by miles.



Algorithm (from the name of Persian mathematician Muhammad ibn Musa al-Khwarizmi) is an exact step-by-step description of how to solve some type of a problem. Algorithms are basically what programming is all about: we tell computers, in very exact ways (with programming languages), how to solve problems -- we write algorithms. But algorithms don't have to be just computer programs, they are simply instructions for solving problems.

Cooking recipes are commonly given as an example of a non-computer algorithm, though they rarely contain branching and loops, the key features of an algorithm. The so called wall-follower is a simple algorithm to get out of any maze: you just pick either a left-hand or right-hand wall and then keep following it. You may write a crazy algorithm basically for any kind of problem, e.g. for how to clean a room or how to pick up a girl, but it has to be precise so that anyone can execute the algorithm just by blindly following the steps; if there is any ambiguity, it is not considered an algorithm.
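The wall-follower can easily be turned into an exact algorithm. A small Python sketch (the maze and its representation are made up for illustration): the maze is a grid of characters where # is a wall, S the start and E the exit; at every step we try to turn right first, then go straight, then left, then back:

```python
def wall_follower(maze, start, start_dir=0):
    # right-hand rule: keep your right hand on the wall and walk; gets you
    # out of any maze whose walls are all connected to the outer wall
    dirs = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # N, E, S, W
    (r, c), d = start, start_dir
    path = [(r, c)]
    for _ in range(10000):                     # safety bound
        if maze[r][c] == 'E':                  # exit reached
            return path
        for turn in (1, 0, 3, 2):              # try right, straight, left, back
            nd = (d + turn) % 4
            nr, nc = r + dirs[nd][0], c + dirs[nd][1]
            if maze[nr][nc] != '#':
                d, r, c = nd, nr, nc
                path.append((r, c))
                break
    return None

maze = ["#####",
        "#  E#",
        "# ###",
        "#S  #",
        "#####"]

path = wall_follower(maze, (3, 1))  # walks from S to E, hand on the wall
```

Note that it faithfully walks into the dead end and back out again -- blindly following the steps is exactly what makes this an algorithm.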

Interesting fact: contrary to intuition there are problems that are mathematically proven to be unsolvable by any algorithm, see undecidability, but for most practically encountered problems we can write an algorithm (though for some problems even our best algorithms can be unusably slow).

Algorithms are mostly (possibly not always, depending on exact definition of the term) written as a series of steps (or instructions); these steps may be specific actions (such as adding two numbers or drawing a pixel to the screen) or conditional jumps to other steps ("if condition X holds then jump to step N, otherwise continue"). At the lowest level (machine code, assembly) computers can do just that: execute instructions (expressed as 1s and 0s) and perform conditional jumps. These jumps can be used to create branches (in programming known as if-then-else) and loops. Branches and loops are together known as control structures -- they don't express a direct action but control which steps in the algorithm will follow. All in all, any algorithm can be written with only these three constructs:

  - sequence: perform step A, then step B, then step C, ...
  - selection (branching): if some condition holds, perform step A, otherwise perform step B
  - iteration (loop): repeat some step(s) while some condition holds

Note: in a wider sense algorithms may be expressed in other ways than sequences of steps (non-imperative ways, see declarative languages), even mathematical equations are often called algorithms because they imply the steps towards solving a problem. But we'll stick to the common meaning of algorithm given above.

Additional constructs can be introduced to make programming more comfortable, e.g. subroutines/functions (kind of small subprograms that the main program uses for solving the problem), macros (shorthand commands that represent multiple commands) or switch statements (selection but with more than two branches). Loops are also commonly divided into several types such as: counted loops, loops with condition at the beginning, loops with condition at the end and infinite loops (for, while, do while and while (1) in C, respectively) -- in theory there can only be one generic type of loop but for convenience programming languages normally offer different "templates" for commonly used loops. Similarly to mathematical equations, algorithms make use of variables, i.e. values which can change and which have a specific name (such as x or myVariable).
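For example, here are the common loop "templates" writing the same loop (collecting the numbers 0 to 4) in Python (which, unlike C, has no do while, so the condition-at-the-end loop is emulated with break):

```python
counted = []
for i in range(5):        # counted loop
    counted.append(i)

at_beginning = []
i = 0
while i < 5:              # loop with condition at the beginning
    at_beginning.append(i)
    i = i + 1

at_end = []
i = 0
while True:               # loop with condition at the end ("do while",
    at_end.append(i)      # emulated as Python has no such construct)
    i = i + 1
    if not i < 5:
        break
```

All three produce the same result; they differ only in when the loop condition is checked.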

Practical programming is based on expressing algorithms via text, but visual programming is also possible: flowcharts are a way of visually expressing algorithms, you have probably seen some. Decision trees are special cases of algorithms that have no loops, you have probably seen some too. Even though some languages (mostly educational such as Snap) are visual and similar to flow charts, it is not practical to create big algorithms in this way -- serious programs are written as a text in programming languages.


Let's write a simple algorithm that counts the number of divisors of given number x and checks if the number is prime along the way. (Note that we'll do it in a naive, educational way -- it can be done better). Let's start by writing the steps in plain English:

  1. Read the number x from the input.
  2. Set the divisor counter to 0.
  3. Set currently checked number to 1.
  4. While currently checked number is lower than or equal to x, repeat:
    - If currently checked number divides x, increase the divisor counter by 1.
    - Increase currently checked number by 1.
  5. Write out the divisor counter.
  6. If divisor counter is equal to 2, write out that the number is a prime.

Notice that x, divisor counter and currently checked number are variables. Step 4 is a loop (iteration) and the divisibility check inside the loop together with step 6 are branches (selection). The flowchart of this algorithm is:

               read x
       set divisor count to 0
       set checked number to 1
    |            |
    |            V                no
    |    checked number <= x ? ------.
    |            |                   |
    |            | yes               |
    |            V                   |
    |     checked number    no       |
    |       divides x ? -------.     |
    |            |             |     |
    |            | yes         |     |
    |            V             |     |
    |     increase divisor     |     |
    |       count by 1         |     |
    |            |             |     |
    |            |             |     |
    |            |<------------'     |
    |            |                   |
    |            V                   |
    |     increase checked           V
    |       number by 1     print divisor count
    |            |                   |
    '------------'                   |
                                     V             no
                             divisor count = 2 ? -----.
                                     |                |
                                     | yes            |
                                     V                |
                           print "number is prime"    |
                                     |                |

This algorithm would be written in Python as:

x = int(input("enter a number: "))

divisors = 0

for i in range(1,x + 1):
  if x % i == 0: # i divides x?
    divisors = divisors + 1

print("divisors: " + str(divisors))
if divisors == 2:
  print("It is a prime!")

and in C as:

#include <stdio.h>

int main(void)
{
  int x, divisors = 0;
  scanf("%d",&x); // read a number

  for (int i = 1; i <= x; ++i)
    if (x % i == 0) // i divides x?
      divisors = divisors + 1;

  printf("number of divisors: %d\n",divisors);
  if (divisors == 2)
    puts("It is a prime!");

  return 0;
}
Study of Algorithms

Algorithms are the essence of computer science, there's a lot of theory and knowledge about them.

Turing machine, a kind of mathematical bare-minimum computer, created by Alan Turing, is the traditional formal tool for studying algorithms, though many other models of computation exist. From theoretical computer science we know not all problems are computable, i.e. there are problems unsolvable by any algorithm (e.g. the halting problem). Computational complexity is a theoretical study of resource consumption by algorithms, i.e. how fast and memory efficient algorithms are (see e.g. P vs NP). Mathematical programming is concerned, besides others, with optimizing algorithms so that their time and/or space complexity is as low as possible which gives rise to algorithm design methods such as dynamic programming (practical optimization is a more pragmatic approach to making algorithms more efficient). Formal verification is a field that tries to mathematically (and sometimes automatically) prove correctness of algorithms (this is needed for critical software, e.g. in planes or medicine). Genetic programming and some other methods of artificial intelligence try to automatically create algorithms (algorithms that create algorithms). Quantum computing is concerned with creating new kinds of algorithms for quantum computers (a new type of still-in-research computers). Programming language design is the art and science of creating languages that express computer algorithms well.

Specific Algorithms

Following are some common algorithms classified into groups.

See Also



Aliasing is a certain mostly undesirable phenomenon that distorts signals (such as sounds or images) when they are sampled discretely (captured at periodic intervals) -- this can happen e.g. when capturing sound with digital recorders or when rendering computer graphics. There exist antialiasing methods for suppressing or even eliminating aliasing. Aliasing can be often seen on small checkerboard patterns as a moiré pattern (spatial aliasing), or maybe more famously on rotating wheels or helicopter rotor blades that in a video look like standing still or rotating the other way (temporal aliasing, caused by capturing images at intervals given by the camera's FPS).

A simple example showing how sampling at discrete points can quite dramatically alter the recorded result:

' ' ' '.|- - .  |           .--.           ''''|   |
   . '  |      '|.        .'    '.             |   |
.|- - -O+ - -O- -  '     |  O  O  |        .---+---'
||     \|_ _ /     |     |  \__/  |        |   |
|  ' .  | _ _ _._'        '.    .'         |   |____
       ' ' ' '              ''''
  original image      taking every 2nd  taking every 1st

The following diagram shows the principle of aliasing with a mathematical function:

^       original                     sampling period                       
|   |               |               |<------------->|
|   |             _ |           _   |         _     |
| .'|'.         .' '|         .' '. |       .' '.   |
|   |   \     /     | \     /       \     /       \ |
|   |    '._.'      |  '._.'        |'._.'         '|_.'
|   |               |               |               |
|   :               :               :               :
V   :               :               :               :
    :               :               :               :
^   :               :               :               :
|   :               :               :               :
|---o---...____     :               :               :
|   |          '''''o...____        :               :
|___|_______________|______ ''''----o_______________:___
|                                     '''----___    |        
|                                               ''''o---
|     reconstructed                                              

The top signal is a sine function of a certain frequency. We are sampling this signal at periodic intervals indicated by the vertical lines (this is how e.g. digital sound recorders record sounds from the real world). Below we see that the samples we've taken make it seem as if the original signal was a sine wave of a much lower frequency. It is in fact impossible to tell from the recorded samples what the original signal looked like.

Let's note that signals can also be two and more dimensional, e.g. images can be viewed as 2D signals. These are of course affected by aliasing as well.

The explanation above shows why a helicopter's rotating blades seem to stand still in a video whose FPS is synchronized with the rotation -- at any moment when the camera captures a frame (i.e. takes a sample), the blades are in the same position as before, hence they appear not to be moving in the video.

Of course this doesn't only happen with perfect sine waves. Fourier transform shows that any signal can be represented as a sum of different sine waves, so aliasing can appear anywhere.

Nyquist–Shannon sampling theorem says that aliasing can NOT appear if we sample with at least twice as high frequency as that of the highest frequency in the sampled signal. This means that we can eliminate aliasing by using a low pass filter before sampling which will eliminate any frequencies higher than the half of our sampling frequency. This is why audio is normally sampled with the rate of 44100 Hz -- from such samples it is possible to correctly reconstruct frequencies up to about 22000 Hz which is about the upper limit of human hearing.

Aliasing is also a common problem in computer graphics. For example when rendering textured 3D models, aliasing can appear in the texture if that texture is rendered at a smaller size than its resolution (when the texture is enlarged by rendering, aliasing can't appear because enlargement decreases the frequency of the sampled signal and the sampling theorem won't allow it to happen). (Actually if we don't address aliasing somehow, having lower resolution textures can unironically have beneficial effects on the quality of graphics.) This happens because texture samples are normally taken at single points that are computed by the texturing algorithm. Imagine that the texture consists of high-frequency details such as small checkerboard patterns of black and white pixels; it may happen that when the texture is rendered at lower resolution, the texturing algorithm chooses to render only the black pixels. Then when the model moves a little bit it may happen the algorithm will only choose the white pixels to render. This will result in the model blinking and alternating between being completely black and completely white (while it should rather be rendered as gray).

The same thing may happen in ray tracing if we shoot a single sampling ray for each screen pixel. Note that interpolation/filtering of textures won't fix texture aliasing. What can reduce texture aliasing is e.g. mipmapping: mipmaps store the texture along with its lower resolution versions -- during rendering a lower resolution version of the texture is chosen if the texture is rendered at a smaller size, so that the sampling theorem is satisfied. However this is still not a silver bullet because the texture may e.g. be shrunk in one direction but enlarged in the other (this is addressed by anisotropic filtering). However even if we sufficiently suppress aliasing in textures, aliasing can still appear in geometry. This can be reduced by multisampling, e.g. sending multiple rays for each pixel and then averaging their results -- by this we increase our sampling frequency and lower the probability of aliasing.

Why doesn't aliasing happen in our eyes and ears? Because our senses don't sample the world discretely, i.e. in single points -- our senses integrate. E.g. a rod or a cone in our eyes doesn't just see exactly one point in the world but rather an averaged light over a small area (which is ideally right next to another small area seen by another cell, so there is no information to "hide" in between them), and it also doesn't sample the world at specific moments like cameras do, its excitation by light falls off gradually which averages the light over time, preventing temporal aliasing.

So all in all, how do we prevent aliasing? As said above, we always try to satisfy the sampling theorem, i.e. make our sampling frequency at least twice as high as the highest frequency in the signal we're sampling, or at least get close to this situation and lower the probability of aliasing. This can be done either by increasing the sampling frequency (which can be done smartly; some methods try to detect where sampling should be denser), or by preprocessing the input signal with a low pass filter or otherwise ensuring there won't be too high frequencies.


Anal Bead

For most people anal beads are just sex toys they stick in their butts, however anal beads with remotely controlled vibration can also serve as a well hidden one-way communication device. Use of an anal bead for cheating in chess was the topic of a great cheating scandal in 2022 (Niemann vs Carlsen).



Analog is the opposite of digital.


Analytic Geometry

Analytic geometry is the part of mathematics that solves geometric problems with algebra; for example instead of finding an intersection of a line and a circle with ruler and compass, analytic geometry finds the intersection by solving an equation. In other words, instead of using pen and paper we use numbers. This is very important in computing as computers of course just work with numbers and aren't normally capable of drawing literal pictures and drawing results from them -- that would be laughable (or awesome?). Analytic geometry finds use especially in such fields as physics simulations (collision detections) and computer graphics, in methods such as raytracing where we need to compute intersections of rays with various mathematically defined shapes in order to render 3D images. Of course the methods are used in other fields, for example rocket science and many other physics areas. Analytic geometry reflects the fact that geometric and algebraic problems are often analogous, i.e. it is also the case that many times problems we encounter in arithmetic can be seen as geometric problems and vice versa (i.e. solving an equation is the same as e.g. finding an intersection of some N-dimensional shapes).

Fun fact: approaches in the opposite direction also exist, i.e. solving mathematical problems physically rather than by computation. For example back in the day when there weren't any computers to compute very difficult integrals and computing them by hand would be immensely hard, people literally cut physical function plots out of paper and weighed them in order to find the integral. Awesome oldschool hacking.

Anyway, how does it work? Typically we work in a 2D or 3D Euclidean space with Cartesian coordinates (but of course we can generalize to more dimensions etc.). Here, geometric shapes can be described with equations (or inequalities); for example a zero-centered circle in 2D with radius r has the equation x^2 + y^2 = r^2 (Pythagorean theorem). This means that the circle is a set of all points [x,y] such that when substituted into the equation, the equation holds. Other shapes such as lines, planes, ellipses and parabolas have similar equations. Now if we want to find intersections/unions/etc., we just solve systems of multiple equations/inequalities and find solutions (coordinates) that satisfy all equations/inequalities at once. This allows us to do basically anything we could do with pen and paper such as defining helper shapes and so on. Using these tools we can compute things such as angles, distances, areas, collision points and much more.

Analytic geometry is closely related to linear algebra.


Nub example:

Find the intersection of two lines in 2D: one is a horizontal line with y position 2, the other is a 45 degree line going through the [0,0] point in the positive x and positive y direction, like this:

  :        _/ line 2
  :      _/
_2:_____/_______ line 1
  :  _/

The equation of line 1 is just y = 2 (it consists of all points [x,2] where for x we can plug in any number to get a valid point on the line).

The equation of line 2 is x = y (all points that have the same x and y coordinate lie on this line).

We find the intersection by finding such point [x,y] that satisfies both equations. We can do this by plugging the first equation, y = 2, to the second equation, x = y, to get the x coordinate of the intersection: x = 2. By plugging this x coordinate to any of the two line equations we also get the y coordinate: 2. I.e. the intersection lies at coordinates [2,2].

Advanced nub example:

Let's say we want to find, in 2D, where a line L intersects a circle C. L goes through points A = [-3,0.5] and B = [3,2]. C has center at [0,0] and radius r = 2.

The equation for the circle C is x^2 + y^2 = 2^2, i.e. x^2 + y^2 = 4. This is derived from Pythagorean theorem, you can either check that or, if lazy, just trust this. Equations for common shapes can be looked up.

One possible form of an equation of a 2D line is a "slope + offset" equation: y = k * x + q, where k is the tangent (slope) of the line and q is an offset. To find the specific equation for our line L we need to first find the numbers k and q. This is done as follows.

The tangent (slope) k is (B.y - A.y) / (B.x - A.x). This follows from the definition of the tangent; look it up if you don't understand it. So for us k = (2 - 0.5) / (3 - -3) = 0.25.

The number q (offset) is computed by simply substituting some point that lies on the line to the equation and solving for q. We can substitute either A or B, it doesn't matter. Let's go with A: A.y = k * A.x + q, with specific numbers this is 0.5 = 0.25 * -3 + q from which we derive that q = 1.25.

Now we have computed both k and q, so we now have equations for both of our shapes:

circle C: x^2 + y^2 = 4
line L: y = 0.25 * x + 1.25

Feel free to check the equations, substitute a few points and plot them to see they really represent the shapes (e.g. if you substitute a specific x into the line equation you will get the specific y for it).

Now to find the intersections we have to solve the above system of equations, i.e. find such couples (coordinates) [x,y] that will satisfy both equations at once. One way to do this is to substitute the line equation into the circle equation. By this we get:

x^2 + (0.25 * x + 1.25)^2 = 4

This is a quadratic equation, let's get it into the standard format so that we can solve it:

x^2 + 0.0625 * x^2 + 0.625 * x + 1.5625 = 4

1.0625 * x^2 + 0.625 * x - 2.4375 = 0

Note that this makes perfect sense: a quadratic equation can have either one, two or no solution (in the realm of real numbers), just as there can either be one, two or no intersection of a line and a circle.

Solving a quadratic equation is simple so we skip the details. Here we get two solutions: x1 = 1.24881 and x2 = -1.83704. These are the x positions of our intersections. We can further find the y coordinates by simply substituting these into the line equation, i.e. we get the final result:

intersection 1: [1.24881, 1.5622]
intersection 2: [-1.83704, 0.79074]

See Also



Anarchism is a socialist political philosophy rejecting any social hierarchy and oppression. Anarchism doesn't mean without rules, but without rulers; despite popular misconceptions anarchism is not chaos -- on the contrary, it strives for a stable, ideal society of equal people that live in peace. It means order without power. The symbols of anarchism include the letter A in a circle and a black flag that for different branches of anarchism is diagonally split from bottom left to top right and the top part is filled with a color specific for that branch.

A great many things about anarchism are explained in the text An Anarchist FAQ, which is free licensed and can be accessed e.g. at https://theanarchistlibrary.org/library/the-anarchist-faq-editorial-collective-an-anarchist-faq-full.

Anarchism is a wide term and encompasses many flavors such as anarcho communism, anarcho pacifism, anarcho syndicalism, anarcho primitivism or anarcho mutualism. Some of the branches disagree on specific questions, e.g. about whether violence is ever justifiable, or propose different solutions to issues such as organization of society, however all branches of anarchism are socialist and all aim for elimination of social hierarchy such as social classes created by wealth, jobs and weapons, i.e. anarchism opposes state (e.g. police having power over citizens) and capitalism (employers exploiting employees, corporations exploiting consumers etc.).

There exist fake, pseudoanarchist ideologies such as "anarcho" capitalism (which includes e.g. so called crypto "anarchism") that deceive by their name despite by their very definition NOT fitting the definition of anarchism (just like Nazis called themselves socialists despite being the opposite). Also such shit as "anarcha" feminism is just fascist bullshit. The propaganda also tries to deceive the public by calling various violent criminals anarchists, even though they very often can't fit the definition of a true anarchist.

LRS is an anarchist movement, specifically anarcho pacifist and anarcho communist one.



Anarch is a LRS/suckless first person shooter game similar to Doom, written by drummyfish. It has been designed to follow the LRS principles very closely and set an example of how games, and software in general, should be written.

The repo is available at https://codeberg.org/drummyfish/Anarch or https://gitlab.com/drummyfish/anarch. Some info about the game can also be found on libregamewiki: https://libregamewiki.org/Anarch.

\=MhM@hr  `hMhhM///@@@@@@@@@@@@@@@@@@@//@@@@@@rMM@n\M=:@M\\\\Mh\\\hr\n\--h-::r:r
:Mh@M@@`  `rh@\@///@@@@@@@@@@@@@@@@@@@@@@@@@@@Mr\@@\h@:\h\h@\Mhh@@\M@@@@-n\rn@:h
-M\h=h\`   rhM\M@@@@@@@@@@@@@@@=@@@@@@@@@@@@@@MhM@\hh@M@Mhh@-\MMhrr\\\:MMh::\\-\
h@hhh\h`  `rMh\M@@@@@@@@@@@@@@nr;;;;rn@@@@@@@@r@r///=@\@\r\\hM@nrrr@\n\h\M\\\\\:
:\=hMn@@@=\hhh:M===============;/. ,,==========@r-/--@:@M\\@@@n@Mn:hM@n@-=\hr=-h
:Mrrr=rr==@rr=rrr=rrr=/=r===r==/:; ..===r\\-h==@r-,;-=r/;/;;;;;;rnrrr=rrr=rrr=r;
rrrrrrrr@=rrrrrrrrrrr//r=r=r=r=r;. ,.r=r\---hr=@r===-r=r=;;;r;;;hh@:;;;;;;;;;;-;
,;,:,; ;,;;;-;;;,;/:-rrrrrrrrrrrrrrrrr\-.,;\@rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr

screenshot from the terminal version

Anarch has these features:

Gameplay-wise Anarch offers 10 levels and multiple enemy and weapon types. It supports mouse where available.

Technical Details

Anarch's engine uses raycastlib, a LRS library for advanced 2D ray casting, which is often called "pseudo 3D". This method was used by Wolf3D, but Anarch improves it to allow different levels of floor and ceiling, which makes it look a little closer to Doom (which however used a different method called BSP rendering).

The music in the game is procedurally generated using bytebeat.

All images in the game (textures, sprites, ...) are 32x32 pixels, compressed by using a 16 color subpalette of the main 256 color palette, and are stored in source code itself as simple arrays of bytes -- this eliminates the need for using files and allows the game to run on platforms without a file system.

The game uses a tiny custom-made 4x4 bitmap font to render texts.

Saving/loading is optional, in case a platform doesn't have persistent storage. Without saving all levels are simply available from the start.

In the suckless fashion, mods are recommended to be made and distributed as patches.


"Anarcho" Capitali$m

Not to be confused with anarchism.

So called "anarcho capitalism" (ancap for short, not to be confused with anpac or any form of anarchism) is probably the worst, most retarded and most dangerous idea in the history of ever, and that is the idea of supporting capitalism absolutely unrestricted by a state or anything else. No one with at least 10 brain cells and/or anyone who has spent at least 3 seconds observing the world could come up with such a stupid, stupid idea. We, of course, completely reject this shit.

It has to be noted that "anarcho capitalism" is not real anarchism, despite its name. The great majority of anarchists strictly reject this ideology as any form of capitalism is completely incompatible with anarchism -- anarchism is defined as opposing any social hierarchy and oppression, while capitalism is almost purely based on many types of hierarchies (internal corporate hierarchies, hierarchies between companies, hierarchies of social classes of different wealth etc.) and oppression (employee by employer, consumer by corporation etc.). Why do they call it anarcho capitalism then? Well, partly because they're stupid and don't know what they're talking about (otherwise they couldn't come up with such an idea in the first place) and secondly, as any capitalists, they want to deceive and ride on the train of the anarchist brand -- this is not new, Nazis also called themselves socialists despite being the complete opposite.

The colors on their flag are black and yellow (this symbolizes shit and piss).

It is kind of another bullshit kind of "anarchism" just like "anarcha feminism" etc.

The Worst Idea In History

As if capitalism wasn't extremely bad already, "anarcho" capitalists want to get rid of the last mechanisms that are supposed to protect the people from corporations -- states. We, as anarchists ourselves, of course see states as eventually harmful, but they cannot go before we get rid of capitalism first. Why? Well, imagine all the bad things corporations would want to do but can't because there are laws preventing them -- in "anarcho" capitalism they can do them.

Firstly this means anything is allowed, any unethical, unfair business practice, including slavery, physical violence, blackmailing, rape, worst psychological torture, nuclear weapons, anything that makes you the winner in the jungle system. Except that this jungle is not like the old, self-regulating jungle in which you could only reach limited power, this jungle offers, through modern technology, potentially limitless power with instant worldwide communication and surveillance technology, with mass production, genetic engineering, AI and weapons capable of destroying the planet.

Secondly the idea of getting rid of a state in capitalism doesn't even make sense because if we get rid of the state, the strongest corporation will become the state, only with the difference that a state is at least supposed to work for the people while a corporation is by its very definition supposed to care solely about its own endless profit to the detriment of people. Therefore if we scratch the state, McDonalds or Coca Cola or Micro$oft -- whoever is the strongest -- hires a literal army and physically destroys all its competition, then starts ruling the world and making its own laws -- laws that only serve the further growth of that corporation such as that everyone is forced to work 16 hour shifts every day until he falls dead. Don't like it? They kill your whole family, no problem. 100% of civilization will experience the worst kind of suffering, maybe except for the CEO of McDonald's, the world corporation, until the planet's environment is destroyed and everyone hopefully dies, as death is what we'll wish for.

All in all, "anarcho" capitalism is advocated mostly by children who don't know a tiny bit about anything, children who are brainwashed daily in schools by capitalist propaganda, with no education besides an endless stream of ads from their smartphones and no capability of thinking on their own. However, these children are who will run the world soon. It is sad, it's not really their fault, but through them the system will probably come into existence. Sadly "anarcho" capitalism is already a real danger and a very likely future. It will likely be the beginning of our civilization's greatest agony. We don't know what to do against it other than provide education.

God be with us.


Anarcho Pacifism

Anarcho pacifism (anpac) is a form of anarchism that completely rejects any violence. Anarcho pacifists argue that since anarchism opposes hierarchy and oppression, we have to reject violence which is a tool of oppression and establishing hierarchy. This would make it the one true purest form of anarchism. Anarcho pacifists use a black and white flag.

Historically anarcho pacifists such as Leo Tolstoy were usually religiously motivated to reject violence, however this stance may also come from logic and from beliefs other than religious ones, e.g. the simple belief that violence will only spawn more violence ("an eye for an eye will only make the whole world blind"), or pure unconditional love of life.

We, LRS, advocate anarcho pacifism. We see how violence can be a short term solution, even to preventing harm to many, however from the long term perspective we only see the complete delegitimisation of violence as leading to a truly mature society. We realize a complete, 100% non violent society may never be achieved, but with enough education and work it will be possible to establish a society with an absolute minimum of violence, a society in which firstly people grow up in a completely non violent environment so that they never accept violence, and secondly have all needs secured so that they don't even have a reason for using violence. We should at least try to get as close to this ideal as possible.



Antialiasing (AA) means preventing aliasing, i.e. distortion of signal (images, audio, video, ...) caused by discrete sampling. Most people think antialiasing stands for "smooth edges in video game graphics", however that's a completely inaccurate understanding of antialiasing: yes, one of the most noticeable effects of 3D graphics antialiasing for a common human is that of having smooth edges, but smooth edges are not the primary goal, they are not the only effect and they are not even the most important effect of antialiasing. Understanding antialiasing requires understanding what aliasing is, which is not a completely trivial thing to do (it's not the most difficult thing in the world either, but most people are just afraid of mathematics, so they prefer to stick with the "antialiasing = smooth edges" simplification).

The basic sum up is the following: aliasing is a negative effect which may arise when we try to sample (capture) continuous signals potentially containing high frequencies (the kind of "infinitely complex" data we encounter in the real world, such as images or sounds) in discrete (non-continuous) ways by capturing the signal values at specific points in time (as opposed to capturing integrals of intervals), i.e. in ways native and natural to computers. Note that the aliasing effect is mathematical and is kind of a "punishment" for our "cheating" which we do by trying to simplify the capturing of very complex signals, i.e. aliasing has nothing to do with noise or recording equipment imperfections, and it may occur not only when recording real world data but also when simulating the real world, for example during 3D graphics rendering (which simulates capturing the real world with a camera).

A typical example of such an aliasing effect is a video of car wheels rotating very fast (with high frequency) captured by a relatively low FPS camera, in which the wheels then seem to be rotating very slowly and in the opposite direction -- a high frequency signal (fast rotating wheels) caused a distortion (the illusion of wheels rotating slowly in the opposite direction) due to simplified discrete sampling (recording video as a series of photographs taken at specific points in time at relatively low FPS). Similar undesirable effects may appear e.g. on high resolution textures when they're scaled down on a computer screen (the so called moiré effect), but also in sound or any other data.

Antialiasing exploits the mathematical Nyquist–Shannon sampling theorem, which says that aliasing cannot occur when the sampling frequency is high enough relative to the highest frequency in the sampled data, i.e. antialiasing tries to prevent aliasing effects typically by either preventing high frequencies from appearing in the sampled data (e.g. blurring textures, see MIP mapping) or by increasing the sampling frequency (e.g. multisampling). As a side effect of better sampling we also get things such as smoothly rendered edges etc.

Note that the word anti in antialiasing means that some methods may not prevent aliasing completely, they may just try to suppress it somehow. For example the FXAA (fast approximate antialiasing) method is a postprocessing algorithm which takes an already rendered image and tries to make it look as if it was properly rendered in ways preventing aliasing, however it cannot be 100% successful as it doesn't know the original signal; all it can do is try to give us a good enough approximation.

How to do antialiasing? There are many ways, depending on the kind of data (e.g. the number of dimensions of the signal or what frequencies you expect in it) or required quality (whether you want to prevent aliasing completely or just suppress it). As stated above, most methods make use of the Nyquist–Shannon sampling theorem which states that aliasing cannot occur if the sampling frequency is at least twice as high as the highest frequency in the sampled signal. I.e. if you can make sure your sampling frequency is high enough relatively to the highest frequency in the signal, you will completely prevent aliasing -- you can do this by either processing the input signal with a low pass filter (e.g. blurring an image) or by increasing your sampling frequency (e.g. rendering at higher resolution). Some specific antialiasing methods include:


Antivirus Paradox

{ I think this paradox must have had another established name even before antiviruses, but I wasn't able to find anything. If you know it, let me know. ~drummyfish }

Antivirus paradox is the paradox of someone whose job it is to eliminate a certain undesirable phenomenon actually having an interest in keeping this phenomenon existing so as to keep his job. A typical example is an antivirus company having an interest in the existence of dangerous viruses and malware so as to keep its business running; in fact antivirus companies themselves secretly create and release viruses and malware.

Cases of this behavior are common, e.g. the bind-torture-kill serial killer used to work as a seller of home security alarms who installed alarms for people afraid of being invaded by the bind-torture-killer, and then used his knowledge of the alarms to break into their houses -- a typical capitalist business. It is also a known phenomenon that many firefighters are passionate arsonists because society simply rewards them for fighting fires (as opposed to rewarding them for the lack of fires).

In capitalism and similar systems requiring people to have jobs this paradox prevents progress, i.e. the actual elimination of undesirable phenomena, hence capitalism and similar systems are anti-progress. And not only that, the system pressures people into artificially creating new undesirable phenomena (e.g. lack of women in tech and similar bullshit) just to create new bullshit jobs that "fight" these phenomena. In a truly good society where people are not required to have jobs and in which people aim to eliminate work this paradox largely disappears.



Apple is a terrorist organization and one of the biggest American computer fashion corporations, infamously founded by Steve Job$; it creates and sells overpriced, abusive, highly consumerist proprietary electronic devices.

Take a look e.g. at Apple's Dark Side at Techrights.



App is a retarded capitalist name for application; it is used by soydevs, corporations and normalfaggots (similarly to how "coding" is used for programming). This word is absolutely unacceptable and is only to be used to mock these retards.

Anything called an "app" is expected to be bloat, badly designed and, at best, of low quality (and, at worst, malicious).



Approximating means calculating or representing something with less than the best possible precision -- estimating -- purposefully allowing some margin of error in results and using simpler mathematical models than the most accurate ones: this is typically done in order to save resources (CPU cycles, memory etc.) and reduce complexity so that our projects and analysis stay manageable. Simulating the real world on a computer is always an approximation as we cannot capture the infinitely complex and fine nature of the real world with a machine of limited resources, but even within this we need to consider how much, in what ways and where to simplify.

Using approximations however doesn't have to imply decrease in precision of the final result -- approximations very well serve optimization. E.g. approximate metrics help in heuristic algorithms such as A*. Another use of approximations in optimization is as a quick preliminary check for the expensive precise algorithms: e.g. using bounding spheres helps speed up collision detection (if bounding spheres of two objects don't collide, we know they can't possibly collide and don't have to expensively check this).

Example of approximations:


Arch Linux

"BTW I use Arch"

Arch Linux is a rolling-release Linux distribution for the "tech-savvy", mostly fedora-wearing weirdos.

Arch is shit at least for two reasons: it has proprietary packages (such as discord) and it uses systemd. Artix Linux is a fork of Arch without systemd.



Art is an endeavor that seeks discovery and creation of beauty and primarily relies on intuition. While the most immediate examples of art that come to mind are for example music and painting, even the most scientific and rigorous effort like math and programming becomes art when pushed to the highest level, to the boundaries of current knowledge where intuition becomes important for further development.

See Also



ASCII art is the art of manually creating graphics and images only out of fixed-width ASCII characters. This means no unicode or extended ASCII characters are allowed, of course. ASCII art is also, strictly speaking, separate from mere ASCII rendering, i.e. automatically rendering a bitmap image with ASCII characters in place of pixels, and ASCII graphics that utilizes the same techniques as ASCII art but can't really be called art (e.g. computer generated diagrams). Pure ASCII art should make no use of color.

This kind of art used to be a great part of the culture of the earliest Internet communities for a number of reasons imposed largely by the limitations of old computers -- it could be created easily with a text editor and saved in pure text format, it didn't take much space to store or send over a network and it could be displayed on text-only displays and terminals. The principle itself predates computers, people were already making this kind of images with typewriters. Nevertheless the art survives even to the present day and lives on in the hacker culture, in Unix communities, on the Smol Internet etc. An ASCII diagram may very well be embedded e.g. in a comment in source code to explain some spatial concept -- that's pretty KISS. We, LRS, highly advocate use of ASCII art whenever it's good enough.

Here is a simple 16-shade ASCII palette (but watch out, whether it works will depend on your font): #OVaxsflc/!;,.- . Another one can be e.g.: WM0KXkxocl;:,'. .

           /    ';_  
    .     (  0 _/  "-._
    |\     \_ /_==-"""'
    | |:---'   (
     \ \__."    ) Steamer
      '--_ __--'    Duck!

      []  [][][][][]
      [][][]      [][]
      [][]          []
      []    XX    XX[]
      []      XXXX  []
      [][]          []
      [][][]      [][]
      []  [][][][][]
          SAF FTW
|   _.--._                  _.--._
| .'      '.              .'      '.
|            \          /            \
|             '.      .'              '.
|               `'--'`                  `'-

See Also



ASCII (American standard code for information interchange) is a relatively simple standard for digital encoding of text that's one of the most basic and probably the most common format used for this purpose. For its simplicity and inability to represent characters of less common alphabets it is nowadays quite often replaced with more complex encodings such as UTF-8, which are however almost always backwards compatible with ASCII (interpreting UTF-8 as ASCII will give somewhat workable results), and ASCII itself is also normally supported everywhere. ASCII is the suckless/LRS/KISS character encoding, recommended and good enough for most programs.

The ASCII standard assigns a 7 bit code to each basic text character which gives room for 128 characters -- these include lowercase and uppercase English alphabet, decimal digits, other symbols such as a question mark, comma or brackets, plus a few special control characters that represent instructions such as carriage return which are however often obsolete nowadays. Due to most computers working with 8 bit bytes, most platforms store ASCII text with 1 byte per character; the extra bit creates room for extending ASCII by another 128 characters (or creating a variable width encoding such as UTF-8). These extensions include unofficial ones such as VISCII (ASCII with additional Vietnamese characters) and more official ones, most notably ISO 8859: a group of standards by ISO for various languages, e.g. ISO 8859-1 for western European languages, ISO 8859-5 for Cyrillic languages etc.

The ordering of characters has been kind of cleverly designed to make working with the encoding easier, for example digits start with the bits 011 and the rest of the bits correspond to the digit itself (0000 is 0, 0001 is 1 etc.). Corresponding upper and lower case letters only differ in the 6th bit, so you can easily convert between upper and lower case by flipping that bit as letter ^ 0x20. { I think there are a few missed opportunities though, e.g. in not putting digits right before letters. That way it would be very easy to print hexadecimal (and all bases up to a lot) simply as putchar('0' + x). ~drummyfish }

ASCII was approved as an ANSI standard in 1963 and has since undergone several revisions. The current one is summed up by the following table:

dec hex oct bin symbol
000 00 000 0000000 NUL: null
001 01 001 0000001 SOH: start of heading
002 02 002 0000010 STX: start of text
003 03 003 0000011 ETX: end of text
004 04 004 0000100 EOT: end of transmission
005 05 005 0000101 ENQ: enquiry
006 06 006 0000110 ACK: acknowledge
007 07 007 0000111 BEL: bell
008 08 010 0001000 BS: backspace
009 09 011 0001001 TAB: tab (horizontal)
010 0a 012 0001010 LF: line feed (new line)
011 0b 013 0001011 VT: tab (vertical)
012 0c 014 0001100 FF: form feed (new page)
013 0d 015 0001101 CR: carriage return
014 0e 016 0001110 SO: shift out
015 0f 017 0001111 SI: shift in
016 10 020 0010000 DLE: data link escape
017 11 021 0010001 DC1: device control 1
018 12 022 0010010 DC2: device control 2
019 13 023 0010011 DC3: device control 3
020 14 024 0010100 DC4: device control 4
021 15 025 0010101 NAK: negative acknowledge
022 16 026 0010110 SYN: sync idle
023 17 027 0010111 ETB: end of block
024 18 030 0011000 CAN: cancel
025 19 031 0011001 EM: end of medium
026 1a 032 0011010 SUB: substitute
027 1b 033 0011011 ESC: escape
028 1c 034 0011100 FS: file separator
029 1d 035 0011101 GS: group separator
030 1e 036 0011110 RS: record separator
031 1f 037 0011111 US: unit separator
032 20 040 0100000 : space
033 21 041 0100001 !
034 22 042 0100010 "
035 23 043 0100011 #
036 24 044 0100100 $
037 25 045 0100101 %
038 26 046 0100110 &
039 27 047 0100111 '
040 28 050 0101000 (
041 29 051 0101001 )
042 2a 052 0101010 *
043 2b 053 0101011 +
044 2c 054 0101100 ,
045 2d 055 0101101 -
046 2e 056 0101110 .
047 2f 057 0101111 /
048 30 060 0110000 0
049 31 061 0110001 1
050 32 062 0110010 2
051 33 063 0110011 3
052 34 064 0110100 4
053 35 065 0110101 5
054 36 066 0110110 6
055 37 067 0110111 7
056 38 070 0111000 8
057 39 071 0111001 9
058 3a 072 0111010 :
059 3b 073 0111011 ;
060 3c 074 0111100 <
061 3d 075 0111101 =
062 3e 076 0111110 >
063 3f 077 0111111 ?
064 40 100 1000000 @
065 41 101 1000001 A
066 42 102 1000010 B
067 43 103 1000011 C
068 44 104 1000100 D
069 45 105 1000101 E
070 46 106 1000110 F
071 47 107 1000111 G
072 48 110 1001000 H
073 49 111 1001001 I
074 4a 112 1001010 J
075 4b 113 1001011 K
076 4c 114 1001100 L
077 4d 115 1001101 M
078 4e 116 1001110 N
079 4f 117 1001111 O
080 50 120 1010000 P
081 51 121 1010001 Q
082 52 122 1010010 R
083 53 123 1010011 S
084 54 124 1010100 T
085 55 125 1010101 U
086 56 126 1010110 V
087 57 127 1010111 W
088 58 130 1011000 X
089 59 131 1011001 Y
090 5a 132 1011010 Z
091 5b 133 1011011 [
092 5c 134 1011100 \
093 5d 135 1011101 ]
094 5e 136 1011110 ^
095 5f 137 1011111 _
096 60 140 1100000 `: backtick
097 61 141 1100001 a
098 62 142 1100010 b
099 63 143 1100011 c
100 64 144 1100100 d
101 65 145 1100101 e
102 66 146 1100110 f
103 67 147 1100111 g
104 68 150 1101000 h
105 69 151 1101001 i
106 6a 152 1101010 j
107 6b 153 1101011 k
108 6c 154 1101100 l
109 6d 155 1101101 m
110 6e 156 1101110 n
111 6f 157 1101111 o
112 70 160 1110000 p
113 71 161 1110001 q
114 72 162 1110010 r
115 73 163 1110011 s
116 74 164 1110100 t
117 75 165 1110101 u
118 76 166 1110110 v
119 77 167 1110111 w
120 78 170 1111000 x
121 79 171 1111001 y
122 7a 172 1111010 z
123 7b 173 1111011 {
124 7c 174 1111100 |
125 7d 175 1111101 }
126 7e 176 1111110 ~
127 7f 177 1111111 DEL

See Also



Assembly (also ASM) is, for any given hardware computing platform (ISA, basically a CPU architecture), the lowest level programming language that expresses a typically linear, unstructured sequence of CPU instructions -- it maps (mostly) 1:1 to machine code (the actual binary CPU instructions) and basically only differs from the actual machine code by utilizing a more human readable form (it gives human friendly nicknames, or mnemonics, to different combinations of 1s and 0s). Assembly is converted by an assembler into the machine code. Assembly is similar to bytecode, but bytecode is meant to be interpreted or used as an intermediate representation in compilers while assembly represents actual native code run by hardware. In ancient times when there were no higher level languages (like C or Fortran) assembly was used to write computer programs -- nowadays most programmers no longer write in assembly (majority of zoomer "coders" probably never even touch anything close to it) because it's hard (takes a long time) and not portable, however programs written in assembly are known to be extremely fast as the programmer has absolute control over every single instruction (of course that is not to say you can't fuck up and write a slow program in assembly).

Assembly is NOT a single language, it differs for every architecture, i.e. every model of CPU has potentially different architecture, understands a different machine code and hence has a different assembly (though there are some standardized families of assembly like x86 that work on wide range of CPUs); therefore assembly is not portable (i.e. the program won't generally work on a different type of CPU or under a different OS)! And even the same kind of assembly language may have several different syntax formats which may differ in comment style, order of writing arguments and even instruction abbreviations (e.g. x86 can be written in Intel and AT&T syntax). For the reason of non-portability (and also for the fact that "assembly is hard") you shouldn't write your programs directly in assembly but rather in a bit higher level language such as C (which can be compiled to any CPU's assembly). However you should know at least the very basics of programming in assembly as a good programmer will come in contact with it sometimes, for example during hardcore optimization (many languages offer an option to embed inline assembly in specific places), debugging, reverse engineering, when writing a C compiler for a completely new platform or even when designing one's own new platform. You should write at least one program in assembly -- it gives you a great insight into how a computer actually works and you'll get a better idea of how your high level programs translate to machine code (which may help you write better optimized code) and WHY your high level language looks the way it does.

The most common assembly languages you'll encounter nowadays are x86 (used by most desktop CPUs) and ARM (used by most mobile CPUs) -- both are used by proprietary hardware and though an assembly language itself cannot (as of yet) be copyrighted, the associated architectures may be "protected" (restricted) e.g. by patents. RISC-V on the other hand is an "open" alternative, though not yet so wide spread. Other assembly languages include e.g. AVR (8bit CPUs used e.g. by some Arduinos) and PowerPC.

To be precise, a typical assembly language is actually more than a set of nicknames for machine code instructions, it may offer helpers such as macros (something akin to the C preprocessor), pseudoinstructions (commands that look like instructions but actually translate to e.g. multiple instructions), comments, directives, named labels for jumps (as writing literal jump addresses would be extremely tedious) etc.

Assembly is extremely low level, so you get no handholding or much programming "safety" (apart from e.g. CPU operation modes), you have to do everything yourself -- you'll be dealing with things such as function call conventions, interrupts, syscalls and their conventions, how many CPU cycles different instructions take, memory segments, endianness, raw addresses/goto jumps, call frames etc.

Note that just replacing assembly mnemonics with binary machine code instructions is not yet enough to make an executable program! More things have to be done such as linking libraries and converting the result to some executable format such as elf which contains things like header with metainformation about the program etc.

Typical Assembly Language

Assembly languages are usually unstructured, i.e. there are no control structures such as if or while statements: these have to be manually implemented using labels and jump (goto) instructions. There may exist macros that mimic control structures. The typical look of an assembly program is however still a single column of instructions with arguments, one per line, each representing one machine instruction.

The working of the language reflects the actual hardware architecture -- most architectures are based on registers so usually there is a small number (something like 16) of registers which may be called something like R0 to R15, or A, B, C etc. Sometimes registers may even be subdivided (e.g. in x86 there is an eax 32bit register and half of it can be used as the ax 16bit register). These registers are the fastest available memory (faster than the main RAM memory) and are used to perform calculations. Some registers are general purpose and some are special: typically there will be e.g. the FLAGS register which holds various 1bit results of performed operations (e.g. overflow, zero result etc.). Some instructions may only work with some registers (e.g. there may be kind of a "pointer" register used to hold addresses along with instructions that work with this register, which is meant to implement arrays). Values can be moved between registers and the main memory (with instructions called something like move, load or store).

Writing instructions works similarly to how you call a function in high level language: you write its name and then its arguments, but in assembly things are more complicated because an instruction may for example only allow certain kinds of arguments -- it may e.g. allow a register and immediate constant (kind of a number literal/constant), but not e.g. two registers. You have to read the documentation for each instruction. While in high level language you may write general expressions as arguments (like myFunc(x + 2 * y,myFunc2())), here you can only pass specific values.

There are also no complex data types, assembly only works with numbers of different size, e.g. 16 bit integer, 32 bit integer etc. Strings are just sequences of numbers representing ASCII values, it is up to you whether you implement null terminated strings or Pascal style strings. Pointers are just numbers representing addresses. It is up to you whether you interpret a number as signed or unsigned (some instructions treat numbers as unsigned, some as signed).

Instructions are typically written as three-letter abbreviations and follow some unwritten naming conventions so that different assembly languages at least look similar. Common instructions found in most assembly languages are for example:

- mov: move a value between registers/memory
- add, sub: add/subtract values
- jmp: unconditional jump (goto)
- cmp: compare values and set flags
- je, jne, jg, ...: conditional jumps based on flags
- push, pop: push/pop a value on the stack
- call, ret: call/return from a subroutine

How To

On Unices the objdump utility from GNU binutils can be used to disassemble compiled programs, i.e. view the instructions of the program in assembly (other tools like ndisasm can also be used). Use it e.g. as:

objdump -d my_compiled_program

Let's now write a simple Unix program in x86 assembly (AT&T syntax). Write the following source code into a file named e.g. program.s:

.global   _start         # include the symbol in object file

str:                     # named address of the string data
.ascii    "it works\n"

_start:                  # execution starts here
  mov     $5,   %rbx     # store loop counter in rbx

.loop:
  # make a Linux "write" syscall:
                         # args to syscall will be passed in regs.
  mov     $1,   %rax     # says syscall type (1 = write)
  mov     $1,   %rdi     # says file to write to (1 = stdout)
  mov     $str, %rsi     # says the address of the string to write
  mov     $9,   %rdx     # says how many bytes to write
  syscall                # makes the syscall

  sub     $1,   %rbx     # decrement loop counter
  cmp     $0,   %rbx     # compare it to 0
  jne     .loop          # if not equal, jump to start of the loop

  # make an "exit" syscall to properly terminate:
  mov     $60,  %rax     # says syscall type (60 = exit)
  mov     $0,   %rdi     # says return value (0 = success)
  syscall                # makes the syscall

The program just writes out it works five times: it uses a simple loop and a Unix system call for writing a string to standard output (i.e. it won't work on Windows and similar shit).

Now assembly source code can be manually assembled into an executable by running assemblers like as or nasm to obtain the intermediate object file and then linking it with ld, but to assemble the above written code simply we may just use the gcc compiler which does everything for us:

gcc -nostdlib -no-pie -o program program.s

Now we can run the program with

./program

And we should see

it works
it works
it works
it works
it works

As an exercise you can objdump the final executable and see that the output basically matches the original source code. Furthermore try to disassemble some primitive C programs and see how a compiler e.g. makes if statements or functions into assembly.


Let's take the following C code:

#include <stdio.h>

char incrementDigit(char d)
{
  return // remember this is basically an if statement
    d >= '0' && d < '9' ?
    d + 1 :
    '?';
}

int main(void)
{
  char c = getchar();
  putchar(incrementDigit(c));
  return 0;
}

We will now compile it to different assembly languages (you can do this e.g. with gcc -S my_program.c). This assembly will be pretty long as it will contain boilerplate and implementations of getchar and putchar from standard library, but we'll only be looking at the assembly corresponding to the above written code. Also note that the generated assembly will probably differ between compilers, their versions, flags such as optimization level etc. The code will be manually commented.

{ I used this online tool: https://godbolt.org. ~drummyfish }

The x86 assembly may look like this:

incrementDigit:
  pushq   %rbp                   # save base pointer
  movq    %rsp, %rbp             # move base pointer to stack top
  movl    %edi, %eax             # move argument to eax
  movb    %al, -4(%rbp)          # and move it to local var.
  cmpb    $47, -4(%rbp)          # compare it to '0' - 1
  jle     .L2                    # if <=, jump to .L2
  cmpb    $56, -4(%rbp)          # else compare to '9' - 1
  jg      .L2                    # if >, jump to .L2
  movzbl  -4(%rbp), %eax         # else get the argument
  addl    $1, %eax               # add 1 to it
  jmp     .L4                    # jump to .L4
.L2:
  movl    $63, %eax              # move '?' to eax (return val.)
.L4:
  popq    %rbp                   # restore base pointer
  ret                            # return to caller

main:
  pushq   %rbp                   # save base pointer
  movq    %rsp, %rbp             # move base pointer to stack top
  subq    $16, %rsp              # make space on stack
  call    getchar                # push ret. addr. and jump to func.
  movb    %al, -1(%rbp)          # store return val. to local var.
  movsbl  -1(%rbp), %eax         # move with sign extension
  movl    %eax, %edi             # arg. will be passed in edi
  call    incrementDigit
  movsbl  %al, %eax              # sign extend return val.
  movl    %eax, %edi             # pass arg. in edi again
  call    putchar
  movl    $0, %eax               # values are returned in eax
  leave                          # restore stack and base pointer
  ret                            # return to caller

The ARM assembly may look like this:

incrementDigit:
  sub   sp, sp, #16              // make room on stack
  strb  w0, [sp, 15]             // store argument from w0 to local var.
  ldrb  w0, [sp, 15]             // load it back to w0
  cmp   w0, 47                   // compare to '0' - 1
  bls   .L2                      // branch to .L2 if <=
  ldrb  w0, [sp, 15]             // load argument again to w0
  cmp   w0, 56                   // compare to '9' - 1
  bhi   .L2                      // branch to .L2 if >
  ldrb  w0, [sp, 15]             // load argument again to w0
  add   w0, w0, 1                // add 1 to it
  and   w0, w0, 255              // mask out lowest byte
  b     .L3                      // branch to .L3
.L2:
  mov   w0, 63                   // set w0 (ret. value) to '?'
.L3:
  add   sp, sp, 16               // shift stack pointer back
  ret                            // return to caller

main:
  stp   x29, x30, [sp, -32]!     // shift stack and store x regs
  mov   x29, sp
  bl    getchar
  strb  w0, [sp, 31]             // store w0 (ret. val.) to local var.
  ldrb  w0, [sp, 31]             // load it back to w0
  bl    incrementDigit
  and   w0, w0, 255              // mask out lowest byte
  bl    putchar
  mov   w0, 0                    // set ret. val. to 0
  ldp   x29, x30, [sp], 32       // restore x regs
  ret                            // return to caller

The RISC-V assembly may look like this:

incrementDigit:
  addi    sp,sp,-32              # shift stack (make room)
  sw      s0,28(sp)              # save frame pointer
  addi    s0,sp,32               # shift frame pointer
  mv      a5,a0                  # get arg. from a0 to a5
  sb      a5,-17(s0)             # save it to local var.
  lbu     a4,-17(s0)             # get it to a4
  li      a5,47                  # load '0' - 1 to a5
  bleu    a4,a5,.L2              # branch to .L2 if a4 <= a5
  lbu     a4,-17(s0)             # load arg. again
  li      a5,56                  # load '9' - 1 to a5
  bgtu    a4,a5,.L2              # branch to .L2 if a4 > a5
  lbu     a5,-17(s0)             # load arg. again
  addi    a5,a5,1                # add 1 to it
  andi    a5,a5,0xff             # mask out the lowest byte
  j       .L3                    # jump to .L3
.L2:
  li      a5,63                  # load '?'
.L3:
  mv      a0,a5                  # move result to ret. val.
  lw      s0,28(sp)              # restore frame pointer
  addi    sp,sp,32               # pop stack
  jr      ra                     # jump to addr in ra

main:
  addi    sp,sp,-32              # shift stack (make room)
  sw      ra,28(sp)              # store ret. addr on stack
  sw      s0,24(sp)              # store stack frame pointer on stack
  addi    s0,sp,32               # shift frame pointer
  call    getchar
  mv      a5,a0                  # copy return val. to a5
  sb      a5,-17(s0)             # move a5 to local var
  lbu     a5,-17(s0)             # load it again to a5
  mv      a0,a5                  # move it to a0 (func. arg.)
  call    incrementDigit
  mv      a5,a0                  # copy return val. to a5
  mv      a0,a5                  # get it back to a0 (func. arg.)
  call    putchar
  li      a5,0                   # load 0 to a5
  mv      a0,a5                  # move it to a0 (ret. val.)
  lw      ra,28(sp)              # restore return addr.
  lw      s0,24(sp)              # restore frame pointer
  addi    sp,sp,32               # pop stack
  jr      ra                     # jump to addr in ra



Assertiveness is a euphemism for being a dick.


Arcus Tangent

Arcus tangent, written as atan or tan^-1, is the inverse function to the tangent function. For given argument x (any real number) it returns a number y (from -pi/2 to pi/2) such that tan(y) = x.

Approximation: Near 0 atan(x) can very roughly be approximated simply by x. For a large argument atan(x) can be approximated by pi/2 - 1/x (as atan's limit is pi/2). The following formula { created by me ~drummyfish } approximates atan with a ratio of two polynomials for non-negative argument with error smaller than 2%:

atan(x) ~= (x * (2.96088 + 4.9348 * x))/(3.2 + 3.88496 * x + pi * x^2)

            | y
       pi/2 +                  
            |       _..---''''''
            |   _.''
            | .'
-----------.+'-+--+--+--+--+--> x
        _.' |0 1  2  3  4  5
     _-'    |
.--''       |
      -pi/2 +

plot of atan(x)



"In this moment I am euphoric ..." --some retarded atheist

An atheist is someone who doesn't believe in god or any other similar supernatural beings.

An especially annoying kind is the reddit atheist who will DESTROY YOU WITH FACTS AND LOGIC^(TM). These atheists are 14 year old children who think they've discovered the secret of the universe and have to let the whole world know they're atheists who will destroy you with their 200 IQ logic and knowledge of all 10 cognitive biases and argument fallacies, while in fact they reside at the mount stupid and many times involuntarily appear on other subreddits such as r/iamverysmart and r/cringe. They masturbate to Richard Dawkins, love to read soyentific studiiiiiies about how race has no biological meaning and think that religion is literally Hitler. They like to pick easy targets such as flatearthers and cyberbully them on YouTube with the power of SCIENCE and their enormously large thesaurus (they will never use a word that's among the 100000 most common English words). They are so cringe you want to kill yourself, but their discussions are sometimes entertaining to read with a bowl of popcorn.

Such a specimen of atheist is one of the best quality examples of a pseudosceptic. See also this: https://www.debunkingskeptics.com/Contents.htm.

On a bit more serious note: we've all been there, most people in their teens think they're literal Einsteins and then later in life cringe back on themselves. However, some don't grow out of it and stay arrogant, ignorant fucks for their whole lives. The principal mistake of the stance they retain is they try to apply "science" (or whatever it means in their world) to EVERYTHING and reject any other approach to solving problems -- of course, science (the real one) is great, but it's just a tool, and just like you can't fix every problem with a hammer, you can't approach every problem with science. In your daily life you make a million of unscientific decisions and it would be bad to try to apply science to them; you cross the street not because you've read a peer-reviewed paper about it being the most scientifically correct thing to do, but because you feel like doing it, because you believe the drivers will stop and won't run you over. Beliefs, intuition, emotion, non-rationality and even spirituality are and have to be part of life, and it's extremely stupid to oppose these concepts just out of principle. With that said, there's nothing wrong about being a well behaved man who just doesn't feel a belief in any god in his heart, just you know, don't be an idiot.

Among the greatest minds it is hard to find true atheists, even though they typically have a personal and not easy to describe faith. Newton was a Christian. Einstein often used the word "God" instead of "nature" or "universe"; even though he said he didn't believe in the traditional personal God, he also said that the laws of physics were like books in a library which must have obviously been written by someone or something we can't comprehend. Nikola Tesla said he was "deeply religious, though not in the orthodox sense". There are also very hardcore religious people such as Larry Wall, the inventor of Perl language, who even planned to be a Christian missionary. The "true atheists" are mostly second grade "scientists" who make career out of the pose and make living by writing books about atheism rather than being scientists.

See Also



Audiophilia is a mental disorder, similar to other diseases such as distrohopping and chronic ricing, that makes one scared of low or normal quality audio. Audiophiles are scared of lossy compression and so harm society by wasting storage space. Audiophilia, similarly to e.g. the business with mechanical keyboards, is the astrology of technology, it is an arbitrarily invented bullshit business creating an artificial need that makes people wanna buy golden cables and similar shit in belief that it will make their life happier, perpetuating consumerism and capitalism.



Autoupdate is a malicious software feature that frequently remotely modifies software on the user's device without asking, sometimes silently and many times in a forced manner without the possibility to refuse this modification (typically in proprietary software). This is a manifestation of update culture. These remote software modifications are called "updates" to make the user think they are a good thing, but in fact they usually introduce more bugs, bloat, security vulnerabilities, annoyance (forced reboots etc.) and malware (even in "open source", see e.g. the many projects on GitHub that introduced intentional malware targeted at Russian users during the Russia-Ukraine war).


Avoidant Personality Disorder


In many cases avoiding the problem really is the objectively best solution.



{ Dunno if this is completely correct, I'm learning this as I'm writing it. There may be errors. ~drummyfish }

Backpropagation, or backprop, is an algorithm, based on the chain rule of derivation, used in training neural networks; it computes the partial derivative (or gradient) of the function of the network's error so that we can perform a gradient descent, i.e. update the weights towards lowering the network's error. It computes the analytical derivative (theoretically you could estimate a derivative numerically, but that's not so accurate and can be too computationally expensive). Backpropagation is one of the most common methods for training neural networks but it is NOT the only possible one -- there are many more such as evolutionary programming. It is called backpropagation because it works backwards and propagates the error from the output towards the input, due to how the chain rule works, and it's efficient by reusing already computed values.


Consider the following neural network:

     w000     w100
    \    /  \    /  \
     \  /    \  /    \
      \/w010  \/w110  \_E
      /\w001  /\w101  /
     /  \    /  \    /
    /    \  /    \  /
     w011     w111

It has an input layer (neurons x0, x1), a hidden layer (neurons y0, y1) and an output layer (neurons z0, z1). For simplicity there are no biases (biases can easily be added as input neurons that are always on). At the end there is a total error E computed from the network's output against the desired output (training data).

Let's say the total error is computed as the squared error: E = squared_error(z0) + squared_error(z1) = 1/2 * (z0 - z0_desired)^2 + 1/2 * (z1 - z1_desired)^2.

We can see each non-input neuron as a function. E.g. the neuron z0 is a function z0(x) = z0(a(z0s(x))) where:

- z0s is the weighted sum of the neuron's inputs, i.e. z0s = w100 * y0 + w110 * y1,
- a is the activation function, e.g. the logistic sigmoid 1/(1 + e^-x).

If you don't know what the fuck is going on see neural networks first.

What is our goal now? To find the partial derivative of the whole network's total error function (at the current point defined by the weights), or in other words the gradient at the current point. I.e. from the point of view of the total error (which is just a number output by this system), the network is a function of 8 variables (weights w000, w001, ...) and we want to find a derivative of this function in respect to each of these variables (that's what a partial derivative is) at the current point (i.e. with current values of the weights). This will, for each of these variables, tell us how much (at what rate and in which direction) the total error changes if we change that variable by certain amount. Why do we need to know this? So that we can do a gradient descent, i.e. this information is kind of a direction in which we want to move (change the weights and biases) towards lowering the total error (making the network compute results which are closer to the training data). So all in all the goal is to find derivatives (just numbers, slopes) with respect to w000, w001, w010, ... w111.

Could we do this without backpropagation? Yes -- we can use numerical algorithms to estimate derivatives, the simplest one would be to just try to change each weight, one by one, by some small number, let's say dw, and see how much such change changes the output error. I.e. we would sample the error function in all directions which could give us an idea of the slope in each direction. However this would be pretty slow, we would have to reevaluate the whole neural network as many times as there are weights. Backpropagation can do this much more efficiently.

Backpropagation is based on the chain rule, a rule of derivation that equates the derivative of a function composition (functions inside other functions) to a product of derivatives. This is important because by converting the derivatives to a product we will be able to reuse the individual factors and so compute very efficiently and quickly.

Let's write the derivative of f(x) with respect to x as D{f(x),x}. The chain rule says that:

D{f(g(x)),x} = D{f(g(x)),g(x)} * D{g(x),x}

Notice that this can be applied to any number of composed functions, the product chain just becomes longer.

Let's get to the computation. Backpropagation works by going "backwards" from the output towards the input. So, let's start by computing the derivative against the weight w100. It will be a specific number; let's call it 'w100. The derivative of a sum is equal to the sum of derivatives:

'w100 = D{E,w100} = D{squared_error(z0),w100} + D{squared_error(z1),w100} = D{squared_error(z0),w100} + 0

(The second part of this sum became 0 because with respect to w100 it is a constant.)

Now we can continue and utilize the chain rule:

'w100 = D{E,w100} = D{squared_error(z0),w100} = D{squared_error(z0(a(z0s))),w100} = D{squared_error(z0),z0} * D{a(z0s),z0s} * D{z0s,w100}

We'll now skip the intermediate steps, they should be easy if you can do derivatives. The final result (supposing the activation function a is the logistic sigmoid, whose derivative is a * (1 - a)) is:

'w100 = (z0 - z0_desired) * (z0 * (1 - z0)) * y0

Now we have computed the derivative against w100. In the same way we can compute 'w101, 'w110 and 'w111 (weights leading to the output layer).

Now let's compute the derivative with respect to w000, i.e. the number 'w000. We will proceed similarly but the computation will be different because the weight w000 affects both output neurons (z0 and z1). Again, we'll use the chain rule.

'w000 = D{E,w000} = D{E,y0} * D{a(y0s),y0s} * D{y0s,w000}

D{E,y0} = D{squared_error(z0),y0} + D{squared_error(z1),y0}

Let's compute the first part of the sum:

D{squared_error(z0),y0} = D{squared_error(z0),z0s} * D{z0s,y0}

D{squared_error(z0),z0s} = D{squared_error(z0),z0} * D{a(z0s),z0s}

Note that this last equation uses already computed values which we can reuse. Finally:

D{z0s,y0} = D{w100 * y0 + w110 * y1,y0} = w100

And we get:

D{squared_error(z0),y0} = D{squared_error(z0),z0} * D{a(z0s),z0s} * w100

And so on until we get all the derivatives.

Once we have them, we multiply them all by some value (the learning rate, a distance by which we move in the computed direction) and subtract them from the current weights, by which we perform the gradient descent and lower the total error.

Note that here we've only used one training sample, i.e. the error E was computed from the network's output against a single desired output. If more examples are used in a single update step, they are usually somehow averaged.



BBS

{ I am too young to remember this shit so I'm just writing what I've read on the web. ~drummyfish }

Bulletin board system (BBS) is, or rather used to be, a kind of server that hosts a community of users who connect to it via terminal and exchange messages, files, play games and otherwise interact -- BBSes were mainly popular before the invention of web, i.e. from about 1978 to mid 1990s, however some still exist today. BBSes are powered by special BBS software and the people who run them are called sysops.

Back then people connected to BBSes via dial-up modems and connecting was much more complicated than connecting to a server today: you had to literally dial the number of the BBS and you could only connect if the BBS had a free line. Early BBSes weren't normally connected through the Internet but rather through other networks like UUCP working through phone lines. I.e. a BBS would have a certain number of modems that defined how many people could connect at once. It was also expensive to make calls into other countries so BBSes were more of a local thing, people would connect to their local BBSes. Furthermore these things often ran on non-multitasking systems like DOS so allowing multiple users meant having multiple computers. The boomers who used BBSes talk about great adventure and a sense of intimacy, connecting to a BBS meant the sysop would see you connecting, he might start chatting with you etc. Nowadays the few existing BBSes use protocols such as telnet, nevertheless there are apparently about 20 known dial-up ones in North America. Some BBSes evolved into more modern communities based e.g. on public access Unix systems -- for example SDF.

A BBS was usually focused on a certain topic such as technology, fantasy roleplay, dating, warez etc., they would typically greet the users with a custom themed ANSI art welcome page upon login -- it was pretty cool.

{ There's some documentary on BBS that's supposed to give you an insight into this shit, called literally BBS: The documentary. It's about 5 hours long tho. ~drummyfish }

The first BBS was CBBS (computerized bulletin board system) created by Ward Christensen and Randy Suess in 1978 during a blizzard -- it was pretty primitive, e.g. it only allowed one user to be connected at a time. After publication of their invention, BBSes became quite popular and the number of them grew to many thousands -- later there was even a magazine solely focused on BBSes (BBS Magazine). BBSes would later group into larger networks that allowed e.g. interchange of mail. The biggest such network was FidoNet which at its peak hosted about 35000 nodes.

{ Found some list of BBSes at http://www.synchro.net/sbbslist.html. ~drummyfish }

See Also



Beauty

Beauty is the quality of being extremely appealing and pleasing. In technology, engineering, mathematics and other sciences beauty is, despite its relative vagueness and subjectivity, an important aspect of design, and in fact this "mathematical beauty" many times takes quite clearly defined forms -- for example simplicity is mostly considered beautiful. Beauty is similar to and many times synonymous with elegance.

Beauty can perhaps be seen as a heuristic, a touch of intuition that guides the expert in exploration of previously unknown fields, as we have come to learn that the greatest discoveries tend to be very beautiful (however there is also an opposite side: some people, such as Sabine Hossenfelder, criticize e.g. the pursuit of beautiful theories in modern physics as this approach seems to have led to stagnation). Indeed, beginners and noobs are mostly concerned with learning hard facts, learning standards and getting familiar with already known ways of solving known problems, they often aren't able to recognize what's beautiful and what's ugly. But as one gets more and more experienced and finds himself near the borders of current knowledge, there is suddenly no guidance but intuition, beauty, to suggest ways forward, and here one starts to get the feel for beauty. At this point the field, even if highly exact and rigorous, has become an art.

What is beautiful then? As stated, there is a lot of subjectivity, but generally the following attributes are correlated with beauty:

Examples of beautiful things include:


Bilinear Interpolation

Bilinear interpolation (also bilinear filtering) is a simple way of creating a smooth transition (interpolation) between discrete samples (values) in 2D, it is a generalization of linear interpolation to 2 dimensions. It is used in many places, popularly e.g. in 3D computer graphics for texture filtering; bilinear interpolation allows upscaling textures to higher resolutions (i.e. computing new pixels between existing pixels) while keeping their look smooth and "non-blocky" (even though blurry). On the scale of quality vs simplicity it is kind of a middle way between the simpler nearest neighbour interpolation (which creates the "blocky" look) and the more complex bicubic interpolation (which uses yet smoother curves but also requires more samples). Bilinear interpolation can further be generalized to trilinear interpolation (in computer graphics trilinear interpolation is used to also additionally interpolate between different levels of a texture's mipmap) and perhaps even bilinear extrapolation. Many frameworks/libraries/engines have bilinear filtering built-in (e.g. GL_LINEAR in OpenGL).


[image: a smooth gradient constructed by applying bilinear interpolation to the four corner values]

The principle is simple: first linearly interpolate in one direction (e.g. horizontal), then in the other (vertical). Mathematically the order in which we take the dimensions doesn't matter (but it may matter practically due to rounding errors etc.).

Example: let's say we want to compute the value x between the four following given corner values:

1 . . . . . . 5
. . . . . . . .
. . . . . . . . 
. . . . . . . .
. . . . . . . .
. . . . x . . .
. . . . . . . .
8 . . . . . . 3

Let's say we first interpolate horizontally: we'll compute one value, a, on the top (between 1 and 5) and one value, b, at the bottom (between 8 and 3). When computing a we interpolate between 1 and 5 by the horizontal position of x (4/7), so we get a = 1 + 4/7 * (5 - 1) = 23/7. Similarly b = 8 + 4/7 * (3 - 8) = 36/7. Now we interpolate between a and b vertically (by the vertical position of x, 5/7) to get the final value x = 23/7 + 5/7 * (36/7 - 23/7) = 226/49 ~= 4.6. If we first interpolate vertically and then horizontally, we'd get the same result (the value between 1 and 8 would be 6, the value between 5 and 3 would be 25/7 and the final value 226/49 again).

Here is a C code to compute all the inbetween values in the above, using fixed point (no float):

#include <stdio.h>

#define GRID_RESOLUTION 8

int interpolateLinear(int a, int b, int t)
{
  return a + (t * (b - a)) / (GRID_RESOLUTION - 1);
}

int interpolateBilinear(int topLeft, int topRight, int bottomLeft, int bottomRight,
  int x, int y)
{
#define FPP 16 // we'll use fixed point to prevent rounding errors
#if 1 // switch between the two versions, should give same results:
  // horizontal first, then vertical
  int a = interpolateLinear(topLeft * FPP,topRight * FPP,x);
  int b = interpolateLinear(bottomLeft * FPP,bottomRight * FPP,x);
  return interpolateLinear(a,b,y) / FPP;
#else
  // vertical first, then horizontal
  int a = interpolateLinear(topLeft * FPP,bottomLeft * FPP,y);
  int b = interpolateLinear(topRight * FPP,bottomRight * FPP,y);
  return interpolateLinear(a,b,x) / FPP;
#endif
}

int main(void)
{
  for (int y = 0; y < GRID_RESOLUTION; ++y)
  {
    for (int x = 0; x < GRID_RESOLUTION; ++x)
      printf("%d ",interpolateBilinear(1,5,8,3,x,y));

    putchar('\n');
  }

  return 0;
}

The program outputs:

1 1 2 2 3 3 4 5
2 2 2 3 3 3 4 4
3 3 3 3 3 3 4 4
4 4 4 4 4 4 4 4
5 4 4 4 4 4 4 3
6 5 5 4 4 4 3 3
7 6 5 5 4 4 3 3
8 7 6 5 5 4 3 3



Billboard

In 3D computer graphics billboard is a flat image placed in the scene that rotates so that it's always facing the camera. Billboards used to be greatly utilized instead of actual 3D models in old games thanks to being faster to render (and possibly also easier to create than full 3D models), but we can still encounter them even today and even outside retro games, e.g. particle systems are normally rendered with billboards (each particle is one billboard). Billboards are also commonly called sprites, even though that's not exactly accurate.

There are two main types of billboards:

Some billboards also choose their image based on the angle they're viewed from (e.g. an enemy in a game viewed from the front will use a different image than when viewed from the side, as seen e.g. in Doom). Also some billboards intentionally don't scale and keep the same size on the screen, for example health bars in some games.

In older software billboards were implemented simply as image blitting, i.e. the billboard's scaled image would literally be copied to the screen at the appropriate position (this would implement the freely rotating billboard). Nowadays when rendering 3D models is no longer really considered harmful to performance and drawing pixels directly is less convenient, billboards are more and more implemented as so called textured quads, i.e. they are really a flat square 3D model that may pass the same pipeline as other 3D models (even though in some frameworks they may actually have different vertex shaders etc.) and that's simply rotated to face the camera in each frame (in modern frameworks there are specific functions for this).

Fun fact: in the old games such as Doom the billboard images were made from photographs of actual physical models made of clay. It was easier and better looking than using the primitive 3D software that existed back then.

Implementation Details

The following are some possibly useful things for implementing billboards.

The billboard's position on the screen can be computed by projecting its center point in world coordinates with modelview and projection matrices, just as we project vertices of 3D models.

The billboard's size on the screen shall, due to perspective, be multiplied by 1 / (tan(FOV / 2) * z) where FOV is the camera's field of view and z is the billboard's distance from the camera's projection plane (which is NOT equal to the mere distance from the camera's position, that would create a fisheye lens effect -- the distance from the projection plane can be obtained from the above mentioned projection matrix). (If the camera's FOV is different in horizontal and vertical directions, then also the billboard's size will change differently in these directions.)

For billboards whose image depends on viewing angle we naturally need to compute the angle. We may do this either in 2D or 3D -- most games resort to the simpler 2D case (only considering viewing angle in a single plane parallel to the floor), in which case we may simply use the combination of dot product and cross product between the normalized billboard's direction vector and a normalized vector pointing from the billboard's position towards the camera's position (dot product gives the cosine of the angle, the sign of cross product's vertical component will give the rest of the information needed for determining the exact angle). Once we have the angle, we quantize (divide) it, i.e. drop its precision depending on how many directional images we have, and then e.g. with a switch statement pick the correct image to display. For the 3D case (possibly different images from different 3D positions) we may first transform the sprite's 3D facing vector to camera space with the appropriate matrix, just like we transform 3D models, then this transformed vector will (again after quantization) directly determine the image we should use.

When implementing the free rotating billboard as a 3D quad that's aligning with the camera projection plane, we can construct the model matrix for the rotation from the camera's normalized directional vectors: R is camera's right vector, U is its up vector and F is its forward vector. The matrix simply transforms the quad's vertices to the coordinate system with bases R, U and F, i.e. rotates the quad in the same way as the camera. When using row vectors, the matrix is the following:

R.x R.y R.z 0
U.x U.y U.z 0
F.x F.y F.z 0
0   0   0   1


Bill Gate$

William "Bill" Gates (28.10.1955 -- TODO) is a mass murderer and rapist (i.e. capitalist) who established and led the terrorist organization Micro$oft. He is one of the richest and most evil individuals in history who took over the world by force establishing the malware operating system Window$ as the common operating system, nowadays being dangerous especially by hiding behind his "charity organization" (see charitywashing) which has been widely criticized (even by such mainstream media as Wikipedia) but which nevertheless makes him look as someone doing "public good" in the eyes of the naive brainless NPC masses.

He is really dumb, only speaks one language and didn't even finish university. He also has no moral values, but that goes without saying for any rich businessman. He was owned pretty hard in chess by Magnus Carlsen on some shitty TV show.

When Bill was born, his father was just busy counting dollar bills, so he named him Bill. Bill was mentally retarded as a child and as such had to attend a private school. He never really understood programming but with a below average intelligence he had a good shot at succeeding in business. Thanks to his family connections he got to Harvard where he met Steve Ballmer -- later he dropped out of the school due to his low intelligence.

In 1975 he founded Micro$oft, a malware company named after his dick. By a sequence of extremely lucky events combined with a few dick moves by Bill the company then became successful: when around the year 1980 IBM was creating the IBM PC, they came to Bill because they needed an operating system. He lied to them that he had one and sold them a license even though at the time he didn't have any OS (lol). After that he went to a programmer named Tim Paterson and basically stole (bought for pennies) his OS named QDOS and gave it to IBM, while still keeping ownership of the OS (he only sold IBM a license to use it, not exclusive rights for it). He basically fucked everyone for money and got away with it, the American way. For this he is admired by Americans.



Binary

The word binary in general refers to having two choices; in computer science binary refers to the base 2 numeral system, i.e. a system of writing numbers with only two symbols, usually 1s and 0s. We can write any number in binary just as we can with our everyday decimal system, but binary is more convenient for computers because this system is easy to implement in electronics (a switch can be on or off, i.e. 1 or 0; systems with more digits were tried but failed miserably in reliability). The word binary is also by extension used for non-textual computer files such as native executable programs or asset files for games.

One binary digit can be used to store exactly 1 bit of information. So the number of places we have for writing a binary number (e.g. in computer memory) is called a number of bits or bit width. A bit width N allows for storing 2^N values (e.g. with 2 bits we can store 4 values: 0, 1, 2 and 3, in binary 00, 01, 10 and 11).

At the basic level binary works just like the decimal (base 10) system we're used to. While the decimal system uses powers of 10, binary uses powers of 2. Here is a table showing a few numbers in decimal and binary:

decimal binary
0 0
1 1
2 10
3 11
4 100
5 101
6 110
7 111
8 1000
... ...

Conversion to decimal: let's see an example that utilizes the facts mentioned above. Let's have a number that's written as 10135 in decimal. The first digit from the right (5) says the number of 10^(0)s (1s) in the number, the second digit (3) says the number of 10^(1)s (10s), the third digit (1) says the number of 10^(2)s (100s) etc. Similarly if we now have a number 100101 in binary, the first digit from the right (1) says the number of 2^(0)s (1s), the second digit (0) says the number of 2^(1)s (2s), the third digit (1) says the number of 2^(2)s (4s) etc. Therefore this binary number can be converted to decimal by simply computing 1 * 2^0 + 0 * 2^1 + 1 * 2^2 + 0 * 2^3 + 0 * 2^4 + 1 * 2^5 = 1 + 4 + 32 = 37.

To convert from decimal to binary we can use a simple algorithm that's again derived from the above. Let's say we have a number X we want to write in binary. We will write digits from right to left. The first (rightmost) digit is the remainder after integer division of X by 2. Then we divide the number by 2. The second digit is again the remainder after division by 2. Then we divide the number by 2 again. This continues until the number is 0. For example let's convert the number 22 to binary: first digit = 22 % 2 = 0; 22 / 2 = 11, second digit = 11 % 2 = 1; 11 / 2 = 5; third digit = 5 % 2 = 1; 5 / 2 = 2; 2 % 2 = 0; 2 / 2 = 1; 1 % 2 = 1; 1 / 2 = 0. The result is 10110.

TODO: operations in binary

In binary it is very simple and fast to divide and multiply by powers of 2 (1, 2, 4, 8, 16, ...), just as it is simple to divide and multiply by powers of 10 (1, 10, 100, 1000, ...) in decimal (we just shift the radix point, e.g. the binary number 1011 multiplied by 4 is 101100, we just added two zeros at the end). This is why as a programmer you should prefer working with powers of two (your programs can be faster if the computer can perform basic operations faster).

Binary can be very easily converted to and from hexadecimal and octal because 1 hexadecimal (octal) digit always maps to exactly 4 (3) binary digits. E.g. the hexadecimal number F0 is 11110000 in binary (1111 is always equivalent to F, 0000 is always equivalent to 0). This doesn't hold for the decimal base, hence programmers often tend to avoid base 10.

We can work with the binary representation the same way as with decimal, i.e. we can e.g. write negative numbers such as -110101 or rational numbers (or even real numbers) such as 1011.001101. However in a computer memory there are no other symbols than 1 and 0, so we can't use extra symbols such as - or . to represent such values. So if we want to represent more numbers than non-negative integers, we literally have to only use 1s and 0s and choose a specific representation/format/encoding of numbers -- there are several formats for representing e.g. signed (potentially negative) or rational (fractional) numbers, each with pros and cons. The following are the most common number representations:

As anything can be represented with numbers, binary can be used to store any kind of information such as text, images, sounds and videos. See data structures and file formats.

See Also


Bit Hack

Bit hacks or bit tricks are simple clever formulas for performing useful operations with binary numbers. Some operations, such as checking if a number is a power of two or reversing bits in a number, can be done very efficiently with these hacks, without using loops, branching and other undesirably slow operations, potentially increasing speed and/or decreasing size and/or memory usage of code -- this can help us optimize. Many of these can be found on the web and there are also books such as Hacker's Delight which document such hacks.


Basic bit manipulation techniques are common and part of general knowledge so they won't be listed under hacks, but for the sake of completeness and beginners reading this we should mention them here. Let's see the basic bit manipulation operators in C:

Specific Bit Hacks

{ Work in progress. I'm taking these from various sources such as the Hacker's Delight book or web and rewriting them a bit, always testing. Some of these are my own. ~drummyfish }

Unless noted otherwise we suppose C syntax and semantics and integer data types. Keep in mind all potential dangers, for example it may sometimes be better to write idiomatic code and let the compiler do the optimization that's best for given platform, also of course readability will worsen etc. Nevertheless as a hacker you should know about these tricks, it's useful for low level code etc.

2^N: 1 << N

absolute value of x (two's complement):

  int t = x >> (sizeof(x) * 8 - 1);
  x = (x + t) ^ t;

average x and y without overflow: (x & y) + ((x ^ y) >> 1) { TODO: works with unsigned, not sure about signed. ~drummyfish }

clear (to 0) Nth bit of x: x & ~(1 << N)

clear (to 0) rightmost 1 bit of x: x & (x - 1)

conditionally add (subtract etc.) x and y based on condition c (c is 0 or 1): x + ((0 - c) & y), this avoids branches AND ALSO multiplication by c, of course you may replace + by another operators.

count 0 bits of x: Count 1 bits and subtract from data type width.

count 1 bits of x (8 bit): We add neighboring bits in parallel, then neighboring groups of 2 bits, then neighboring groups of 4 bits.

  x = (x & 0x55) + ((x >> 1) & 0x55);
  x = (x & 0x33) + ((x >> 2) & 0x33);
  x = (x & 0x0f) + (x >> 4);

count 1 bits of x (32 bit): Analogous to 8 bit version.

  x = (x & 0x55555555) + ((x >> 1) & 0x55555555);
  x = (x & 0x33333333) + ((x >> 2) & 0x33333333);
  x = (x & 0x0f0f0f0f) + ((x >> 4) & 0x0f0f0f0f);
  x = (x & 0x00ff00ff) + ((x >> 8) & 0x00ff00ff);
  x = (x & 0x0000ffff) + (x >> 16);

count leading 0 bits in x (8 bit):

  int r = (x == 0);
  if (x <= 0x0f) { r += 4; x <<= 4; }
  if (x <= 0x3f) { r += 2; x <<= 2; }
  if (x <= 0x7f) { r += 1; }

count leading 0 bits in x (32 bit): Analogous to 8 bit version.

  int r = (x == 0);
  if (x <= 0x0000ffff) { r += 16; x <<= 16; }
  if (x <= 0x00ffffff) { r += 8; x <<= 8; }
  if (x <= 0x0fffffff) { r += 4; x <<= 4; }
  if (x <= 0x3fffffff) { r += 2; x <<= 2; }
  if (x <= 0x7fffffff) { r += 1; }

divide x by 2^N: x >> N

divide x by 3 (unsigned at least 16 bit, x < 256): ((x + 1) * 85) >> 8, we use kind of a fixed point multiplication by reciprocal (1/3), on some platforms this may be faster than using the divide instruction, but not always (also compilers often do this for you). { I checked this particular trick and it gives exact results for any x < 256, however this may generally not be the case for other constants than 3. Still even if not 100% accurate this can be used to approximate division. ~drummyfish }

divide x by 5 (unsigned at least 16 bit, x < 256): ((x + 1) * 51) >> 8, analogous to divide by 3.

get Nth bit of x: (x >> N) & 0x01

is x a power of 2?: x && ((x & (x - 1)) == 0)

is x even?: (x & 0x01) == 0

is x odd?: (x & 0x01)

isolate rightmost 0 bit of x: ~x & (x + 1)

isolate rightmost 1 bit of x: x & (~x + 1) (in two's complement equivalent to x & -x)

log base 2 of x: Count leading 0 bits, subtract from data type width - 1.

maximum of x and y: x ^ ((0 - (x < y)) & (x ^ y))

minimum of x and y: x ^ ((0 - (x > y)) & (x ^ y))

multiply x by 2^N: x << N

multiply by 7 (and other numbers close to 2^N): (x << 3) - x

next higher or equal power of 2 from x (32 bit, unsigned, x > 0; the initial decrement makes an exact power of 2 map to itself):

  x--;
  x |= x >> 1;
  x |= x >> 2;
  x |= x >> 4;
  x |= x >> 8;
  x |= x >> 16;
  x++;

parity of x (8 bit):

  x ^= x >> 1;
  x ^= x >> 2;
  x = (x ^ (x >> 4)) & 0x01;

reverse bits of x (8 bit): We switch neighboring bits, then switch neighboring groups of 2 bits, then neighboring groups of 4 bits.

  x = ((x >> 1) & 0x55) | ((x & 0x55) << 1);
  x = ((x >> 2) & 0x33) | ((x & 0x33) << 2);
  x = ((x >> 4) & 0x0f) | (x << 4);

reverse bits of x (32 bit): Analogous to the 8 bit version.

  x = ((x >> 1) & 0x55555555) | ((x & 0x55555555) << 1);
  x = ((x >> 2) & 0x33333333) | ((x & 0x33333333) << 2);
  x = ((x >> 4) & 0x0f0f0f0f) | ((x & 0x0f0f0f0f) << 4);
  x = ((x >> 8) & 0x00ff00ff) | ((x & 0x00ff00ff) << 8);
  x = ((x >> 16) & 0x0000ffff) | (x << 16);

rotate x left by N (8 bit): (x << N) | (x >> (8 - N)) (watch out, in C: N < 8, if storing in wider type also do & 0xff)

rotate x right by N (8 bit): analogous to left rotation, (x >> N) | (x << (8 - N))

set (to 1) Nth bit of x: x | (1 << N)

set (to 1) the rightmost 0 bit of x: x | (x + 1)

set or clear Nth bit of x to b: (x & ~(1 << N)) | (b << N)

sign of x (returns 1, 0 or -1): (x > 0) - (x < 0)

swap x and y (without tmp var.): x ^= y; y ^= x; x ^= y; or x -= y; y += x; x = y - x;

toggle Nth bit of x: x ^ (1 << N)

toggle x between A and B: (x ^ A) ^ B

x and y have different signs?: (x ^ y) < 0 (two's complement), or (x > 0) != (y > 0), (x <= 0) != (y <= 0) etc. (these differ in behavior regarding 0)

TODO: the ugly hacks that use conversion to/from float?

See Also



Black

Black, a color whose politically correct name is afroamerican, is a color that we see in absence of any light.



Blender

Blender is an "open-source" 3D modeling and rendering software -- one of the most powerful and "feature-rich" (read bloated) ones, even compared to proprietary competition -- used not only by the FOSS community, but also the industry (commercial games, movies etc.), which is an impressive achievement in itself, however Blender is also a capitalist software suffering from many not-so-nice features such as bloat.

After version 2.76 Blender started REQUIRING OpenGL 2.1 due to its "modern" EEVEE renderer, deprecating old machines and giving a huge fuck you to all users with incompatible hardware (for example the users of RYF software). This new version also stopped working with the free Nouveau driver, forcing the users to use NVidia's proprietary drivers. Blender of course doesn't at all care about this. { I've been forced to use the extremely low FPS software GL version of Blender after 2.8. ~drummyfish }



Bloat

Bloat is a very wide term that in the context of software and technology means overcomplication, unnecessary complexity and/or extreme growth in terms of source code size, overall complexity, number of dependencies, redundancy, unnecessary and/or useless features (e.g. feature creep) and resource usage, all of which lead to inefficient, badly designed technology with bugs (e.g. security vulnerabilities or crashes), as well as great obscurity, ugliness, loss of freedom and waste of human effort. Simply put bloat is burdening bullshit. Bloat is extremely bad and one of the greatest technological issues of today. Creating bloat is bad engineering at its worst and unfortunately it is what's absolutely taking over all technology nowadays, mostly due to capitalism causing commercialization, consumerism and incompetent people trying to take on jobs they are in no way qualified to do.

LRS, suckless and some others rather small groups are trying to address the issue and write software that is good, minimal, safe, efficient and well functioning. Nevertheless our numbers are very small and in this endeavor we are basically standing against the whole world and the most powerful tech corporations.

The issue of bloat may of course appear outside of the strict boundaries of computer technology, nowadays we may already observe e.g. science bloat -- science is becoming so overcomplicated (many times on purpose, e.g. by means of bullshit science) that 99% people can NOT understand it, they have to BELIEVE "scientific authorities", which does not at all differ from the dangerous blind religious behavior. Any time a new paper comes out, chances are that not even SCIENTISTS from the same field but with a different specialization will understand it in depth and have to simply trust its results. This combined with self-interest obsessed society gives rise to soyence and large scale brainwashing and spread of "science approved" propaganda.

Back to technology though, one of the most frequent questions you may hear a noob ask is "How can bloat limit software freedom if such software has a free license?" Bloat de-facto limits some of the four essential freedoms (to use, study, modify and share) required for a software to be free. A free license grants these freedoms legally, but if some of those freedoms are subsequently limited by other circumstances, the software becomes effectively less free. It is important to realize that complexity itself goes against freedom because a more complex system will inevitably reduce the number of people being able to execute freedoms such as modifying the software (the number of programmers being able to understand and modify a trivial program is much greater than the number of programmers being able to understand and modify a highly complex million LOC program). As the number of people being able to execute the basic freedom drops, we're approaching the scenario in which the software is de-facto controlled by a small number of people who can (e.g. due to the cost) effectively study, modify and maintain the program -- and a program that is controlled by a small group of people (e.g. a corporation) is by definition proprietary. If there is a web browser that has a free license but you, a lone programmer, can't afford to study it, modify it significantly and maintain it, and your friends aren't able to do that either, when the only one who can practically do this is the developer of the browser himself and perhaps a few other rich corporations that can pay dozens of full time programmers, then such browser cannot be considered free as it won't be shaped to benefit you, the user, but rather the developer, a corporation.

Typical Bloat

The following is a list of software usually considered a good, typical example of bloat. However keep in mind that bloat is a relative term, for example vim can be seen as a minimalist suckless editor when compared to mainstream software (IDEs), but at the same time it's pretty bloated when compared to strictly suckless programs.

Small Bloat

Besides the typical big programs that even normies admit are bloated there exists also a smaller bloat which many people don't see as such but which is nevertheless considered unnecessarily complex by some experts and/or idealists and/or hardcore minimalists, including us.

Small bloat is a subject of popular jokes such as "OMG he uses a unicode font -- BLOAT!!!". These are good jokes, it's nice to make fun out of one's own idealism. But watch out, this doesn't mean small bloat is only a joke concept at all, it plays an important role in designing good technology. When we identify something as small bloat, we don't necessarily have to completely avoid and reject that concept, we may just try to for example make it optional. In context of today's PCs using a Unicode font is not really an issue for performance, memory consumption or anything else, but we should keep in mind it may not be so on much weaker computers or for example post-collapse computers, so we should try to design systems that don't depend on Unicode.

Small bloat includes for example:

Non-Computer Bloat

The concept of bloat can be applied even outside the computing world, e.g. to non-computer technology, art, culture, law etc. Here it becomes kind of synonymous with bullshit, but using the word bloat says we're approaching the issue as computer programmers. Examples include:


Bloat Monopoly

Bloat monopoly is an exclusive control over or de-facto ownership of software or even a whole area of technology not by legal means but by means of bloat, or generally just abusing bloat in ways that lead to gaining monopolies, e.g. by establishing standards or even legal requirements (such as the EU mandatory content filters) which only the richest may conform to. Even if the software in question is FOSS (that is, its source code is public and everyone has the basic legal rights to it), it can be malicious due to bloat; for example it can still be made practically controlled exclusively by the developer because the developer is the only one with sufficient resources and/or know-how to exercise the basic rights such as making meaningful modifications of the software, which goes against the very basic principle of free software.

Example: take a look at the web and how Google is gaining control over it by getting the search engine monopoly. It is very clear the web along with web browsers has been becoming bloated to ridiculous levels -- this is not a coincidence, bloat is pushed by corporations such as Google to eliminate possible emerging competition. If practically all websites require JavaScript, CSS, HTTPS and similar nonsense, it becomes much more difficult to crawl them and create a web index, leaving the possibility to crawl the web mostly to the rich, i.e. those who have enough money, time and know-how to do this. Alongside this there is the web browser bloat -- as websites have become extremely complex, it is also extremely complex to make and maintain a web browser, which is why there are only a few of them, all controlled (despite FOSS licenses) by corporations and malicious groups, one of which is Google itself. For these reasons Google loves bloat and encourages it, e.g. simply by ranking bloated webpages better in their search results, and of course by other means (sponsoring, lobbying, advertising, ...).

Bloat monopoly is capitalism's circumvention of free licenses and taking advantage of their popularity. With bloat monopoly capitalists can stick a FOSS license to their software, get an automatic approval (openwashing) of most "open-source" fanbois as well as their free work time, while really staying in control almost to the same degree as with proprietary software.

Examples of bloat monopoly include mainstream web browsers (furryfox, chromium, ...), Android, Linux, Blender etc. This software is characterized by being difficult to even compile, let alone understand, maintain and meaningfully modify by a lone average programmer, by astronomical maintenance costs that are hard to cover by volunteers, and by an aggressive update culture.


Body Shaming

Your body sucks.



Brainfuck is an extremely simple, untyped esoteric programming language; simple by its specification (consisting only of 8 commands) but intentionally very hard to program in. It works similarly to a pure Turing machine. In a way it is kind of beautiful by its simplicity. It is very easy to write your own brainfuck interpreter.

There exist self-hosted brainfuck interpreters which is pretty fucked up.

The language is based on the 1964 language P′′ which was published in a mathematical paper; it is very similar to brainfuck except for having no I/O.

Brainfuck has seen tremendous success in the esolang community as the lowest common denominator language: just as mathematicians use Turing machines in proofs, esolang programmers use brainfuck in similar ways -- many esolangs just compile to brainfuck or use brainfuck in proofs of Turing completeness etc. This is thanks to brainfuck being an actual, implemented and working language reflecting real computers, not just a highly abstract mathematical model with many different variants. For example if one wants to encode a program as an integer number, we can simply take the binary representation of the program's brainfuck implementation.

In LRS programs brainfuck may be seriously used as a super simple scripting language.


The "vanilla" brainfuck operates as follows:

We have a linear memory of cells and a data pointer which initially points to the 0th cell. The size and count of the cells is implementation-defined, but usually a cell is 8 bits wide and there are at least 30000 cells.

A program consists of these possible commands:

- > : move the data pointer one cell to the right
- < : move the data pointer one cell to the left
- + : increment the current cell by 1
- - : decrement the current cell by 1
- . : output the current cell's value as a character
- , : read one character of input into the current cell
- [ : if the current cell is 0, jump forward after the matching ]
- ] : if the current cell is NOT 0, jump back after the matching [

This is a very simple C implementation of brainfuck:

#include <stdio.h>

#define CELLS 30000

const char program[] = ",[.-]"; // your program here

int main(void)
{
  char tape[CELLS] = {0};  // memory cells, initialized to zero
  unsigned int cell = 0;   // data pointer
  const char *i = program; // instruction pointer
  int bDir, bCount;

  while (*i != 0) // interpreter loop
  {
    switch (*i)
    {
      case '>': cell++; break;
      case '<': cell--; break;
      case '+': tape[cell]++; break;
      case '-': tape[cell]--; break;
      case '.': putchar(tape[cell]); fflush(stdout); break;
      case ',': scanf("%c",tape + cell); break;
      case '[':
      case ']': // loops: possibly jump to the matching bracket
        if ((tape[cell] == 0) == (*i == ']'))
          break; // no jump, just continue

        bDir = (*i == '[') ? 1 : -1; // search direction
        bCount = 0;                  // bracket nesting counter

        while (1) // search for the matching bracket
        {
          if (*i == '[')
            bCount += bDir;
          else if (*i == ']')
            bCount -= bDir;

          if (bCount == 0)
            break; // matching bracket found

          i += bDir;
        }

        break;

      default: break;
    }

    i++;
  }

  return 0;
}

Here are some simple programs in brainfuck.

Print HI:

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ . + .

Read two 0-9 numbers (as ASCII digits) and add them (for simplicity supposing the result is a single digit number; the 48 minus signs convert the sum of two ASCII digits back to an ASCII digit):

,>,[<+>-]<------------------------------------------------.

TODO: more




Brain Software

Brain software, also brainware, is kind of a fun idea of software that runs on the human brain as opposed to a computer. This removes the dependency on computers and highly increases freedom. Of course, this also comes with a huge drop of computational power :) However, aside from being a fun idea to explore, this kind of software and "architectures" may become interesting from the perspective of freedom and primitivism (especially when the technological collapse seems like a real danger).

Primitive tools helping the brain compute, such as pen and paper or printed out mathematical tables, may be allowed.

Example of brain software can be the game of chess. Chess masters can easily play the game without a physical chess board, only in their head, and they can play games with each other by just saying the moves out loud. They may even just play games with themselves, which makes chess a deep, entertaining game that can be 100% contained in one's brain. Such game can never be taken away from the person, it can't be altered by corporations, it can't become unplayable on new hardware etc., making it free to the greatest extent.

One may think of a pen and paper computer with its own simple instruction set that allows general purpose programming. This instruction set may be designed to be well interpretable by human and it may be accompanied by tables printed out on paper for quick lookup of operation results -- e.g. a 4 bit computer might provide a 16x16 table with precomputed multiplication results which would help the person execute the multiplication instruction within mere seconds.








Bytebeat is procedural chiptune/8bit style music generated by a short expression in a programming language; it was discovered/highlighted in 2011 by Viznut (author of the countercomplex blog) and others, and the technique, capable of producing quite impressive music with a single line of code, has since caught the attention of many programmers, especially in the demoscene. There has even been a paper written about bytebeat. Bytebeat can produce music similar (though a lot simpler) to that created e.g. with music trackers but with a lot less complexity and effort.

This is a beautiful hack for LRS/suckless programmers because it takes quite a tiny amount of code, space and effort to produce nice music, e.g. for games (done e.g. by Anarch).

8bit samples corresponding to unsigned char are typically used with bytebeat. The formulas take advantage of overflows that create rhythmical patterns with potential other operations such as multiplication, division, addition, squaring, bitwise/logical operators and conditions adding more interesting effects.

Bytebeat also looks kind of cool when rendered as an image (outputting pixels instead of musical samples).

How To

Quick experiments with bytebeat can be performed with online tools that are easy to find on the web, these usually use JavaScript.

Nevertheless, traditionally we use C for bytebeat. We simply create a loop with a time variable (i) and inside the loop body we create our bytebeat expression with the variable to compute a char that we output.

A simple "workflow" for bytebeat "development" can be set up as follows. Firstly write a C program:

#include <stdio.h>

int main(void)
{
  for (int i = 0; i < 10000; ++i)
    putchar(i / 3); // < bytebeat formula here

  return 0;
}

Now compile the program and play its output e.g. like this:

gcc program.c && ./a.out | aplay

Now we can just start experimenting and invent new music by fiddling with the formula indicated by the comment.

General tips/tricks and observations are these:


It is not exactly clear whether, how and to what extent copyright can apply to bytebeat: on one hand we have a short formula that's uncopyrightable (just like mathematical formulas), on the other hand we have music, an artistic expression. Many authors of bytebeat "release" their creations under free licenses such as CC-BY-SA, but such licenses are of course not applicable if copyright can't even arise.

We believe copyright doesn't and SHOULDN'T apply to bytebeat. Nevertheless, to be safe, it is good to stick CC0 to any released bytebeat just in case.


A super-simple example can be just the formula i itself, which gives a plain sawtooth wave (a rising buzzing tone).

The following more complex examples come from the LRS game Anarch (these are legally safe even in case copyright can apply to bytebeat as Anarch is released under CC0):

See Also



Cancer is similar to shit but is even worse because it spreads itself and infects anything else it touches (it is a subset of shit).

See Also



Capitalism is how you enslave people with their approval.

Capitali$m is the worst socioeconomic system we've yet seen in history,^source based on pure greed, culture of slavery and artificially sustained conflict between everyone in society (so called competition), abandoning all morals and putting money and profit (so called capital) above everything else including preservation of life itself. Capitalism fuels the worst in people and forces them to compete and suffer for basic resources, even in a world where abundance of resources is already achievable. Capitalism goes against progress (see e.g. antivirus paradox), good technology and freedom, it supports immense waste of resources, wars, abuse of people and animals, destruction of environment, decline of morals, deterioration of art, invention of bullshit (bullshit jobs, bullshit laws, ...), utilizing and perfecting methods of torture, brainwashing, censorship and so on. In a sense capitalism can be seen as slavery 2.0 or universal slavery, a more sophisticated form of slavery, one which denies the label by calling itself the polar opposite ("freedom") and manipulates people into approving of and voluntarily partaking in their own enslavement. However wage and consumption slavery is only a small part of the capitalist dystopia -- capitalism brings destruction to basically every part of civilization. It is also often likened to a cancer of society; one that is ever expanding, destroying everything with commercialism, materialism, waste and destruction, growing uncontrollably with the sole goal of never stopping its ever accelerating growth. Nevertheless, it's been truthfully stated that "it is now easier to imagine the end of all life than any substantial change in capitalism." Another famous quote is that "capitalism is the belief that the worst of men driven by the nastiest motives will somehow work for the benefit of everyone", which describes its principle quite well.

{ Some web bashing capitalism I just found: http://digdeeper.club/articles/capitalismcancer.xhtml, read only briefly, seems to contain some nice gems capturing the rape of people. ~drummyfish }

Capitalism is fundamentally flawed and CANNOT be fixed -- capitalists build on the idea that competition will drive society and that the market will be self sustaining, however capitalism itself works towards instating the rule of the winners who eliminate their competition; capitalism is self destabilizing, i.e. the driving force of capitalism is completely unsustainable and leads to catastrophic results, as those who get ahead in the competition gain further advantage -- as it's said: money makes money. Money therefore flows from the poor to the rich and creates a huge imbalance in which competition has to be highly forced, eventually completely arbitrarily and in very harmful ways (invention of bullshit jobs, creation of artificial needs and hugely complex state control and laws). It's as if we set up a race in which those who get ahead start to also go faster, and they become the ones who oversee and start to create the rules of the race -- expecting a sustained balance in such a race is just insanity. Society tries to "fight" this emerging imbalance with various laws and rules of market, but this effort is like trying to fight math itself -- the system is mathematically destined to be unstable; pretending we can win over the laws of nature themselves is just pure madness.

Capitalism produces the worst imaginable technology and rewards people for being cruel to each other. It points the direction of society towards a collapse and may very likely be the great filter of civilizations; in capitalism people de-facto own nothing and become wholly dependent on corporations which exploit this fact to abuse them as much as possible. This is achieved by slowly boiling the frog. No one owns anything, products become services (your car won't drive without Internet connection and permission from its manufacturer), all independency and decentralization is lost in favor of a highly fragile and interdependent economy and infrastructure of services, each one controlled by the monopoly corporation. Then only a slight break in the chain is enough to bring the whole civilization down in a spectacular domino effect.

The underlying issue of capitalism is competition -- competition is the root of all evil in any social system, and capitalism is the absolute glorification of competition, an amplification of this evil to the maximum. It is implemented by setting and supporting the very stupid idea that everyone's primary and only goal is self-benefit, i.e. maximization of capital. This is combined with the fact that the free market environment is an evolutionary system which through natural selection extremely effectively and quickly optimizes the organisms (corporations) for achieving this given goal, i.e. generating maximum profit, to the detriment of all other values such as wellbeing of people, sustainability or morality. In other words capitalism has never promised a good society, it literally only states that everyone should try to benefit oneself as much as possible, i.e. it defines the fitness function purely as the ability to seize as many resources as possible, and then selects and rewards those who best implement this function, i.e. those we would call sociopaths or "dicks", and to these is given the power in society. Yes, this is how nature works, but it must NOT be how a technologically advanced civilization with unlimited power of destruction works. In other words we simply get what we set out to achieve: entities that are best at making profit at any cost. The inevitable decline of society cannot possibly be prevented by laws; any effort to stop evolution by inventing artificial rules on the go is a battle against nature itself and is extremely naive -- the immense power of the evolutionary system that's constantly at work to find ways to bypass or cancel laws standing in the way of profit and abuse of others will prevail, just as life always finds its way to survive and thrive even in the worst conditions on Earth. Trying to stop corporations with laws is like trying to stop a train by throwing sticks in its path. The problem is not that "people are dicks", it is that we choose to put in place a system that rewards the dicks, a system that fuels the worst in people and smothers the best in them.

Even though nowadays quite a lot of time has passed since the times of Marx and capitalism has evolved to a stage with countless disastrous issues Marx couldn't even foresee, it is useful to mention one of the basic and earliest issues identified by Marx, which is that economically capitalism is based on stealing the surplus value, i.e. abuse of workers and consumers by owners of the means of production (factories, machines etc.) -- a capitalist basically takes money for doing nothing, just for letting workers use tools he proclaims to own (a capitalist will proclaim to "own" land that he never even visited, machines he didn't make, nowadays he even claims to own information and ideas). This allows a capitalist oppressor to make exponentially more money for nothing and enables the existence of monstrously rich and powerful individuals -- consider for example that nowadays there are people who own thousands of buildings along with private planes and private islands. It is not possible for any single human to perform an equivalent of the effort that's needed to produce what such an individual owns; even if he worked 24 hours a day for his whole life, he wouldn't get even close to matching the kind of effort that's needed to produce the thousands of buildings he owns -- any such great wealth is always stolen from countless workers whose salary is less than what's adequate for their work, and also from consumers who pay more than it really costs to manufacture the goods they buy. Millions of people are giving their money (resources) for free to someone who just proclaims to "own" tools and even natural resources that have been here for billions of years.

But nowadays capitalism is NOT JUST an economic system anymore. Technically perhaps, however in reality it takes over society to such a degree that it starts to redefine very basic social and moral values to the point of taking the role of a religion, or better said a brainwashing cult in which people are since childhood taught (e.g. by constant daily exposure to private media) to worship economy, brands, engage in cults of personalities (see myths about godlike entrepreneurs) and productivity (i.e. not usefulness, morality, efficiency or similar values, just the pure ability to produce something for its own sake). Close minded people will try to counter argue in shallow ways such as "but religion has to have some supernatural entity called God" etc. Again, technically speaking this may be correct, but if we don't limit our views by arbitrary definitions of words, we see that the effects of capitalism on society are de facto of the same or even greater scale than those of religion, and they are certainly more negative. Capitalism itself works towards suppressing traditional religions (showing it is really competing with them and therefore aspiring for the same role) and their values and trying to replace them with worship of money, success and self interest, it permeates society to the deepest levels by making every single area of society a subject of business and acting on the minds of all people in the society every single day which is an enormously strong pressure that strongly shapes mentality of people, again mostly negatively towards a war mentality (constant competition with others), egoism, materialism, fascism, pure pursuit of profit etc.

From a certain point of view capitalism is not really a traditional socioeconomic system, it is the failure to establish one -- capitalism is the failure to prevent the establishment of capitalism, and it is also the punishment for this failure. It is the continuation of the jungle to the age when technology for mass production, mass surveillance etc. has sufficiently advanced -- capitalism will arise with technological progress unless we prevent it, just as cancer will grow unless we treat it in very early stages. This is what people mean when they say that capitalism simply works or that it's natural -- it's the least effort option, one that simply lets people behave like animals, except that these animals are now equipped with weapons of mass destruction, tools for implementing slavery, world wide surveillance etc. It is natural in the same way in which wars, murders, bullying and deadly diseases are. It is the most primitive system imaginable, it is uncontrolled, leads to suffering and self-destruction.

Under capitalism you are not a human being, you are a resource, at best a machine that's useful for some time but becomes obsolete and undesired once it outlives its usefulness and potential to be exploited. Under capitalism you are a slave that's forced to live the 3C life: conform, consume, compete. Or, as Encyclopedia dramatica puts it: work, buy, consume, die.

Attributes Of Capitalism

The following is a list of just SOME attributes of capitalism -- note that not all of them are present in initial stages but capitalism will always converge towards them.

How It Works

Capitalism newly instated in a society kind of "works" for a short time, but it never lasts as it is extremely unstable. Before society has advanced technologically, capitalism can deteriorate slowly and seem to be working for decades or even centuries, but after sufficient technological progress the downfall accelerates immensely. Initially, when more or less everyone is at the same start line, when there are no highly evolved corporations with their advanced methods of oppression, small businesses grow and take their small shares of the market, there appears true innovation, businesses compete by true quality of products, people are relatively free and it all feels natural because it is; it's the system of the jungle, i.e. as has been said, capitalism is the failure to establish a controlled socioeconomic system rather than the presence of a purposefully designed one. Its benefits for the people are at this point only a side effect, people see it as good and continue to support it. However the system has other goals of its own, and that is development and constant growth meant to create a higher organism, just as smaller living cells once formed us, multi-cell organisms. The system will start being less and less beneficial to the people, who will only become cells in a higher organism to which they'll become slaves. A cell isn't supposed to be happy, it is supposed to sacrifice its life for the good of the higher organism.

{ This initial prosperous stage appeared e.g. in Czechoslovakia, where I lived, in the 90s, after the fall of the totalitarian regime. Everything was beautiful, sadly it didn't last longer than about 10 years. ~drummyfish }

Slowly "startups" evolve to medium sized businesses and a few will become the big corporations. These are the first higher order entities that have an intelligence of their own, they are composed of humans and technology who together work solely for the corporation's further growth. A corporation has a super human intelligence (combined intelligence of its workers) but has no human emotion or conscience (which is suppressed by the corporation's structure), it is basically the rogue AI we read about in sci-fi horror movies. Corporation selects only the worst of humans for the management positions and has further mechanisms to eliminate any effects of human conscience and tendency for ethical behavior; for example it works on the principle of "I'm just doing my job": everyone is just doing a small part of what the whole company is doing so that no one feels responsible for the whole or sometimes doesn't even know what he's part of. If anyone protests, he's replaced with a new hire. Of course, many know they're doing something bad but they have no choice if they want to feed their families, and everyone is doing it.

Corporations make calculated decisions to eliminate any competition, they devour or kill smaller businesses with unfair practices (see e.g. Microsoft's infamous EEE), more marketing and by other means, both legal and illegal. They develop advanced psychological methods and exert extreme pressure, such as brainwashing by ads, on the population to create an immensely powerful propaganda that bends any natural human thinking. With this corporations no longer need to satisfy the demand, they create the demand arbitrarily. They create artificial scarcity, manipulate the market, manipulate the people, manipulate laws. At this point they've broken the system; competition no longer works as idealized by theoretical capitalists, corporations can now do practically anything they want.

This is an evolutionary system in which the fitness function is simply the ability to make capital. Entities involved in the market are simply chosen by natural selection to be the ones that best make profit, i.e. who are best at circumventing laws, brainwashing, hiding illegal activities etc. Ethical behavior is a disadvantage that leads to elimination; if a business decides to behave ethically, it is outrun by the one who doesn't have this weakness.

The unfair, unethical behavior of corporations is still supposed to be controlled by the state, however corporations become stronger and bigger than states; they can manipulate laws by lobbying, financially supporting preferred candidates, brainwashing people via private media and so on. States are the only force left supposed to protect people from this pure evil, but they are too weak; a single organization of relatively few people, who are, quite importantly, often corporate managers themselves, can't compete against a plethora of the best warriors selected by the extremely efficient system of the free market. States slowly turn to serving corporations, becoming their tools, and then slowly dissolve (see how small a role the US government already plays). This leads to "anarcho capitalism", the worst stage of capitalism, where there is no state, no entity supposed to protect the people, there is only one rule and that is the unlimited rule of the strongest.

Here the strongest corporation takes over the world and starts becoming the higher order organism of the whole Earth, capitalist singularity has been reached. The world corporation doesn't have to pretend anything at this point, it can simply hire an army, it can use physical force, chemical weapons, torture, unlimited surveillance, anything to achieve further seize of remaining bits of power and resources.

People will NOT protest or revolt at this point, they will accept anything that comes and even if they suffer everyday agony and the system is clearly obviously set up for their maximum exploitation, they will do nothing -- in fact they will continue to support the system and make it stronger and they will see more slavery as more freedom; this tendency is already present in rightists today. You may ask why; you think that at some point people will have enough and will seize back their power. This won't happen, just as the billions of chicken and pigs daily exploited at factories won't ever revolt -- firstly because the system will have absolute control over people at this point, they will be 100% dependent on the system even if they hate it, they will have proprietary technology as part of their bodies (which they willingly accepted in the past in exchange for bigger comfort while ignoring our warnings about loss of freedom), they will be dependent on drugs of the system (called "vaccines" or "medicine"), air that has to be cleaned and is unbreathable anywhere one would want to escape, 100% of communication will be monitored to prevent any spark of revolution etc. Secondly the system will have rewritten history so that people won't see that life used to be better and bearable -- just as today we think we live in the best times of history due to the interpretation of history that was force fed us at schools and by other propaganda, in the future a human in everyday agony will think history was even worse, that there is no other option than for him to suffer every day and that it's a privilege he can even live that way.

We can only guess what will happen here; a collapse due to instability or total destruction of the environment is possible, which would at least save the civilization from the horrendous fate of being eternally tortured. If the system survives, humans will probably be more and more genetically engineered to be more submissive, further killing any hope of a possible change, surveillance chips will be implanted in everyone, reproduction will be controlled precisely and finally perhaps the system will be able, thanks to an advanced AI, to exist and work more efficiently without humans completely, so they will be eliminated. This is how mankind ends.

{ So here you have it -- it's all here for anyone to read, explained and predicted correctly and in a completely logical way, we even offer a way to prevent this and fix the system, but no one will do it because this will be buried and censored by search engines and the 0.0000000000001% who will find this by chance will dismiss it due to the amount of brainwashing that's already present today. It's pretty sad and depressive, but what more can we do? ~drummyfish }

Capitalist Propaganda And Fairy Tales

Capitalist brainwashing is pretty sophisticated -- unlike with centralized oppressive regimes, capitalism has a decentralized way of creating and spreading propaganda, in ways similar to for example self-replicating and self-modifying malware in the world of software. Creators and promoters of capitalist propaganda are mostly people who are unaware of doing so, they have been brainwashed and programmed by the system itself to behave that way, for example just by being exposed to hearing the capitalist fairy tales since they were born. Some examples of common capitalist propaganda you will probably encounter are the following:

So What To Replace Capitalism With?

See less retarded society.


Capitalist Singularity

Capitalist singularity is a point in time at which capitalism becomes irreversible and the cancerous growth of society unstoppable due to corporations taking absolute control over society. It is when people lose any power to revolt against corporations as corporations become stronger than states and any other collective effort towards their control.

This is similar to the famous technological singularity, the difference being that society isn't conquered by a digital AI but rather by a superintelligent entity in the form of a corporation. While many people see the danger of superintelligent AIs, surprisingly not many have noticed that we've already seen the rise of such AIs -- corporations. A corporation is an entity much more intelligent than any single individual, with the single preprogrammed goal of profit. A corporation doesn't have any sense of morals as morals are an obstacle towards making profit. A corporation runs on humans but humans don't control it; there are mechanisms in place to discourage moral behavior of people inside corporations and anyone exhibiting such behavior is simply replaced.


Capitalist Software

Capitalist software is software that late stage capitalism produces and is practically 100% shitty modern bloat and malware hostile to its users, made with the sole goal of benefiting its creator (often a corporation). Capitalist software is not just proprietary corporate software, but a lot of times "open source", indie software and even free software that's just infected by the toxic capitalist environment -- this infection may come deep even into the basic design principles, even such things as UI design, priorities and development practices and subtle software behavior which have simply all been shaped by the capitalist pressure on abusing the user.

{ Seriously I don't have enough brain to understand how anyone can accept this shit. ~drummyfish }

Capitalist software largely mimics in technology what capitalist economy is doing in society -- for example it employs huge waste of resources (computing resources such as RAM and CPU cycles as an equivalent to natural resources) in favor of rapid growth (accumulation of "features"), it creates hugely complex, interdependent and fragile ever growing networks (tons of library and hardware dependencies as an equivalent of import/export dependencies of countries) and employs consumerism (e.g. in the form of mandatory frequent updates). These effects of course bring all the negative implications along and lead to highly inefficient, fragile, bloated, unethical software.

Basically everyone will agree that corporate software such as Windows is to a high degree abusive to its users, be it by its spying, unjustified hardware demands, forced non customizability, price etc. A mistake a lot of people make is to think that sticking a free license to similar software will simply make it magically friendly to the user and that therefore most FOSS programs are ethical and respect their users. This is sadly not the case, a license is only the first necessary step towards freedom, but not a sufficient one -- other important steps have to follow.

A ridiculous example of capitalist software is the most consumerist type: games. AAA games are pure evil that no longer even try to be good, they just try to be addictive like drugs. Games on release aren't even supposed to work correctly, tons of bugs are the standard, something that's expected by default, customers aren't even meant to receive a finished product for their money. They aren't even meant to own the product or have any control over it (lend it to someone, install it on another computer, play it offline or play it when it gets retired). These games spy on people (via so called anti-cheat systems), are shamelessly meant to be consumed and thrown away, are purposefully incompatible ("exclusives"), bloated, discriminative against low-end computers and even aim attacks at children ("lootboxes"). Game corporations attack and take down fan modifications and remakes and show all imaginable kinds of unethical behavior such as trying to steal rights for maps/mods created with the game's editor (Warcraft: Reforged).

But how can a FOSS program possibly be abusive? Let's mention a few examples:

The essential issue of capitalist software is in its goal: profit. This doesn't have to mean making money directly, profit can also mean e.g. gaining popularity and political power. This goal goes before and eventually against goals such as helping and respecting the users. A free license is a mere obstacle on the way towards this goal, an obstacle that may for a while slow down a corporation from abusing the users, but which will eventually be overcome just by the sheer power of the market environment, which works on the principles of Darwinian evolution: those who make the most profit, by any means, survive and thrive.

Therefore "fixing" capitalist software is only possible via redefinition of the basic goal to just developing selfless software that's good for the people (as opposed to making software for profit). This approach requires eliminating or just greatly limiting capitalism itself, at least from the area of technology. We need to find other ways than profit to motivate development of software and yes, other ways do exist (morality, social status, fun etc.).



Welcome to the cathedral. Here we mourn the death of technology by the hand of capitalism.

{ Sometimes we are very depressed from what's going on in this world, how technology is raped and used by living beings against each other. Seeing on a daily basis the atrocities done to the art we love and the atrocities done by it -- it is like watching a living being die. Sometimes it can help to just know you are not alone. ~drummyfish }

           R. I. P.

      long time ago - now

  Here lies technology who was
helping people tremendously until
its last breath. It was killed by



CC0 is a waiver (similar to a license) of copyright, created by Creative Commons, that can be used to dedicate one's work to the public domain (kind of).

Unlike a license, a waiver such as this removes (at least effectively) the author's copyright; by using CC0 the author willingly gives up his own copyright so that the work will no longer be owned by anyone (while a license preserves the author's copyright while granting some rights to other people). It's therefore the most free and permissive option for releasing intellectual works. CC0 is designed in a pretty sophisticated way: it also waives "neighboring rights" (such as moral rights) and contains a fallback license in case waiving copyright isn't possible in a certain country. For this reason CC0 is one of the best ways, if not the best, of truly and completely dedicating works to the public domain world-wide (well, at least in terms of copyright). In this world of extremely fucked up intellectual property laws it is not enough to state "my work is public domain" -- you need to use something like CC0 to achieve legally valid public domain status.

CC0 is recommended by LRS for both programs and other art -- however for programs additional waivers of patents should be added as CC0 doesn't deal with patents. CC0 is endorsed by the FSF but not OSI (who rejected it because it explicitly states that trademarks and patents are NOT waived).

Things Under CC0

Here are some things and places with CC0 materials that you can use in your projects so that you can release them under CC0 as well. BEWARE: if you find something under CC0, do verify it's actually valid, normies often don't know what CC0 means and happily post derivative works of proprietary stuff under CC0.





Just kidding, LRS wiki is omnipresent :)

Censorship means intentional effort towards preventing exchange of certain kinds of information among other individuals, for example suppression of free speech, altering old works of art for political reasons, forced takedowns of copyrighted material from the Internet etc. Note that censorship does NOT include some kinds of data or information filtering, for example filtering out noise such as spam on a forum or static from audio (as noise is non-information) or PERSONAL avoidance of certain information (e.g. using adblock or hiding someone's forum posts ONLY FOR ONESELF). Censorship is always wrong -- in a good society there is never the slightest reason to censor anything, therefore whenever censorship is deemed the best solution, something within the society is deeply fucked up. In current society censorship, along with propaganda, brainwashing and misinformation, is extremely prevalent and growing -- it's being pushed not only by governments and corporations but also by harmful terrorist groups such as LGBT and feminism who force media censorship (e.g. that of Wikipedia or search engines) and punishment of free speech (see political correctness and "hate speech").

Sometimes it is not 100% clear which action constitutes censorship: for example categorization such as moving a forum post from one thread to another (possibly less visible) thread may or may not be deemed censorship -- this depends on the intended result of such action; moving a post somewhere else doesn't remove it completely but can make it less visible. Whether something is censorship always depends on the answer to the question: "does the action prevent others from information sharing?".

There exist tools for bypassing censorship, e.g. proxies or encrypted and/or distributed, censorship-resistant networks such as Tor, Freenet, I2P or torrent file sharing. Watch out: using such tools may be illegal or at least make you look suspicious and be targeted harder by state surveillance.





In mathematics chaos is a phenomenon that makes it extremely difficult to predict, even approximately, the result of some process even if we completely know how the process works and what state it starts in. In more technical terms chaos is a property of a nonlinear deterministic system in which even a very small change in input creates a great change in the output, i.e. the system is very sensitive to initial conditions. Chaos is a topic studied by the field called chaos theory and is important in all of science. In computer science it is important for example for the generation of pseudorandom numbers or in cryptography. Every programmer should be familiar with the existence of chaotic behavior because it emerges very often in mathematics (and therefore in programming); it may pose a problem but, of course, it may be taken advantage of as well.

Perhaps the most important point is that a chaotic system is difficult to predict NOT because of randomness, lack of information about it or even its incomprehensible complexity (many chaotic systems are defined extremely simply), but because of its inherent structure that greatly amplifies any slight nudge to the system and gives any such nudge a great significance. This may be caused by things such as feedback loops and domino effects. Generally we describe this behavior as so called butterfly effect -- we liken this to the fact that a butterfly flapping its wings somewhere in a forest can trigger a sequence of events that may lead to causing a tornado in a distant city a few days later.

Examples of chaotic systems are the double pendulum, weather (which is why it is so difficult to predict), dice rolls, the rule 30 cellular automaton, the logistic map, gravitational interaction of N bodies or the Lorenz differential equations. Langton's ant sometimes behaves chaotically. Another example may be e.g. a billiard table with multiple balls: if we hit one of the balls with enough strength, it'll shoot and bounce off of walls and other balls, setting them into motion and so on until all balls come to a stop in a specific position. If we hit the ball with exactly the same strength but from an angle differing by just 1 degree, the final position would probably end up being completely different. Despite the system being deterministic (governed by exact and predictable laws of motion, neglecting things like quantum physics) a slight difference in input causes a great difference in output.

A simple example of a chaotic function is sin(1/x) for x near 0, where it oscillates so quickly that just a tiny shift along the x axis drastically changes the result. See what unpredictable results a variant of this function gives:

  x     | 1000 * sin(10^9 / x), truncated
  ------|--------------------------------
  4.001 |  455
  4.002 |  818
  4.003 | -511
  4.004 | -974
  4.005 | -335



Cheating means circumventing or downright violating rules, usually while trying to keep this behavior secret. You can cheat on your partner, in games, in business etc. However despite cheating seeming like purely immoral behavior at first, it may be relatively harmless or even completely moral, e.g. in computer graphics we sometimes "cheat" our sense of sight and fake certain visual phenomena, which leads to efficient rendering algorithms. In capitalism cheating is demonized and people are brainwashed to take part in cheater witch hunts.

The truth is that cheating is only an issue in a shitty society that's driven by competition. In such a society there is a huge motivation for cheating (sometimes literally physical survival) as well as potentially disastrous consequences of it. Under the tyranny of capitalism we are led to worship heroes and high achievers and everyone gets pissed when we get fooled. Corporations go "OH NOES our multi billion dollar entertainment industry is going to go bankrupt if consoomers get annoyed by cheaters! People are gonna lose their bullshit jobs! Someone is going to get money he doesn't deserve! Our customers may get butthurt!!!" (as if corporations themselves weren't basically just stealing money and raping people lol). So they start a huge brainwashing propaganda campaign, a cheater witch hunt. States do the same, communities do the same, everyone wants to stone cheaters to death but at the same time society pressures all of us to compete to death with others or else we'll starve. We reward winners and torture the losers, then bash people who try to win -- and no, many times there is no other choice than to cheat, the top of any competition is littered with cheaters, most just don't get caught, so in about 99% of cases the only way to the top is to cheat and try to not get caught, just to have a shot at winning against others. It is proven time after time: legit looking people in the top leagues of sports, business, science and other areas are constantly being revealed as cheaters, usually by pure accident (i.e. the number of actual cheaters is MANY times higher). Take a look e.g. at the Trackmania cheating scandal in which, after someone invented a replay analysis tool, he revealed that a great number of top level players were just cheaters, including possibly the GOAT of Trackmania Riolu (who just ragequit and never showed again lol). Of course famous cases like Lance Armstrong don't even have to be mentioned.
Cheater detection systems are (and always will be) imperfect and try to minimize false positives, so only the cheaters who REPEATEDLY make MANY very OBVIOUS mistakes get caught; the smart cheaters stay and take the top places in the competitive system, just as surely as natural selection leads to the evolution of organisms that best adapt to the environment. Even if perfect cheat-detection systems existed, the problem would just shift from cheating to immoral unsportsmanship, i.e. abuse of rules that's technically not cheating but effectively presents the same kind of problems. How to solve this enormously disgusting mess? We simply have to stop desperately holding on to the system itself, we have to ditch it.

In a good society, such as LRS, cheating is not an issue at all, there's no motivation for it (people don't have to prove their worth by their skills, there is no money, people don't worship heroes, ...) and there are no negative consequences of cheating worse than someone ragequitting an online game -- which really isn't an issue of cheating anyway but simply a consequence of an unskilled player facing a skilled one (whether the pro's skill is natural or artificial doesn't play a role, the noob will ragequit anyway). In a good society cheating can become a mild annoyance at worst, and it can really be a positive thing, it can be fun -- seeing for example a skilled pro face and potentially even beat a cheater is a very interesting thing. If someone wants to win by cheating, why not let him? Valid answers to this can only be given in the context of a shit society. In a good society choosing to cheat in a game is as if someone chooses to fly to the top of a mountain by helicopter rather than climbing it -- the choice is everyone's to make.

The fact that cheating isn't really an issue is supported by the hilariously vast double standards applied e.g. by chess platforms in this matter. On one hand they state in their TOS they have absolutely 0% tolerance of any kind of cheating/assistance and will lifeban players for the slightest suspicion of cheating, yelling "WE HAVE TO FIGHT CHEATING"; on the other hand they allow streamers to literally cheat on a daily basis on live stream where everyone is seeing it, of course because streamers bring them money -- ALL top chess streamers (chessbrah, Nakamura, ...), including the world champion Magnus Carlsen himself, have videos of themselves getting advice on moves from the chat or even from high level players present during the stream. Magnus Carlsen is filmed taking over his friend's low rated account and winning a game, which is the same as if the friend literally just used an engine to win the game, and Magnus is also filmed getting advice from a top grandmaster on a critical move in a tournament that won him the game and granted him a FINANCIAL PRIZE. The world chess champion is literally filmed winning money by cheating and no one cares because it was done as part of a highly lucrative stream "in a fun/friendly mood". Chessbrah streams frequently consist of many people in the room just giving advice on moves to the one who is currently playing; of course they censor all comments that try to bring up the fact that this is 100% cheating directly violating the platform's TOS. People literally have no brains, they only freak out about cheating when they're told to by the industry; when cheating is good for business people are told to shut up because it's okay, and indeed they just shut up and keep consuming.



Chess is an old two-player board game, perhaps the most famous and popular among all board games in history. It is a complete information game that simulates a battle of two armies on an 8x8 board with different battle pieces. Chess is also called the King's Game; it has a world-wide competitive community and is considered an intellectual sport but is also a topic of active research (as the estimated number of chess games is bigger than a googol, it is unlikely to ever be solved) and programming (many chess engines, AIs and frontends are being actively developed).

{ There is a nice black and white indie movie called Computer Chess about chess programmers of the 1980s, it's pretty good, very oldschool, starring real programmers and chess players, check it out. ~drummyfish }

Drummyfish has created a suckless/LRS chess library smallchesslib which includes a simple engine called smolchess.

At LRS we consider chess to be one of the best games for the following reasons:

Chess as a game is not and cannot be copyrighted, but can chess games (moves played in a match) be copyrighted? Thankfully there is a pretty strong consensus and precedent saying this is not the case, even though capitalists try to play the intellectual property card from time to time (e.g. in 2016 tournament organizers tried to stop chess websites from broadcasting the match moves under "trade secret protection", unsuccessfully).

Chess In General

Chess evolved from ancient board games in India in about the 6th century. Nowadays the game is internationally governed by FIDE, which has taken on the role of an authority defining the official rules: FIDE rules are considered to be the standard chess rules. FIDE also organizes tournaments, promotes the game and keeps a list of registered players whose performance it rates with so called Elo system -- based on the performance it also grants titles such as Grandmaster (GM, strongest), International Master (IM, second strongest) or Candidate Master (CM).

A single game of chess is seen as consisting of three stages: opening (starting, theoretical "book" moves, developing pieces), middlegame (seen as the pure core of the game) and endgame (ending in which only relatively few pieces remain on the board). There is no clear border between these stages and they are sometimes defined differently, however each stage plays a bit differently and may require different skills and strategies; for example in the endgame king becomes an active piece while in the opening and middlegame he tries to stay hidden and safe.

The study of chess openings is called opening theory or just theory. Playing the opening stage is special in being based on memorization of this theory, i.e. hundreds or even thousands of existing opening lines that have been studied and analyzed by computers, rather than on mental calculation (the logical "thinking ahead" present in the middlegame and endgame). Some see this as a weakness of chess that makes players spend extreme energy on pure memorization. One of the best and most famous players, Bobby Fischer, was of this opinion and created a chess variant with a randomized starting position that prevents such memorization: so called chess 960.

Elo rating is a mathematical system of numerically rating the performance of players (it is used in many sports, not just chess). Given two players with Elo ratings it is possible to compute the probability of the game's outcome (e.g. white has a 70% chance of winning etc.). FIDE set the parameters so that the rating roughly means: < 1000: beginner, 1000-2000: intermediate, 2000-3000: master. More advanced systems have also been created, namely the Glicko system.

The rules of chess are quite simple (easy to learn, hard to master) and can be found anywhere on the Internet. In short, the game is played on an 8x8 board by two players: one with white pieces, one with black. Each piece has a way of moving and capturing (eliminating) enemy pieces, for example bishops move diagonally while pawns move one square forward and take diagonally. The goal is to checkmate the opponent's king, i.e. make the king attacked by a piece while giving him no way to escape this attack. There are also lesser known rules that noobs often miss and ignore, e.g. so called en passant or the 50 move rule that allows claiming a draw if no pawn has moved and nothing has been captured in the last 50 moves.

At the competitive level a clock (so called time control) is used to give each player a limited time for making moves: with unlimited move time games would be painfully long and more a test of patience than skill. The clock can also nicely help balance unequal opponents by giving the stronger player less time to move. Based on the amount of time to move there exist several formats, most notably correspondence (slowest, days for a move), classical (slow, hours per game), rapid (faster, tens of minutes per game), blitz (fast, a few seconds per move) and bullet (fastest, mere seconds per move).

Currently the best player in the world is pretty clearly Magnus Carlsen from Norway with Elo rating 2800+.

During covid chess experienced a small boom among normies and YouTube chess channels gained considerable popularity. This gave rise to memes such as the bong cloud opening popularized by the top player and streamer Hikaru Nakamura; the bong cloud is an intentionally shitty opening that's supposed to taunt the opponent (it's even been played in serious tournaments lol).

White is generally seen as having a slight advantage because he always has the first move. This doesn't play such a big role in beginner and intermediate games but starts to become apparent in master games. How big the advantage is remains a matter of ongoing debate: most people are of the opinion there exists a slight advantage, some people think chess is a win for white with perfect play while others believe chess is a draw with perfect play. Probably only a very tiny minority of people think white doesn't have any advantage.

On perfect play: as stated, chess is unlikely to ever be solved, so it is unknown if chess is a theoretical forced draw or a forced win for white (or even a win for black); however many simplified endgames and some simpler chess variants have already been solved. Even if chess were ever solved, it is important to realize one thing: perfect play may be unsuitable for humans, so solving chess might have no significant effect on the game as played by humans. Imagine the following: we have a chess position in which we are deciding between move A and move B. We know that playing A leads to a very good position in which white has great advantage and easy play (many obvious good moves), however if black plays perfectly he can secure a draw here. We also know that if we play B and then play perfectly for the next 100 moves, we will win with mathematical certainty, but if we make just one incorrect move during those 100 moves, we will get to a decisively losing position. While a computer will play move B here because it is sure it can play perfectly, for a human it is probably better to play A because a human is very likely to make mistakes (even a master). For this reason humans may willingly choose to play mathematically worse moves -- a slightly worse move may lead to safer and more comfortable play for a human.

Chess And Computers

{This is an absolutely amazing video about weird chess algorithms :) ~drummyfish}

Chess is a big topic in computer science and programming: computers not only help people play chess, train their skills, analyze positions and perform research of games, but they also allow mathematical analysis of chess and provide a platform for things such as artificial intelligence.

There is a great online Wiki focused on programming chess engines: https://www.chessprogramming.org.

Chess software is usually separated into libraries, chess engines and frontends (or boards). A chess engine is typically a CLI program capable of playing chess but also doing other things such as evaluating arbitrary positions, hinting best moves, saving and loading games etc. Frontends on the other hand are GUI programs that help people interact with the underlying engine.

For communication between different engines and frontends there exist standards such as XBoard (engine protocol), UCI (another engine protocol), FEN (way of encoding a position as a string), PGN (way of encoding games as strings) etc.

Computers have already surpassed the best humans in their playing strength (we can't exactly compute an engine's Elo as it depends on the hardware used, but generally the strongest would rate high above 3000 FIDE). As of 2021 the strongest chess engine is considered to be the FOSS engine Stockfish, with other strong engines being e.g. Leela Chess Zero (also FOSS) or AlphaZero (proprietary, by Google). GNU Chess is a pretty strong free software engine by GNU. There are world championships for chess engines such as the Top Chess Engine Championship or the World Computer Chess Championship. CCRL is a list of chess engines along with their Elo ratings. Despite the immense strength of modern engines, there are still very specific situations in which humans beat the computer (shown e.g. in this video).

The first chess computer that beat the world champion (at the time Garry Kasparov) was famously Deep Blue in 1997. Alan Turing himself wrote a chess playing algorithm but in his time there were no computers to run it, so he executed it by hand -- nowadays the algorithm has been implemented on computers (there are bots playing this algorithm e.g. on lichess).

For online chess there exist many servers such as https://chess.com or https://chess24.com, but for us the most important is https://lichess.org which is gratis and FOSS (it also allows users to run bots under special accounts, which is an amazing way of testing engines against people and other engines). These servers rate players with Elo/Glicko, allow them to play with each other or against computers, solve puzzles, analyze games, play chess variants, explore opening databases etc.

Playing strength is not the only possible measure of chess engine quality, of course -- for example there are people who try to make the smallest chess programs (see countercomplex and golfing). As of 2022 the leading programmer of smallest chess programs seems to be Óscar Toledo G. (https://nanochess.org/chess.html). Unfortunately his programs are proprietary, even though their source code is public. The programs include Toledo Atomchess (392 x86 instructions), Toledo Nanochess (world's smallest C chess program, 1257 non-blank C characters) and Toledo Javascript chess (world's smallest Javascript chess program). He won the IOCCC. Another small chess program is micro-Max by H. G. Muller (https://home.hccnet.nl/h.g.muller/max-src2.html, 1433 C characters, Toledo claims it is weaker than his program).

{ Nanochess is actually pretty strong, in my testing it easily beat smallchesslib Q_Q ~drummyfish }


Chess stats are pretty interesting.

The number of possible games is not known exactly; Shannon estimated it at 10^120 (a lower bound, known as the Shannon number). The number of possible games by plies played is 20 after 1, 400 after 2, 8902 after 3, 197281 after 4, 4865609 after 5, and 2015099950053364471960 after 15.

Similarly the number of possibly reachable positions (positions for which a so called proof game exists) is not known exactly; it is estimated at between 10^40 and 10^50. The number of possible positions by plies played is 20 after 1, 400 after 2, 5362 after 3, 72078 after 4, 822518 after 5, and 726155461002 after 11.

Shortest possible checkmate is by black on ply number 4 (so called fool's mate). As of 2022 the longest known forced checkmate is in 549 moves -- it has been discovered when computing the Lomonosov Tablebases.

Average game of chess lasts 40 moves. Average branching factor (number of possible moves at a time) is around 33.

White wins about 38% of games, black wins about 34%, the remaining 28% are draws.

What is the longest possible game? It depends on the exact rules and details we set, for example if a 50 move rule applies, a player MAY claim a draw but also doesn't have to -- but if neither player ever claims a draw, a game can be played infinitely -- so we have to address details such as this. Nevertheless the longest possible chess game upon certain rules has been computed by Tom7 at 17697 half moves in a paper for SIGBOVIK 2020.

What's the most typical game? We can try to construct such a game from a game database by always picking the most common move in given position. Using the lichess database at the time of writing, we get the following incomplete game (the remainder of the game is split between four games, 2 won by white, 1 by black, 1 drawn):

1. e4 e5 2. Nf3 Nc6 3. Bc4 Bc5 4. c3 Nf6 5. d4 exd4 6. cxd4 Bb4+ 7. Nc3 Nxe4 8. O-O Bxc3 9. d5 Bf6 10. Re1 Ne7 11. Rxe4 d6 12. Bg5 Bxg5 13. Nxg5 h6 14. Qe2 hxg5 15. Re1 Be6 16. dxe6 f6 17. Re3 c6 18. Rh3 Rxh3 19. gxh3 g6 20. Qf3 Qa5 21. Rd1 Qf5 22. Qb3 O-O-O 23. Qa3 Qc5 24. Qb3 d5 25. Bf1

You can try to derive your own stats, there are huge free game databases such as the Lichess CC0 database of billions of games from their server.


Besides similar games such as shogi there are many variants of chess, i.e. slight modifications of the rules, most notably for example chess 960. The following is a list of some variants:

Programming Chess

Programming chess is a fun and enriching experience and is therefore recommended as a good exercise. There is nothing more satisfying than writing a custom chess engine and then watching it play on its own.

The core of chess programming is writing the AI; everything else, i.e. implementing the rules, communication protocols etc., is pretty straightforward (but still a good programming exercise). Nevertheless one has to pay great attention to eliminating as many bugs as possible; really, the importance of writing automatic tests can't be stressed enough, as debugging the AI will be hard enough and can become unmanageable with small bugs creeping in.

The AI itself works in almost all cases on the same principle: firstly we implement so called static evaluation function -- a function that takes a chess position and outputs its evaluation number which says how good the position is for white vs black (a positive number favoring white, a negative one black). This function considers a number of factors such as total material of both players, pawn structure, king safety, piece mobility and so on (in new engines this function is often a learned neural network, but it may very well be written by hand). Secondly we implement a search algorithm -- typically some modification of the minimax algorithm -- that recursively searches the game tree and looks for a move that will lead to the best result, i.e. to the position for which the evaluation function gives the best value. This basic principle, especially the search part, gets very complex as there are many possible weaknesses and optimizations.

Exhaustively searching the tree to great depths is not possible due to the astronomical number of possible move combinations, so the engine has to limit the depth quite greatly. Normally it will search all moves to a small depth (e.g. 2 or 3 half moves or plies) and then extend the search for interesting moves such as exchanges or checks. Maybe the greatest danger of searching algorithms is so called horizon effect which has to be addressed somehow (e.g. by detecting quiet positions, so called quiescence). If not addressed, the horizon effect will make an engine misevaluate certain moves by stopping the evaluation at a certain depth even if the played out situation would continue and lead to a vastly different result (imagine e.g. a queen taking a pawn which is guarded by another pawn; if the engine stops evaluating after the pawn take, it will think it's a won pawn, when in fact it's a lost queen). There are also many techniques for reducing the number of searched tree nodes and speeding up the search, for example pruning methods such as alpha-beta (which subsequently works best with correctly ordering moves to search), or transposition tables (remembering already evaluated positions so that they don't have to be evaluated again when encountered via a different path in the tree).

Many other aspects come into the AI design such as opening books (databases of best opening moves), endgame tablebases (databases of winning moves in simple endgames), heuristics in search, clock management, pondering (thinking on opponent's move), learning from played games etc. For details see the above linked chess programming wiki.


The exact rules of chess and their scope may depend on situation, this is just a summary of the rules generally used nowadays.

The start setup of a chessboard is the following (lowercase letters are for black pieces, uppercase for white pieces, on a board with colored squares A1 is black):

    /8 |r n b q k b n r|
 r | 7 |p p p p p p p p|
 a | 6 |. . . . . . . .|
 n | 5 |. . . . . . . .|
 k | 4 |. . . . . . . .|
 s | 3 |. . . . . . . .|
   | 2 |P P P P P P P P|
    \1 |R N B Q K B N R|
        A B C D E F G H

Players take turns in making moves, white always starts. A move consists of moving one (or in special cases two) of own pieces from one square to another, possibly capturing (removing from the board) one opponent's piece -- except for a special en passant move capturing always happens by moving one piece to the square occupied by the opposite color piece (which gets removed). Of course no piece can move to a square occupied by another piece of the same color. A move can NOT be skipped. A player wins by giving a checkmate to the opponent (making his king unable to escape attack) or if the opponent resigns. If a player is to move but has no valid moves, the game is a draw, so called stalemate. If neither player has enough pieces to give a checkmate, the game is a draw, so called dead position. There are additional situations in which the game can be drawn (threefold repetition of position, 50 move rule). Players can also agree to a draw. A player may also be declared a loser if he cheated, if he lost on time in a game with clock etc.

The individual pieces and their movement rules are:

Check: If the player's king is attacked, i.e. it is immediately possible for an enemy piece to capture the king, the player is said to be in check. A player in check has to make such a move as to not be in check after that move.

A player cannot make a move that would leave him in check!

Castling: If a player hasn't castled yet and his king hasn't been moved yet and his kingside (queenside) rook hasn't been moved yet and there are no pieces between the king and the kingside (queenside) rook and the king isn't and wouldn't be in check on his square or any square he will pass through or land on during castling, short (long) castling can be performed. In short (long) castling the king moves two squares towards the kingside (queenside) rook and the rook jumps over the king to the square immediately on the other side of the king.

Promotion: If a pawn reaches the 1st or 8th rank, it is promoted, i.e. it has to be exchanged for either a queen, rook, bishop or knight of the same color.

Checkmate: If a player is in check but cannot make any move to get out of it, he is checkmated and lost.

En passant: If a pawn moves 2 squares forward (from the start position), in the immediate next move the opponent can take it with a pawn in the same way as if it only moved 1 square forward (the only case in which a piece captures a piece by landing on an empty square).

Threefold repetition is a rule allowing a player to claim a draw if the same position (piece positions, player's turn, castling rights, en passant state) occurs three times (not necessarily consecutively). The 50 move rule allows a player to claim a draw if no pawn has moved and no piece has been captured in the last 50 moves (both players making their move counts as a single move here).

LRS Chess

Chess is only mildly bloated but what if we try to unbloat it completely? Here we propose the LRS version of chess. The rule changes against normal chess are:

See Also


"Cloud Computing"

Cloud is just someone else's computer.

Cloud computing, more accurately known as clown computing, means giving up an autonomous computer by storing one's data as well as running one's programs on someone else's (often a corporation's) computer, known as the cloud, through the Internet, becoming wholly dependent on someone else to which one gives all the power. While the general idea of server computers and remote terminals is not bad in itself and may be utilized in very good ways, the term cloud computing stands for abusing this idea e.g. by capitalists or states to take away autonomous computers from the people as well as to restrict freedoms of people in other ways, for example by pushing DRM, making it impossible to truly own a copy of software or other data, to run computations privately, isolated from the Internet or run non-approved, user-respecting software. Moreover clown computing as applied nowadays is mostly a very bad engineering approach that wastes bandwidth, introduces lag, requires complex and expensive infrastructure etc.

Despite all this "cloud" is the mainstream nowadays, it is the way of computing among normies, even despite regular leaks and losses of their personal data etc., simply because they're constantly being pushed to it by the big tech (Apple, Google, Micro$oft, ...) -- many times they don't even have a choice, they are simply supposed to SHUT UP AND CONSUME. And of course they wouldn't even have an idea about what's going on in the first place, all that matters to a normie is "comfort", "everyone does it", "I just need my TikTok" etc. Zoomers probably aren't even aware of the cloud, they simply have phones with apps that show their photos if Apple approves of it, they don't even care how shit works anymore.

In the future non-cloud computers will most likely become illegal. This will be justified by autonomous computers being "dangerous", only needed by terrorists, pirates and pedophiles. An autonomous computer will be seen as a gun, the right to own it will be greatly limited.



{ We have a C tutorial! ~drummyfish }

C is a low level, structured, statically typed imperative compiled programming language, the go-to language of less retarded programmers. It is the absolutely preferred language of the suckless community as well as of most true experts, for example the Linux and OpenBSD developers, because of its good, relatively simple design, uncontested performance, wide support, great number of compilers, level of control and a greatly established and tested status. C is perhaps the most important language in history, it influenced, to smaller or greater degree, basically all of the widely used languages today such as C++, Java, JavaScript etc., however it is not a thing of the past -- in the area of low level programming C is still the number one unsurpassed language.

{ Look up The Ten Commandments for C Programmers by Henry Spencer. ~drummyfish }

It is usually not considered an easy language to learn because of its low level nature: it requires good understanding of how a computer actually works and doesn't prevent the programmer from shooting himself in the foot. The programmer is given full control (and therefore responsibility). There are things considered "tricky" which one must be aware of, such as undefined behavior of certain operators and raw pointers. This is what can discourage a lot of modern "coding monkeys" from choosing C, but it's also what inevitably allows such great performance -- undefined behavior allows the compiler to choose the most efficient implementation. On the other hand, C as a language is pretty simple without modern bullshit concepts such as OOP, it is not so much hard to learn as hard to master, like any other true art.

C is said to be the "platform independent assembly" because of its low level nature, great performance etc. -- though C is structured (has control structures such as branches and loops) and can be used in a relatively high level manner, it is also possible to write assembly-like code that operates directly with bytes in memory through pointers without many safety mechanisms, so C is often used for writing things like hardware drivers. On the other hand some refrain from likening C to assembly because C compilers still perform many transformations of the code and what you write is not necessarily always what you get.

Mainstream consensus acknowledges that C is among the best languages for writing low level code and code that requires performance, such as operating systems, drivers or games. Even scientific libraries with normie-language interfaces -- e.g. various machine learning Python libraries -- usually have the performance critical core written in C. Normies will tell you that for things outside this scope C is not a good language, with which we disagree -- we recommend using C for basically everything that's supposed to last, i.e. if you want to write a good website, you should write it in C etc.

History and Context

C was developed in 1972 at Bell Labs alongside the Unix operating system by Dennis Ritchie and Brian Kernighan, as a successor to the B language (portable language with recursion) written by Dennis Ritchie and Ken Thompson, which was in turn inspired by the ALGOL language (code blocks, lexical scope, ...).

In 1973 Unix was rewritten in C. In 1978 Kernighan and Ritchie published a book called The C Programming Language, known as K&R, which became something akin to the C specification. In 1989, the ANSI C standard, also known as C89, was released by the American ANSI. The same standard was also adopted a year later by the international ISO, so C90 refers to the same language. In 1999 ISO issued a new standard that's known as C99.



C is not a single language, there have been a few standards over the years since its inception in the 1970s. The notable standards and versions are:

LRS should use C99 or C89 as the newer versions are considered bloat and don't have such great support in compilers, making them less portable and therefore less free.

The standards of C99 and older are considered pretty future-proof and using them will help your program be future-proof as well. This is to a high degree due to C having been established and tested better than any other language; it is one of the oldest languages and a majority of the most essential software is written in C, a C compiler is one of the very first things a new hardware platform needs to implement, so C compilers will always be around, at least for historical reasons. C has also been very well designed in a relatively minimal fashion, before the advent of modern feature-creep and bullshit such as OOP which cripples almost all "modern" languages.


Standard Library

Besides the pure C language the C standard specifies a set of libraries that have to come with a standard-compliant C implementation -- the so called standard library. This includes e.g. the stdio library for performing standard input/output (reading/writing to/from screen/files) or the math library for mathematical functions. It is usually relatively okay to use these libraries as they are required by the standard to exist so the dependency they create is not as dangerous, however many C implementations aren't completely compliant with the standard and may come without the standard library. So for the sake of portability it is best if you can avoid using the standard library.

The standard library (libc) is a subject of lively debate because while its interface and behavior are given by the C standard, its implementation is a matter of each compiler; since the standard library is so commonly used, we should take great care in ensuring it's extremely well written. As you probably guessed, the popular implementations (glibc et al) are bloat. Better alternatives thankfully exist, such as:

Bad Things About C

Nothing is perfect, not even C; it was one of the first relatively higher level languages and even though it has shown itself to have been designed extremely well, some things didn't age great, or were simply bad from the start. We still prefer this language as usually the best choice, but it's good to be aware of its downsides or smaller issues, if only for the sake of one day designing a better language. Keep in mind all of these are just suggestions, they may of course be subject to counter arguments and further discussion. So, let's go:


This is a quick overview, for a more in depth tutorial see C tutorial.

A simple program in C that writes "welcome to C" looks like this:

#include <stdio.h> // standard I/O library

int main(void)
{ // this is the main program
  puts("welcome to C");

  return 0; // end with success
}

You can simply paste this code into a file which you name e.g. program.c, then you can compile the program from command line like this:

gcc -o program program.c

Then if you run the program from command line (./program on Unix like systems) you should see the message.


It's pretty important you learn C, so here's a little cheat sheet for you.

data types (just some):

branching aka if-then-else:

if (CONDITION)
  // do something here
else // optional
  // do something else here

for loop (repeat given number of times):

for (int i = 0; i < MAX; ++i)
  // do something here, you can use i

while loop (repeat while CONDITION holds):

while (CONDITION)
  // do something here

do while loop (same as while but CONDITION at the end):

do
{
  // do something here
} while (CONDITION);

function definition:

RETURN_TYPE myFunction (TYPE1 param1, TYPE2 param2, ...)
{ // return type can be void
  // do something here
}

See Also


Code of Conduct

Code of conduct (COC), also code of coercion, is a shitty invention of SJW fascists that dictates how development of specific software should be conducted, generally pushing toxic woke concepts such as forced inclusivity or use of politically correct language. COC is typically placed in the software repository as a CODE_OF_CONDUCT file. In practice COCs are used to kick people out of development because of their political opinions expressed anywhere, inside or outside the project, and to push political opinions through software projects.

LRS must never include any COC, with possible exceptions of anti-COC (such as NO COC) or parody style COCs, not because we dislike genuine inclusivity, but because we believe COCs are bullshit and mostly harmful as they support bullying, censorship and exclusion of people.

Anyway it's best to avoid any kind of COC file in the repository, it just takes up space and doesn't serve anything. We may simply ignore this shitty concept completely. You may ask why we don't ignore e.g. copyright in the same way and just not use any licenses? The situation with copyright is different: it exists by default, without a license file the code is proprietary and our neighbors don't have the legal safety to execute basic freedoms, they may be bullied by the state -- for this we are forced to include a license file to get rid of copyright. With COC there simply aren't any such implicit issues to be solved (because COCs are simply inventing their own issues), so we just don't try to solve non-issues.



Coding nowadays means low quality attempt at programming, usually practiced by soydevs and barely qualified coding monkeys.

Traditionally it means encoding and decoding of information as in e.g. video coding -- this is the only non-gay meaning of the word.



Collapse of our civilization is a concerning scenario in which basic structures of society relatively rapidly fall apart and cause unusually large, possibly world-wide horrors such as chaos, wars, famine and loss of advanced technology. It is something that will very likely happen very soon due to uncontrolled growth and societal decline under capitalism: we, the LRS, are especially focusing on a very probable technological collapse (caused by badly designed technology as well as its wrong application and extreme overuse causing dangerous dependencies) but of course clues pointing to a collapse are coming from many directions (ecological, economical, political, natural disasters such as a coronal mass ejection etc.). Some have said that a society can deal with one crisis, but if multiple crises hit at once the hit may be fatal; however the dependence of current society on computer technology is so great that its collapse could be enough to deliver a fatal blow alone. Recently (around 2015) there has even appeared a specific term collapsology referring to the study of the potential collapse.

There is a reddit community for discussing the collapse at https://reddit.net/r/collapse. WikiWikiWeb has a related discussion under ExtinctionOfHumanity.

Collapse of civilizations has been a repeated theme throughout history, it is nothing new or exceptional, see e.g. Maya empire collapse, Bronze age collapse, the fall of Rome etc. It usually comes when a civilization reaches high complexity and becomes "spoiled", morally corrupt and socially divided -- just what we are seeing today.

In the technological world a lot of people are concerned with the collapse, notably Collapse OS, an operating system meant to run on simple hardware after the technological supply chain collapses and renders development of modern computers impossible. They believe the collapse will happen before 2030. The chip shortage, financial, climatic and energetic crisis and the beginning of war in Europe as of early 2020s are among the first warnings showing how fragile the system really is.

Ted Kaczynski (a famous primitivist mathematician who committed mass murder to warn about the decline of society due to complex technology) has seen the collapse as a possible option. Internet bloggers/vloggers such as Luke Smith and no phone man advocate (and practice) simple, independent off-grid living, possibly to be prepared for such an event. Even proprietary normies like Jonathan Blow warn of a coming disaster (in his talk Preventing the Collapse of Civilization). Viznut is another programmer warning about the collapse.

The details of the collapse cannot of course be predicted exactly -- it may come in a relatively quick, violent form (e.g. in case of a disaster causing a blackout) or as a more agonizing slow death. The CollapseOS site talks about two stages of the slow collapse: the first one after the collapse of the supply chain, i.e. when the production of modern computers halts, and the second (decades after) when the last modern computer stops working. It most likely won't happen overnight -- that's a very extreme case. A typical collapse may take decades during which all aspects of society see a rapid decline. Of course, a collapse doesn't mean extinction of humans either, just deaths of many and great losses of what has been achieved culturally and technologically.

{ I've read a book called Blackout by Marc Elsberg whose story revolves around a fictional large collapse of power supply in Europe. A book called The World Without Us explores what the world would look like if humans suddenly disappeared. ~drummyfish }

Late 2022 Report

It seems like the collapse may have already begun. After the worldwide Covid pandemic the Russia-Ukraine war has begun with talks of nuclear war already going on. A great economic crisis has begun, possibly as a result of the pandemic and the war, inflation is skyrocketing and breaking all records, especially gas and energy prices are growing to extremes and as a result basically prices of everything go up as well. Russia isolated itself, a new cold war has begun. Many big banks have gone bankrupt. War immigrants from Ukraine are flooding into Europe and European fascists/nationalists seem to be losing their patience about it. People in European first world countries are now actually concerned about how not to freeze during the winter, this talk is all over TV and radio. The climate disaster has also started to show, e.g. in Czech Republic there was the greatest forest fire in its history as well as an extremely hot summer, even tornados that destroyed some villages (tornados in this part of the world are basically unheard of), winters have almost no snow unlike some two decades ago. Everything is shitty, food costs more and is of much lower quality, as is basically everything else, newly bought technology cannot be expected to last longer than a few months. Society is spoiled to an unimaginable level, extreme hostility, competition and aggressive commerce is everywhere, kids are addicted to cellphones and toxic social media, mental health of the population rapidly deteriorates. Art such as movies and music is of extremely low quality, people hate every single new movie or video game that comes out. A neofascist party has won elections in Italy, in Czech Republic all socialist parties were eliminated from the parliament: only capitalists rule now -- all social securities are being cancelled, people are getting poorer and poorer and forced to work more and to much higher ages. Ads are everywhere and amount to psychological torture. The situation now definitely seems extremely bad.

See Also


Collision Detection

Collision detection is an essential problem e.g. of simulating physics of mechanical bodies in physics engines (but also elsewhere), it tries to detect whether (and also how) geometric shapes overlap. Here we'll be talking about the collision detection in physics engines, but the problem appears in other contexts too (e.g. frustum culling in computer graphics). Collision detection potentially leads to so called collision resolution, a different stage that tries to deal with the detected collision (separate the bodies, update their velocities, make them "bounce off"). Physics engines are mostly divided into 2D and 3D ones so we also normally either talk about 2D or 3D collision detection (3D being, of course, a bit more complex).

There are two main types of collision detection:

Collision detection is non-trivial because we need to detect not only the presence of the collision but also its parameters which are typically the exact point of collision, collision depth and collision normal -- these are needed for subsequently resolving the collision (typically the bodies will be shifted along the normal by the collision depth to become separated and impulses will be applied at the collision point to update their velocities). We also need to detect general cases, i.e. collisions of whole volumes (imagine e.g. a tiny cuboid inside an arbitrarily rotated bigger cone). This is very hard and/or expensive for some complex shapes such as general 3D triangle meshes (which is why we approximate them with simpler shapes). We also want the detection algorithm to be at least reasonably fast -- for this reason collision detection mostly happens in two phases:

In many cases it is also important to correctly detect the order of collisions -- it may well happen a body collides not with one but with multiple bodies at the time of collision detection and the computed behavior may vary widely depending on the order in which we consider them. Imagine that body A is colliding with body B and body C at the same time; in real life A may have first collided with B and be deflected so that it would have never hit C, or the other way around, or it might have collided with both. In continuous collision detection we know the order as we also have exact time coordinate of each collision (even though the detection itself is still computed at discrete time steps), i.e. we know which one happened first. With discrete collisions we may use heuristics such as the direction in which the bodies are moving, but this may fail in certain cases (consider e.g. collisions due to rotations).

On shapes: the general rule is that mathematically simpler shapes are better for collision detection. Spheres (or circles in 2D) are the best, they are stupidly simple -- a collision of two spheres is simply decided by their distance (i.e. whether the distance of their center points is less than the sum of the radii of the spheres), which also determines the collision depth, and the collision normal is always aligned with the vector pointing from one sphere center to the other. So if you can, use spheres -- it is even worth using multiple spheres to approximate more complex shapes if possible. Capsules ("extruded spheres"), infinite planes, half-planes, infinite cylinders (distance from a line) and axis-aligned boxes are also pretty simple. Cylinders and cuboids with arbitrary rotation are a bit harder. Triangle meshes (the shape most commonly used for real-time 3D models) are very difficult but may be approximated e.g. by a convex hull which is manageable (a convex hull is an intersection of a number of half-spaces) -- if we really want to precisely collide full 3D meshes, we may split each one into several convex hulls (but we need to write the non-trivial splitting algorithm of course). Also note that you need to write a detection algorithm for any possible pair of shape types you want to support, so for N supported shapes you'll need N * (N + 1) / 2 detection algorithms.

{ In theory we may in some cases also think about using iterative/numerical methods to find collisions, i.e. starting at some point between the bodies and somehow stepping towards their intersection until we're close enough. Another idea I had was to use signed distance functions for representing static environments, I kind of implemented it in tinyphysicsengine. ~drummyfish }

TODO: some actual algorithms



Collision, sometimes also conflict, happens when two or more things want to occupy the same spot. This situation usually needs to be addressed somehow; then we talk about collision resolution. In programming there are different types of collisions, for example:



Combinatorics is an area of math that's basically concerned with counting possibilities. As such it is very related to probability theory (as probability is typically defined in terms of ratios of possible outcomes). It explores things such as permutations and combinations, i.e. questions such as how many ways are there to order N objects or how many ways are there to choose k objects from a set of N objects.

The two basic quantities we define in combinatorics are permutations and combinations.

Permutation (in a simple form) of a set of objects (let's say A, B and C) is one possible ordering of such set (i.e. ABC, ACB, BAC etc.). I.e. here by permutation of a number n, which we'll write as P(n), we mean the number of possible orderings of a set of size n. So for example P(1) = 1 because there is only one way to order a set containing one item. Similarly P(3) = 6 because there are six ways to order a set of three objects (ABC, ACB, BAC, BCA, CAB, CBA). P(n) is computed very simply, it is factorial of n, i.e. P(n) = n!.

Combination (without repetition) of a set of objects says in how many ways we can select given number of objects from that set (e.g. if there are 4 shirts in a drawer and we want to choose 2, how many possibilities are there?). I.e. given a set of certain size a combination tells us the number of possible subsets of certain size. I.e. there are two parameters of a combination, one is the size of the set, n, and the other is the number of items (the size of the subset) we want to select from that set, k. This is written as nCk, C(n,k) or

 / n \
|     |
 \ k /

A combination is computed as C(n,k) = n! / (k! * (n - k)!). E.g. having a drawer with 4 shirts (A, B, C and D) and wanting to select 2 gives us C(4,2) = 4! / (2! * (4 - 2)!) = 6 possibilities (AB, AC, AD, BC, BD, CD).

Furthermore we can define combinations with repetitions in which we allow ourselves to select the same item from the set more than once (note that the selection order still doesn't matter). I.e. while combinations without repetition give us the number of possible subsets, a combinations WITH repetitions gives us the number of possible multisubsets of a given set. Combinations with repetition is computed as Cr(n,k) = C(n + k - 1,k). E.g. having a drawer with 4 shirts and wanting to select 2 WITH the possibility to choose one shirt multiple times gives us Cr(4,2) = C(5,2) = 5! / (2! * (5 - 2)!) = 10 possibilities (AA, AB, AC, AD, BB, BC, BD, CC, CD, DD).

Furthermore if we take combinations and say that order matters, we get generalized permutations that also take two parameters, n and k, and there are two kinds: without and with repetitions. I.e. permutations without repetitions tell us in how many ways we can choose k items from n items when ORDER MATTERS, and is computed as P(n,k) = n!/(n - k)! (e.g. P(4,2) = 4!/(4 - 2)! = 12, AB, AC, AD, BA, BC, BD, CA, CB, CD, DA, DB, DC). Permutations with repetitions tell us the same thing but we are allowed to select the same thing multiple times, it is computed as Pr(n,k) = n^k (e.g. Pr(4,2) = 4^2 = 16, AA, AB, AC, AD, BA, BB, BC, BD, CA, CB, CC, CD, DA, DB, DC, DD).

To sum up:

quantity order matters? repetition allowed? formula
permutation (simple) yes - P(n) = n!
permutation without rep. yes no P(n,k) = n!/(n - k)!
permutation with rep. yes yes Pr(n,k) = n^k
combination without rep. no no C(n,k) = n! / (k! * (n - k)!)
combination with rep. no yes Cr(n,k) = C(n + k - 1,k)

Here is an example of applying all the measures to a three item set ABC (note that selecting nothing from a set counts as 1 possibility, NOT 0):

quantity possibilities (for set ABC) count
P(3,0) 3!/(3 - 0)! = 1
P(3,1) A B C 3!/(3 - 1)! = 3
P(3,2) AB AC BA BC CA CB 3!/(3 - 2)! = 6
P(3,3) ABC ACB BAC BCA CAB CBA 3!/(3 - 3)! = 6
Pr(3,0) 3^0 = 1
Pr(3,1) A B C 3^1 = 3
Pr(3,2) AA AB AC BA BB BC CA CB CC 3^2 = 9
Pr(3,3) AAA AAB AAC ABA ABB ABC ACA ACB ACC ... 3^3 = 27
C(3,0) 3!/(0! * (3 - 0)!) = 1
C(3,1) A B C 3!/(1! * (3 - 1)!) = 3
C(3,2) AB AC BC 3!/(2! * (3 - 2)!) = 3
C(3,3) ABC 3!/(3! * (3 - 3)!) = 1
Cr(3,0) C(3 + 0 - 1,0) = 1
Cr(3,1) A B C C(3 + 1 - 1,1) = 3
Cr(3,2) AA AB AC BB BC CC C(3 + 2 - 1,2) = 6
Cr(3,3) AAA AAB AAC ABB ABC ACC BBB BBC BCC CCC C(3 + 3 - 1,3) = 10



Comment is part of computer code that doesn't affect how the code is interpreted by the computer and is intended to hold information for humans that read the code (even though comments can sometimes contain additional information for computers such as metadata and autodocumentation information). There are comments in basically all programming languages, they usually start with //, #, /* and similar symbols, sometimes parts of code that don't fit the language syntax are ignored and as such can be used for comments.

Even though you should write nice, self documenting code, you should comment your source code as well. General tips on commenting:



Competition is a situation of conflict in which several entities try to overpower or otherwise win over each other. It is the opposite of collaboration. Competition is connected to pursuing self interest.

Competition is the absolute root cause of all evil in society. Society must never be based on competition. Unfortunately our society has decided to do the exact opposite with capitalism, the glorification of competition -- this will very likely lead to the destruction of our society, possibly even to the destruction of all life.

Competition is to society what a drug is to an individual: competition makes a situation become better quickly and start achieving technological "progress" but for the price of things going downwards from then on, competition quickly degenerates and kills other values in society such as altruism and morality; society that decides to make unnaturally fast "progress" and base itself on competition is equivalent to someone deciding to take steroids to grow muscles quickly -- corporations that arise in technologically advanced society take over the world just like muscle cancer that grows from taking steroids. A little bit of competition can be helpful in small doses just as painkillers can on occasion help lower suffering of an individual, but one has to be extremely careful to not take too many of them... even smoking a joint from time to time can have a positive effect, however with capitalism our society has become someone who has started to take heroin and only live for that drug alone, take as much of it as he can. Invention of bullshit jobs just to keep competition running, extreme growing hostility of people, productivity cults, overworking, wage slavery, extreme waste that's destroying our environment, all of these are signs our society is dying from overdose, living from day to day, trying to get a few bucks for the next dose of its drug.

Is all competition bad? Competition is not bad as a concept, it may for example be used in genetic programming to evolve good computer programs. People also have a NEED for at least a bit of competition as this need was necessary to survive in the past -- this need has to be satisfied, so we create artificial, mostly harmless competition e.g. with games and sports. This kind of competition is not so bad as long as we are aware of the dangers of overapplying it. What IS bad is making competition the basis of a society, in a good society people must never compete for basic needs such as food, shelter or health care. Furthermore after sufficient technological progress, competition is no longer just a bad basis for society, it becomes a fatal one because society gains means for complete annihilation of all life such as nuclear weapons or factories poisoning our environment that in the heat of competition will sooner or later destroy the society. I.e. in a technologically advanced society it is necessary to give up competition so as to prevent own destruction.

Why is competition so prevalent if it is so bad? Because it is natural and it has been with us since life itself arose. It is extremely hard to let go of such a basic instinct, but it has to be done not only because competition has become obsolete and now only artificially sustains suffering without bringing any benefits (we, humans, have basically already won the evolution), but because, as has been said, sustaining competition is now fatal.

How to achieve letting go of competition in society? The only way is a voluntary choice achieved through our intellect, i.e. through education. Competition is something we naturally want to do, but we can rationally decide not to do it once we see and understand it is bad -- such behavior is already occurring, for example if we know someone is infected with a sexually transmitted disease, we rationally overcome the strong natural instinct to have sex with him.


Computer Science

Computer science, abbreviated as "compsci", is (surprise-surprise) a science studying computers. The term is pretty wide: a lot of it covers very formal and theoretical areas that neighbor and overlap with mathematics, such as formal languages, cryptography and machine learning, but also more practical/applied and "softer" disciplines such as software engineering, programming, hardware, computer networks or even user interface design. This science deals with such things as algorithms, data structures, artificial intelligence and information theory. The field has become quite popular and rapidly growing with the coming of the 21st century computer/Internet revolution, and it has also become quite spoiled and abused by its sudden lucrativeness.


Notable fields of computer science include:

Computer science also figures in interdisciplinary endeavors such as bioinformatics and robotics.

In the industry there have arisen fields of art and study that probably shouldn't be included in computer science itself, but are very close to it. These may include e.g. web design (well, let's include it for the sake of completeness), game design, system administration etc.



The word computer can be defined in many ways and can also take many different meanings; a somewhat common definition may be this: computer is a machine that automatically performs mathematical computations. We can also see it as a machine for processing information or, very generally, as any tool that helps computation, in which case one's fingers or even a mathematical formula itself can be considered a computer. Here we are of course mostly concerned with electronic digital computers.

We can divide computers based on many attributes, e.g.:

Computers are theoretically studied by computer science. The kind of computer we normally talk about consists of two main parts:

The power of computers is limited. Alan Turing mathematically proved that there exist problems that can never be completely solved by any algorithm, i.e. there are problems a computer (including our brain) will never be able to solve (even if a solution exists). This is related to the fact that the power of mathematics itself is limited in a similar way (see Gödel's theorems). Turing also invented the theoretical model of a computer called the Turing machine. Besides the mentioned theoretical limitation, many solvable problems may take too long to compute, at least with computers as we currently know them (see computational complexity and P vs NP).

Typical Computer

Computers we normally talk about in daily conversations are electronic digital mostly personal computers such as desktops and laptops, possibly also cell phones, tablets etc.

Such a computer consists of some kind of case (chassis), internal hardware plus peripheral devices that serve for input and output -- these are for example a keyboard and mouse (input devices), a monitor (output device) or harddisk (input/output device). The internals of the computer normally include:



Copyleft (also share-alike) is a concept of sharing something on the condition that others will share it under the same terms; this is practically always used by a subset of free (as in freedom) software and culture to legally ensure this software/art and its modifications will always remain free. It in effect hacks copyright to de-facto remove copyright by its own power.

Copyleft has been likened to a virus because of its mechanism: once it is applied to certain software, it "infects" it and will force its conditions on any descendants of that software, i.e. it will spread itself (in this case the word virus does not bear a negative connotation, at least to some -- they see it as a good virus).

For free/open-source software the alternative to copyleft is so called permissive licensing which (same as with copyleft) grants all the necessary freedom rights, but does NOT require modified versions to grant these rights as well. This allows free software to be forked and developed into proprietary software, which is what copyleft proponents criticize.

In the FOSS world there is a huge battle between the copyleft camp and permissive camp (LRS advocates permissive licenses with a preference for 100% public domain).

Issues With Copyleft

In the great debate of copyleft vs permissive free licenses we, as technological anarchists, stand on the permissive side. Here are some reasons for why we reject copyleft:



Copyright (better called copyrestriction or copywrong) is one of many types of so called "intellectual property" (IP), a legal concept that allows "ownership", i.e. restriction, censorship and artificial monopoly on certain kinds of information, for example prohibition of sharing or viewing useful information or improving art works. Copyright specifically allows the copyright holder (not necessarily the author) a monopoly (practically absolute power) over art creations such as images, songs or texts, which also include source code of computer programs. Copyright is a capitalist mechanism for creating artificial scarcity, enabling censorship and elimination of the public domain (a pool of freely shared works that anyone can use and benefit from). Copyright is not to be confused with trademarks, patents and other kinds of "intellectual property", which are similarly harmful but legally different. Copyright is symbolized by C in a circle or in brackets: (C), which is often accompanied by the phrase "all rights reserved".

When someone creates something that can even remotely be considered artistic expression (even such things as e.g. a mere collection of already existing things), he automatically gains copyright on it, without having to register it, pay any tax, announce it or let it be known anywhere in any way. He then practically has a full control over the work and can successfully sue anyone who basically just touches the work in any way (even unknowingly and unintentionally). Therefore any work (such as computer code) without a free license attached is implicitly fully "owned" by its creator (so called "all rights reserved") and can't be used by anyone without permission. It is said that copyright can't apply to ideas (ideas are covered by patents), only to expressions of ideas, however that's bullshit, the line isn't clear and is arbitrarily drawn by judges; for example regarding stories in books it's been established that the story itself can be copyrighted, not just its expression (e.g. you can't rewrite the Harry Potter story in different words and start selling it).

As if copyright wasn't bad enough of a cancer, there usually exist extra oppressive copyright-like restrictions called related rights or neighboring rights such as "moral rights", "personal rights" etc. Such "rights" differ a lot by country and can be used to restrict and censor even copyright-free works. This is the stuff that makes you want to commit suicide. Waivers such as CC0 try to waive copyright as well as neighboring rights (to what extent neighboring rights can be waived is debatable though).

The current extreme form of copyright (as well as other types of IP such as software patents) has been highly criticized by many people, even those whom it's supposed to "protect" (small game creators, musicians etc.). Strong copyright laws basically benefit mainly corporations and "trolls" to the detriment of everyone else. It smothers creativity and efficiency by prohibiting people to reuse, remix and improve already existing works -- something that's crucial for art, science, education and generally just making any kind of progress. Most people are probably for some form of copyright but still oppose the current extreme form which is pretty crazy: copyright applies to everything without any registration or notice and lasts usually 70 years (!!!) AFTER the author has died (!!!) and is already rotting in the ground. This is 100 years in some countries. In some countries it is not even possible to waive copyright to own creations -- just think about what kind of twisted society we are living in when it PROHIBITS people from making a selfless donation of their own creations to others. Some people, including us, are against the very idea of copyright (those may either use waivers such as CC0 or unlicense or protest by not using any licenses and simply ignoring copyright, which however will actually discourage other people from reusing their works). Though copyright was originally intended to ensure artists can make a living with their works, it has now become the tool of states and corporations for universal censorship, control, bullying, surveillance, creating scarcity and bullshit jobs; states can use copyright to for example take down old politically inconvenient books shared on the Internet even if such takedowns absolutely do not serve protection of anyone's living but purely political interests.

Prominent critics of copyright include Lawrence Lessig (who established free culture and Creative Commons as a response), Nina Paley and Richard Stallman. There are many movements and groups opposing copyright or its current form, most notably e.g. the free culture movement, free software movement, Creative Commons etc.

The book Free Culture by Lessig talks, besides others, about how copyright has started and how it's been shaped by corporations into their tool for monopolizing art. The concept of copyright appeared after the invention of the printing press. The so called Statute of Anne of 1710 allowed the authors of books to control their copying for 14 years and only after registration. The term could be prolonged by another 14 years if the author survived. The laws started to get more and more strict as control of information became more valued and eventually the term grew to life of author plus 70 years, without any need for registration or deposit of a copy of the work. Furthermore with new technologies, the scope of copyright has also extended: if copyright originally only limited copying of books, in the Internet age it started to cover basically any use, as any manipulation of digital data in the computer age requires making local copies. Additionally the copyright laws were passing despite being unconstitutional as the US constitution says that copyright term has to be finite -- the corporations have found a way around this and simply regularly increased the copyright's term, trying to make it de-facto infinite (technically not infinite but ever increasing). Their reason, of course, was firstly to forever keep ownership of their own art but also, maybe more importantly, to kill the public domain, i.e. prevent old works from entering the public domain where they would become completely free, unrestricted works for all people, competing with proprietary art (who would pay for movies if there were thousands of movies available for free?). Nowadays, with corporations such as YouTube and Facebook de-facto controlling most of information sharing among common people, the situation worsens further: they can simply make their own laws that don't need to be passed by the government but are simply implemented on the platform they control. 
This way they are already killing e.g. the right to fair use, they can simply remove any content on the basis of "copyright violation", even if such content would normally NOT violate copyright because it would fall under fair use. This would normally have to be decided by court, but a corporation here itself takes the role of the court. So in terms of copyright, corporations have now a greater say than governments, and of course they'll use this power against the people (e.g. to implement censorship and surveillance).

Copyright rules differ greatly by country, most notably the US measures copyright length from the publication of the work rather than from when the author died. It is possible for a work to be copyrighted in one country and not copyrighted in another. It is sometimes also very difficult to say whether a work is copyrighted because the rules have been greatly changing (e.g. a notice used to be required for some time), sometimes even retroactively copyrighting public domain works, and there also exists no official database of copyrighted works (you can't safely look up whether your creation is too similar to someone else's). All in all, copyright is a huge mess, which is why we choose free licenses and even public domain waivers.

Copyleft (also share-alike) is a concept standing against copyright, a kind of anti-copyright, invented by Richard Stallman in the context of free software. It's a license that grants people the rights to the author's work on the condition that they share its further modification under the same terms, which basically hacks copyright to effectively spread free works like a "virus".

Copyright does not (or at least should not) apply to facts (including mathematical formulas) (even though the formulation of them may be copyrighted), ideas (though these may be covered by patents) and single words or short phrases (these may however still be trademarked) and similarly trivial works. As such copyright can't e.g. be applied to game mechanics of a computer game (it's an idea). It is also basically proven that copyright doesn't cover computer languages (Oracle vs Google). Also even though many try to claim so, copyright does NOT arise for the effort needed to create the work -- so called "sweat of the brow" -- some say that when it took a great effort to create something, the author should get a copyright on it, however this is NOT and must NOT be the case (otherwise it would be possible to copyright mere ideas, simple mathematical formulas, rules of games etc.). Depending on time and location there also exist various peculiar exceptions such as the freedom of panorama for photographs or uncopyrightable utilitarian design (e.g. no one can own the shape of a generic car). But it's never good to rely on these peculiarities as they are specific to time/location, they are often highly subjective, fuzzy and debatable and may even be retroactively changed by law. This constitutes a huge legal bloat and oftentimes legal unsafety. Do not stay in the gray area, try to stay safely far away from the fuzzy copyright line.

A work which is not covered by copyright (and any other IP) -- which is nowadays pretty rare due to the extent and duration of copyright -- is in the public domain.

Free software (and free art etc.) is not automatically public domain, it is mostly still copyrighted, i.e. "owned" by someone, but the owner has given some key rights to everyone with a free software license and by doing so minimized or even eliminated the negative effects of full copyright. The owner may still keep the rights e.g. to being properly credited in all copies of the software, which he may enforce in court. Similarly software that is in the public domain is not automatically free software -- this holds only if source code for this software is available (so that the rights to studying and modifying can be exercised).

See Also



Corporation is basically a huge company that doesn't have a single owner but is rather managed by many shareholders. Corporations are one of the most powerful, dangerous and unethical entities that ever came into existence -- their power is growing, sometimes even beyond the power of states and their sole goal is to make as much profit as possible without any sense of morality. Existence of corporations is enabled by capitalism.

The most basic fact to know about corporations is that 100% of everything a corporation ever does is done 100% solely for maximizing its own benefit for any cost, with no other reason, with 0 morality and without any consideration of consequences. If a corporation could make 1 cent by raping 1000000000 children and get away with it, it would do it immediately without any hesitation and any regret. This is very important to keep in mind. Now try to not get depressed at realization that corporations are those to whom we gave power and who are in almost absolute control of the world.

Corporation is not a human, it has no emotion and absolutely 0 sense of morality. The most basic error committed by retards is to reply to this argument with "but corporations are run by humans". This is an extremely dangerous argument because somehow 99.999999999999999999% people believe this could be true and accept it as a comforting argument so that they can continue their daily lives and do absolutely nothing about the disastrous state of society. The argument is of course completely false for a number of reasons: firstly corporations exclusively hire psychopaths for manager roles -- any corporation that doesn't do this will be eliminated by natural selection of the market environment because it will be weaker in a fight against other corporations, and its place will be taken by the next aspiring corporation waiting in line. Secondly corporations are highly sophisticated machines that have strong mechanisms preventing any ethical behavior -- for example division of labor in the "just doing my job"/"everyone does it" style allows for many people collaborating on something extremely harmful and unethical without any single one feeling responsibility for the whole, or sometimes without people even knowing what they are really collaborating on. This is taken to perfection by corporations not even having a single responsible owner -- there is a group of shareholders, none of whom has a sole responsibility, and there is the CEO who is just a tool and puppet with tied hands who is just supposed to implement the collective bidding of shareholders. Of course, most just don't care, and most don't even have a choice. Similar principles allowed for example the Holocaust to happen. 
Anyone who has ever worked anywhere knows that managers always pressure workers just to make money, not to behave more ethically -- of course, such a manager would be fired on spot -- and indeed, workers that try to behave ethically are replaced by those who make more money, just as companies that try to behave ethically in the market are replaced by those that rather make money, i.e. corporations. This is nothing surprising, the definition of capitalism implies existence of a system with Darwinian evolution that selects entities that are best at making money for any cost, and that is exactly what we are getting. To expect any other outcome in capitalism would be just trying to deny mathematics itself.

A corporation is made to exploit people just as a gun is made to kill people. When a corporation commits a crime, it is not punished like a human would be, the corporation is left to exist and continue doing what it has been doing -- a supposed "punishment" for a corporation that has been caught red handed committing a crime is usually just replacing whoever is ruled to be "responsible", for example the CEO, which is of course ridiculous, the guy is just replaced with someone else who will do exactly the same. This is like trying to fix the lethal nature of a weapon by putting all the blame on a screw in the weapon, then replacing the screw with another one and expecting the weapon to no longer serve killing people.

There is probably nothing we can do to stop corporations from taking over the world and eventually eliminating humans, we have probably passed the capitalist singularity.



C Pitfalls

C is a powerful language that offers almost absolute control and maximum performance which necessarily comes with responsibility and danger of shooting oneself in the foot. Without knowledge of the pitfalls you may well find yourself fallen into one of them.

Unless specified otherwise, this article supposes the C99 standard of the C language.

Generally: be sure to check your programs with tools such as valgrind, splint or cppcheck, and turn on compiler auto checks (-Wall, -Wextra, -pedantic, ...), it's quick, simple and reveals many bugs!

Undefined/Unspecified Behavior

Undefined (completely unpredictable), unspecified (safe but potentially differing) and implementation-defined (consistent within implementation but potentially differing between them) behavior poses a kind of unpredictability and sometimes non-intuitive, tricky behavior of certain operations that may differ between compilers, platforms or runs because they are not exactly described by the language specification; this is mostly done on purpose so as to allow some implementation freedom which allows implementing the language in a way that is most efficient on given platform. One has to be very careful about not letting such behavior break the program on platforms different from the one the program is developed on. Note that tools such as cppcheck can help find undefined behavior in code. Description of some such behavior follows.

Data type sizes including int and char may not be the same on each platform. Even though we almost take it for granted that char is 8 bits wide, in theory it can be different (even though sizeof(char) is always 1). Int (and unsigned int) type width should reflect the architecture's native integer type, so nowadays it's mostly 32 or 64 bits. To deal with these differences we can use the standard library limits.h and stdint.h headers.

No specific endianness or even encoding of numbers is specified. Nowadays little endian and two's complement is what you'll encounter on most platforms, but e.g. PowerPC uses big endian ordering.

Order of evaluation of operands and function arguments is not specified. I.e. in an expression or function call it is not defined which operands or arguments will be evaluated first, the order may be completely random and the order may differ even when evaluating the same expression at another time. This is demonstrated by the following code:

#include <stdio.h>

int x = 0;

int a(void)
{
  x += 1;
  return x;
}

int main(void)
{
  printf("%d %d\n",x,a()); // may print "0 1" or "1 1"
  return 0;
}

Overflow behavior of signed type operations is not specified. Sometimes we suppose that e.g. addition of two signed integers that are past the data type's limit will produce two's complement overflow (wrap around), but in fact this operation's behavior is undefined, C99 doesn't say what representation should be used for numbers. For portability, predictability and preventing bugs it is safer to use unsigned types (but safety may come at the cost of performance, i.e. you prevent compiler from performing some optimizations based on undefined behavior).

Bit shifts by the type's width or more are undefined. Also bit shifts by negative values are undefined. So e.g. x << 32 is undefined if x is a 32 bit int (note that types smaller than int are first promoted to int before the shift, so what matters is the promoted type's width -- but don't rely on subtleties like this).

Char data type signedness is not defined. The signedness can be explicitly "forced" by specifying signed char or unsigned char.

Floating point results are not precisely specified, no representation (such as IEEE 754) is specified and there may appear small differences in float operations under different machines or e.g. compiler optimization settings -- this may lead to nondeterminism.

Memory Unsafety

Besides being extra careful about writing memory safe code, one needs to also know that some functions of the standard library are memory unsafe. This regards mainly string functions such as strcpy or strlen which do not check string boundaries (i.e. they rely on being passed a properly zero terminated string and so can potentially touch memory far beyond it); safer alternatives are available, they have an n added in the name (strncpy, strnlen, ...) and allow specifying a length limit.

Different Behavior Between C And C++ (And Different C Standards)

C is not a subset of C++, i.e. not every C program is a C++ program (for simple example imagine a C program in which we use the word class as an identifier: it is a valid C program but not a C++ program). Furthermore a C program that is at the same time also a C++ program may behave differently when compiled as C vs C++, i.e. there may be a semantic difference. Of course, all of this may also apply between different standards of C, not just between C and C++.

For portability sake it is good to try to write C code that will also compile as C++ (and behave the same). For this we should know some basic differences in behavior between C and C++.

One difference lies for example in pointers to string literals. While in C it is possible to have non-const pointers such as

char *s = "abc";

C++ requires any such pointer to be const, i.e.:

const char *s = "abc";

TODO: more examples

Compiler Optimizations

C compilers perform automatic optimizations and other transformations of the code, especially when you tell them to optimize aggressively (-O3), which is a standard practice to make programs run faster. However this makes compilers perform a lot of magic and may lead to unexpected and unintuitive undesired behavior such as bugs or even the "unoptimization of code". { I've seen code I've written grow in size when I set the -Os flag (optimize for smaller size). ~drummyfish }

Aggressive optimization may firstly lead to tiny bugs in your code manifesting in very weird ways, it may happen that a line of code somewhere which may somehow trigger some tricky undefined behavior may cause your program to crash in some completely different place. Compilers exploit undefined behavior to make all kinds of big brain reasoning and when they see code that MAY lead to undefined behavior a lot of chain reasoning may lead to very weird compiled results. Remember that undefined behavior, such as overflow when adding signed integers, doesn't mean the result is undefined, it means that ANYTHING CAN HAPPEN, the program may just start printing nonsensical stuff on its own or your computer may explode. So it may happen that the line with undefined behavior will behave as you expect but somewhere later on the program will just shit itself. For these reasons if you encounter a very weird bug, try to disable optimizations and see if it goes away -- if it does, you may be dealing with this kind of stuff. Also check your program with tools like cppcheck.

Automatic optimizations may also be dangerous when writing multithreaded or very low level code (e.g. a driver) in which the compiler may have wrong assumptions about the code such as that nothing outside your program can change your program's memory. Consider e.g. the following code:

while (x)
  puts("X is set!");

Normally the compiler could optimize this to:

if (x)
  while (1)
    puts("X is set!");

As in typical code this works the same and is faster. However if the variable x is part of shared memory and can be changed by an outside process during the execution of the loop, this optimization can no longer be done as it results in different behavior. This can be prevented with the volatile keyword which tells the compiler to not perform such optimizations.

Of course this applies to other languages as well, but C is especially known for having a lot of undefined behavior, so be careful.


Watch out for operator precedence! Bracket expressions if unsure, or just to increase readability for others.

Also watch out for this one: != is not =! :) I.e. if (x != 4) and if (x =! 4) are two different things: the first means "not equal" and is usually what you want; the latter is two operations, = (assignment) and ! (logical not), so it actually assigns x = !4, i.e. x = 0. The tricky thing is that it also compiles and may even work as expected in some cases but fail in others, leading to a very nasty bug.



C++ is an object-obsessed joke language based on C to which it adds only capitalist features and bloat, most notably object obsession. Most good programmers such as Richard Stallman and Linus Torvalds agree that C++ is hilariously messy and also tragic in that it actually succeeded in becoming mainstream. The language creator Bjarne Stroustrup himself infamously admitted the language sucks but laughs at its critics because it became successful anyway -- indeed, in a retarded society only shit can succeed. As someone once said, "C++ is not an increment, it is excrement".



Crackers are either "bad hackers" that break into computer systems or the good people who with the power of hacking remove artificial barriers to obtaining and sharing information; for example they help remove DRM from games or leak data from secret databases. This is normally illegal which makes the effort even more admirable.

Cracker is also food.

Cracker is also the equivalent of nigger-word for the white people.



See cracker.


Creative Commons



Creative commons licenses/waivers form a spectrum spanning from complete freedom (CC0, public domain, no conditions on use) to complete fascism (prohibiting basically everything except for non-commercial sharing). This means that NOT all Creative Commons licenses are free cultural licenses -- this is acknowledged by Creative Commons and part of the design. Keep in mind that as a good human you mustn't ever use licenses with NC (non-commercial use only) or ND (no derivatives allowed) clauses, these make your work non-free and therefore unusable.

Here is a comparison of the Creative Commons licenses/waivers, from most free to least free:

name abbreviation free culture use share remix copyleft attribution non-commercial comment
Creative Commons Zero CC0 yes :) yes :) yes :) yes :) no :) no need :) no :) public domain, copyright waiver, no restrictions, most freedom, best, sadly doesn't waive patents and trademarks
Creative Commons Attribution CC BY yes :) yes :) yes :) yes :) no :) forced :( no :) no restrictions except for requiring attribution to authors
Creative Commons Sharealike CC SA yes :) yes :) yes :) yes :) yes :/ no need :) no :) retired, secret license, no longer recommended by CC, pure copyleft/sharealike without forced attribution
Creative Commons Attribution Sharealike CC BY-SA yes :) yes :) yes :) yes :) yes :/ forced :( no :) requires attribution to authors and copyleft (sharing under same terms)
Creative Commons Attribution NonCommercial CC BY-NC NO! :((( yes but yes but yes but yes :/ forced :( yes :( proprietary fascist license prohibiting commercial use, DO NOT USE
Creative Commons Attribution NoDerivs CC BY-ND NO! :((( yes but yes but NO! :( yes :/ forced :( no but proprietary fascist license prohibiting modifications, DO NOT USE
Creative Commons Attribution NonCommercial NoDerivs CC BY-NC-ND NO! :((( yes but yes but NO! :( yes :/ forced :( yes :( proprietary fascist license prohibiting commercial use and even modifications, DO NOT USE
none (all rights reserved) NO! :((( NO! :( NO! :( NO! :( FUCK YOU FUCK YOU FUCK YOU proprietary fascist option, prohibits everything, DO NOT USE


Crime Against Economy

Crime against economy refers to any bullshit "crime" invented by capitalism that is deemed to "hurt economy", the new God of society. In the current dystopian society where money has replaced God, worshiping economy is the new religion; to satisfy economy human and animal lives are sacrificed just as such sacrifices used to be made to please the gods of ancient times.

Examples of crimes against economy include:


Crow Funding

Crow funding is when a crow pays for your program.

You probably misspelled crowd funding.



Cryptocurrency, or just crypto, is a digital, virtual (non-physical) currency used on the Internet which uses cryptographic methods (electronic signatures etc.) to implement a decentralized system in which there is no authority to control the currency (unlike e.g. with traditional currencies that are controlled by the state or systems of digital payments controlled by the banks that run these systems). Cryptocurrencies use so called blockchain as an underlying technology and are practically always implemented as free and open-source software. Example of cryptocurrencies are Bitcoin, Monero or Dogecoin.

The word crypto in cryptocurrency doesn't imply that the currency provides or protects privacy -- it rather refers to the cryptographic algorithms used to make the currency work -- even though thanks to the decentralization, anonymity and openness cryptocurrencies actually are mostly privacy friendly (up to the point of being considered the currency of criminals).

LRS sees cryptocurrencies not only as unnecessary bullshit, but downright as an unethical technology because money itself is unethical, plus the currencies based on proof of work waste not only human effort but also enormous amounts of electricity and computing power that could be spent in better ways. Keep in mind that cryptocurrencies are part of cryptofascism; they're a way of digitizing harmful concepts existing in society. Crypto is just an immensely expensive game in which people try to fuck each other over money that has been stolen from the people.



How It Works

Cryptocurrency is built on top of so called blockchain -- a kind of structure that holds records of transactions (exchanges of money or "coins", as they're called in the crypto world). Blockchain is a data structure serving as the database of the system. As its name suggests, it consists of blocks. Each block contains various data, most important of which are performed transactions (e.g. "A sent 1 coin to B"), and each block points to the previous one (forming a linked list). As new transactions are made, new blocks are created and appended at the end of the blockchain.

But where is the blockchain stored? It is not on a single computer; many computers participating in the system have their own copy of the blockchain and they share it together (similarly to how people share files via torrents).

But how do we know which one is the "official" blockchain? Can't just people start forging information in the blockchain and then distribute the fake blockchains? Isn't there a chaos if there are so many copies? Well yes, it would be messy -- that's why we need a consensus of the participants on which blockchain is the real one. And there are a few algorithms to ensure the consensus. Basically people can't just spam add new blocks, a new block to be added needs to be validated via some process (which depends on the specific algorithm) in order to be accepted by others. Two main algorithms for this are:

  - proof of work: a participant validates a block by performing a great amount of computational work (solving a puzzle that's hard to compute but easy to check), which makes it too expensive to forge the chain
  - proof of stake: a participant validates a block by temporarily locking up (staking) his own coins, which he loses if he's caught cheating

Can't people just forge transactions, e.g. by sending out a record that says someone else sent them money? This can be easily prevented by digitally signing the transactions, i.e. if there is e.g. a transaction "A sends 1 coin to B", it has to be signed by A to confirm that A really intended to send the money. But can't someone just copy-paste someone else's already signed transactions and try to perform them multiple times? This can also be prevented by e.g. numbering the transactions, i.e. recording something like "A sent 1 coin to B as his 1st transaction".

But where are one's coins actually stored? They're not explicitly stored anywhere; the amount of coins any participant has is deduced from the list of transactions, i.e. if it is known someone joined the network with 0 coins and there is a record of someone else sending him 1 coin, it is clear he now has 1 coin. For end users there are so called wallets which to them appear to store their coins, but a wallet is in fact just the set of cryptographic keys needed to perform transactions.

But why is blockchain even needed? Can't we just have a list of signed transactions without any blocks? Well, blockchain is designed to ensure coherency and the above mentioned consensus.


C Sharp

C# is supposed to be a "programming language" but it's just some capitalist shit by Micro$oft that's supposed to give it some kind of monopoly. Really it's not even worth writing about. It's like Java but worse. I'm tired, DO NOT USE THIS PSEUDOSHIT. Learn C.


C Tutorial

{ Still a work in progress, but 99% complete. ~drummyfish }

This is a relatively quick C tutorial.

You should probably know at least the completely basic ideas of programming before reading this (what's a programming language, source code, command line etc.). If you already know another language, this should be pretty easy to understand.

About C And Programming

C is a compiled, statically typed, procedural language -- one of the oldest, fastest and most efficient languages in wide use, and very close to the hardware.

If you come from a language like Python or JavaScript, you may be shocked that C doesn't come with its own package manager, debugger or build system, it doesn't have modules, generics, garbage collection, OOP, hashmaps, dynamic lists, type inference and similar "modern" features. When you truly get into C, you'll find it's a good thing.

Programming in C works like this:

  1. You write a C source code into a file.
  2. You compile the file with a C compiler such as gcc (which is just a program that turns source code into a runnable program). This gives you the executable program.
  3. You run the program, test it, see how it works and potentially get back to modifying the source code (step 1).

So, for writing the source code you'll need a text editor; any plain text editor will do but you should use some that can highlight C syntax -- this helps very much when programming and is practically a necessity. The ideal editor is vim but it's a bit difficult to learn so you can use something as simple as Gedit or Geany. We do NOT recommend using huge programming IDEs such as "VS Code" and whatnot. You definitely can NOT use an advanced document editor that works with rich text such as LibreOffice or that shit from Micro$oft, this won't work because it's not plain text.

Next you'll need a C compiler, the program that will turn your source code into a runnable program. We'll use the most commonly used one called gcc (you can try different ones such as clang or tcc if you want). If you're on a Unix-like system such as GNU/Linux (which you probably should), gcc is probably already installed. Open up a terminal and write gcc to see if it's installed -- if not, then install it (e.g. with sudo apt install build-essential if you're on a Debian-based system).

If you're extremely lazy, there are online web C compilers that work in a web browser (find them with a search engine). You can use these for quick experiments but note there are some limitations (e.g. not being able to work with files), and you should definitely know how to compile programs yourself.

Last thing: there are multiple standards of C. Here we will be covering C99, but this likely doesn't have to bother you at this point.

First Program

Let's quickly try to compile a tiny program to test everything and see how everything works in practice.

Open your text editor and paste this code:

/* simple C program! */

#include <stdio.h> // include IO library

int main(void)
{
  puts("It works.");
  return 0;
}

Save this file and name it program.c. Then open a terminal emulator (or an equivalent command line interface), locate yourself into the directory where you saved the file (e.g. cd somedirectory) and compile the program with the following command:

gcc -o program program.c

The program should compile and the executable program should appear in the directory. You can run it with

./program

And you should see

It works.

written in the command line.

Now let's see what the source code means:

  - /* ... */ and // denote comments: text ignored by the compiler that serves to explain the code to humans.
  - #include <stdio.h> tells the compiler we want to use the standard input/output library (which contains the puts function).
  - int main(void) starts the definition of the main function, the place where execution of the program begins.
  - puts("It works."); is a command that prints out the given text (puts = put string).
  - return 0; ends the main function, returning the value 0 (meaning "no error") to the operating system.

Also notice how the source code is formatted, e.g. the indentation of code within the { and } brackets. White characters (spaces, new lines, tabs) are ignored by the compiler so we could theoretically write our program on a single line, but that would be unreadable. We use indentation, spaces and empty lines to format the code to be well readable.

To sum up let's see a general structure of a typical C program. You can just copy paste this for any new program and then just start writing commands in the main function.

#include <stdio.h> // include the I/O library
// more libraries can be included here

int main(void)
{
  // write commands here

  return 0; // always the last command
}

Variables, Arithmetic, Data Types

Programming is a lot like mathematics, we compute equations and transform numerical values into other values. You probably know in mathematics we use variables such as x or y to denote numerical values that can change (hence variables). In programming we also use variables -- here a variable is a place in memory which has a name (and in this place there will be stored a value that can change over time).

We can create variables named x, y, myVariable or score and then store specific values (for now let's only consider numbers) into them. We can read from and write to these variables at any time. These variables physically reside in RAM, but we don't really care where exactly (at which address) they are located -- this is e.g. similar to houses, in common talk we normally say things like John's house or the pet store instead of house with address 3225.

Variable names can't start with a digit (and they can't be any of the keywords reserved by C). By convention they also shouldn't be all uppercase or start with uppercase (these are normally used for other things). Normally we name variables like this: myVariable or my_variable (pick one style, don't mix them).

In C, as in other languages, each variable has a certain data type; that is, each variable carries the information of what kind of data is stored in it. This can be e.g. a whole number, a fraction, a text character, a text string etc. Data types are a more complex topic that will be discussed later; for now we'll start with the most basic one, the integer type, in C called int. An int variable can store whole numbers in the range of at least -32768 to 32767 (but usually much more).

Let's see an example.

#include <stdio.h>

int main(void)
{
  int myVariable;

  myVariable = 5;
  printf("%d\n",myVariable); // print the current value
  myVariable = 8;
  printf("%d\n",myVariable); // print the new value
  return 0;
}

After compiling and running the program you should see:

5
8

Last thing to learn is arithmetic operators. They're just normal math operators such as +, - and /. You can use these along with brackets (( and )) to create expressions. Expressions can contain variables and can themselves be used in many places where variables can be used (but not everywhere, e.g. not on the left side of a variable assignment, that would make no sense). E.g.:

#include <stdio.h>

int main(void)
{
  int heightCm = 175;
  int weightKg = 75;
  int bmi = (weightKg * 10000) / (heightCm * heightCm);

  printf("Your BMI: %d\n",bmi); // print the result
  return 0;
}

calculates and prints your BMI (body mass index).

Let's quickly mention how you can read and write values in C so that you can begin to experiment with your own small programs. You don't have to understand the following syntax as of yet, it will be explained later, for now simply copy-paste the commands:

printf("%d\n",x); // prints the value of variable x (and a newline)
scanf("%d",&x); // reads a number from the keyboard into variable x

Branches And Loops (If, While, For)

When creating algorithms, it's not enough to just write linear sequences of commands. Two things (called control structures) are very important to have in addition:

  - branches (conditions): executing commands only if some condition holds
  - loops (iteration): repeating commands multiple times

Let's start with branches. In C the command for a branch is if. E.g.:

if (x > 10)
  puts("X is greater than 10.");

The syntax is given, we start with if, then brackets (( and )) follow inside which there is a condition, then a command or a block of multiple commands (inside { and }) follow. If the condition in brackets holds, the command (or block of commands) gets executed, otherwise it is skipped.

Optionally there may be an else branch which gets executed only if the condition does NOT hold. It is denoted with the else keyword which is again followed by a command or a block of multiple commands. Branching may also be nested, i.e. branches may be inside other branches. For example:

if (x > 10)
  puts("X is greater than 10.");
else
{
  puts("X is not greater than 10.");

  if (x < 5)
    puts("And it is also smaller than 5.");
}

So if x is equal e.g. 3, the output will be:

X is not greater than 10.
And it is also smaller than 5.

About conditions in C: a condition is just an expression (variables/functions along with arithmetic operators). The expression is evaluated (computed) and the number that is obtained is interpreted as true or false like this: in C 0 means false, anything else means true. Even comparison operators like < and > are technically arithmetic, they compare numbers and yield either 1 or 0. Some operators commonly used in conditions are:

  - == (equals), != (does not equal)
  - < (is smaller than), > (is greater than), <= (smaller or equal), >= (greater or equal)
  - && (logical AND), || (logical OR), ! (logical NOT)

E.g. an if statement starting as if (x == 5 || x == 10) will be true if x is either 5 or 10.

Next we have loops. There are multiple kinds of loops even though in theory it is enough to only have one kind of loop (there are multiple types out of convenience). The loops in C are:

  - while
  - do while (similar to while, we'll skip it here)
  - for

The while loop is used when we want to repeat something without knowing in advance how many times we'll repeat it (e.g. searching a word in text). It starts with the while keyword, is followed by brackets with a condition inside (same as with branches) and finally a command or a block of commands to be looped. For instance:

while (x > y) // as long as x is greater than y
{
  printf("%d %d\n",x,y); // prints x and y

  x = x - 1; // decrease x by 1
  y = y * 2; // double y
}

puts("The loop ended.");

If x and y were equal to 100 and 20 (respectively) before the loop is encountered, the output would be:

100 20
99 40
98 80
The loop ended.

The for loop is executed a fixed number of times, i.e. we use it when we know in advance how many times we want to repeat our commands. The syntax is a bit more complicated: it starts with the keyword for, then brackets (( and )) follow and then the command or a block of commands to be looped. The inside of the brackets consists of an initialization, a condition and an action separated by semicolons (;) -- don't worry, it is enough to just remember the structure. A for loop may look like this:

puts("Counting until 5...");

for (int i = 0; i < 5; ++i)
  printf("%d\n",i); // prints i

int i = 0 creates a new temporary variable named i (name normally used by convention) which is used as a counter, i.e. this variable starts at 0 and increases with each iteration (cycle), and it can be used inside the loop body (the repeated commands). i < 5 says the loop continues to repeat as long as i is smaller than 5 and ++i says that i is to be increased by 1 after each iteration (++i is basically just a shorthand for i = i + 1). The above code outputs:

Counting until 5...
0
1
2
3
4

IMPORTANT NOTE: in programming we count from 0, not from 1 (this is convenient e.g. in regards to pointers). So if we count to 5, we get 0, 1, 2, 3, 4. This is why i starts with value 0 and the end condition is i < 5 (not i <= 5).

Generally if we want to repeat the for loop N times, the format is for (int i = 0; i < N; ++i).

Any loop can be exited at any time with a special command called break. This is often used with so called infinite loop, a while loop that has 1 as its condition; recall that 1 means true, i.e. the loop condition always holds and the loop never ends. break allows us to place conditions in the middle of the loop and in multiple places. E.g.:

while (1) // infinite loop
{
  x = x - 1;

  if (x == 0)
    break; // this exits the loop!

  y = y / x;
}

The code above places a condition in the middle of an infinite loop to prevent division by zero in y = y / x.

Again, loops can be nested (we may have loops inside loops) and also loops can contain branches and vice versa.

Simple Game: Guess A Number

With what we've learned so far we can already make a simple game: guess a number. The computer thinks of a random number in range 0 to 9 and the user has to guess it. The source code is the following:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
  srand(clock()); // random seed

  while (1) // infinite loop
  {
    int randomNumber = rand() % 10;

    puts("I think a number. What is it?");

    int guess;

    scanf("%d",&guess); // read the guess

    if (guess == randomNumber)
      puts("You guessed it!");
    else
      printf("Wrong. The number was %d.\n",randomNumber);

    puts("Play on? [y/n]");

    char answer;

    scanf(" %c",&answer); // read the answer (the space skips whitespace)

    if (answer == 'n')
      break;
  }

  return 0; // return success, always here
}

Functions (Subprograms)

Functions are extremely important, no program besides the most primitive ones can be made without them (well, in theory any program can be created without functions, but in practice such programs would be extremely complicated and unreadable).

Function is a subprogram (in other languages functions are also called procedures or subroutines), i.e. it is code that solves some smaller subproblem that you can repeatedly invoke, for instance you may have a function for computing a square root, for encrypting data or for playing a sound from speakers. We have already met functions such as puts, printf or rand.

Functions are similar to but NOT the same as mathematical functions. Mathematical function (simply put) takes a number as input and outputs another number computed from the input number, and this output number depends only on the input number and nothing else. C functions can do this too but they can also do additional things such as modify variables in other parts of the program or make the computer do something (such as play a sound or display something on the screen) -- these are called side effects; things done besides computing an output number from an input number. For distinction mathematical functions are called pure functions and functions with side effects are called non-pure.

Why are functions so important? Firstly they help us divide a big problem into small subproblems and make the code better organized and readable, but mainly they help us respect the DRY (Don't Repeat Yourself) principle -- this is extremely important in programming. Imagine you need to solve a quadratic equation in several parts of your program; you do NOT want to solve it in each place separately, you want to make a function that solves a quadratic equation and then only invoke (call) that function anywhere you need to solve your quadratic equation. This firstly saves space (source code will be shorter and compiled program will be smaller), but it also makes your program manageable and eliminates bugs -- imagine you find a better (e.g. faster) way to solving quadratic equations; without functions you'd have to go through the whole code and change the algorithm in each place separately which is impractical and increases the chance of making errors. With functions you only change the code in one place (in the function) and in any place where your code invokes (calls) this function the new better and updated version of the function will be used.

Besides writing programs that can be directly executed programmers write libraries -- collections of functions that can be used in other projects. We have already seen libraries such as stdio, standard input/output library, a standard (official, bundled with every C compiler) library for input/output (reading and printing values); stdio contains functions such as puts which is used to print out text strings. Examples of other libraries are the standard math library containing functions for e.g. computing sine, or SDL, a 3rd party multimedia library for such things as drawing to the screen, playing sounds and handling keyboard and mouse input.

Let's see a simple example of a function that writes out a temperature in degrees of Celsius as well as in Kelvin:

#include <stdio.h>

void writeTemperature(int celsius)
{
  int kelvin = celsius + 273;

  printf("%d C (%d K)\n",celsius,kelvin);
}

int main(void)
{
  writeTemperature(-50);
  writeTemperature(0);
  writeTemperature(100);

  return 0;
}

The output is

-50 C (223 K)
0 C (273 K)
100 C (373 K)

Now imagine we decide we also want our temperatures in Fahrenheit. We can simply edit the code in writeTemperature function and the program will automatically be writing temperatures in the new way.

Let's see how to create and invoke functions. Creating a function in code is done between inclusion of libraries and the main function, and we formally call this defining a function. The function definition format is the following:

RETURN_TYPE FUNCTION_NAME(PARAMETERS)
{
  COMMANDS
}

Let's see another function:

#include <stdio.h>

int power(int x, int n)
{
  int result = 1;

  for (int i = 0; i < n; ++i) // repeat n times
    result = result * x;

  return result;
}

int main(void)
{
  for (int i = 0; i < 5; ++i)
  {
    int powerOfTwo = power(2,i);

    printf("%d\n",powerOfTwo);
  }

  return 0;
}

The output is:

1
2
4
8
16

The function power takes two parameters: x and n, and returns x raised to the nth power. Note that unlike the first function we saw here, the return type is int because this function does return a value. Notice the command return -- it is a special command that causes the function to terminate and return a specific value. In functions that return a value (their return type is not void) there has to be a return command. In functions that return nothing there may or may not be one, and if there is, it has no value after it (return;).

Let's focus on how we invoke the function -- in programming we say we call the function. The function call in our code is power(2,i). If a function returns a value (return type is not void), its function call can be used in any expression, i.e. almost anywhere where we can use a variable or a numerical value -- just imagine the function computes a return value and this value is substituted to the place where we call the function. For example we can imagine the expression power(3,1) + power(3,0) as simply 3 + 1.

If a function returns nothing (return type is void), it can't be used in expressions, it is used "by itself"; e.g. playBeep();. (Functions that do return a value can also be used like this -- their return value is in this case simply ignored.)

We call a function by writing its name (power), then adding brackets (( and )) and inside them we put arguments -- specific values that will substitute the corresponding parameters inside the function (here x will take the value 2 and n will take the current value of i). If the function takes no parameters (the function parameter list is void), we simply put nothing inside the brackets (e.g. playBeep();).

Here comes the nice thing: we can nest function calls. For example we can write x = power(3,power(2,1)); which will result in assigning the variable x the value of 9. Functions can also call other functions (even themselves, see recursion), but only those that have been defined before them in the source code (this can be fixed with so called forward declarations).

Notice that the main function we always have in our programs is also a function definition. The definition of this function is required for runnable programs, its name has to be main and it has to return int (an error code where 0 means no error). It can also take parameters but more on that later.

This is the most basic knowledge to have about C functions. Let's see one more example with some peculiarities that aren't so important now, but will be later.

#include <stdio.h>

void writeFactors(int x) // writes factors of x
{
  printf("factors of %d:\n",x);

  while (x > 1) // keep dividing x by its factors
    for (int i = 2; i <= x; ++i) // search for a factor
      if (x % i == 0) // i divides x without remainder?
      {
        printf("  %d\n",i); // i is a factor, write it
        x = x / i; // divide x by i
        break; // exit the for loop
      }
}

int readNumber(void)
{
  int number;

  puts("Please enter a number to factor (0 to quit).");
  scanf("%d",&number); // read a number from the user
  return number;
}

int main(void)
{
  while (1) // infinite loop
  {
    int number = readNumber(); // <- function call

    if (number == 0) // 0 means quit
      break;

    writeFactors(number); // <- function call
  }

  return 0;
}

We have defined two functions: writeFactors and readNumber. writeFactors returns no value but it has side effects (printing text to the command line). readNumber takes no parameters but returns a value; it prompts the user to enter a value and returns the read value.

Notice that inside writeFactors we modify its parameter x inside the function body -- this is okay, it won't affect the argument that was passed to this function (the number variable inside the main function won't change after this function call). x can be seen as a local variable of the function, i.e. a variable that's created inside this function and can only be used inside it -- when writeFactors is called inside main, a new local variable x is created inside writeFactors and the value of number is copied to it.

Another local variable is number -- it is a local variable both in main and in readNumber. Even though the names are the same, these are two different variables, each one is local to its respective function (modifying number inside readNumber won't affect number inside main and vice versa).

And a last thing: keep in mind that not every command you write in C program is a function call. E.g. control structures (if, while, ...) and special commands (return, break, ...) are not function calls.

More Details (Globals, Switch, Float, Forward Decls, ...)

We've skipped a lot of details and small tricks for simplicity. Let's go over some of them. Many of the following things are so called syntactic sugar: convenient syntax shorthands for common operations.

Multiple variables can be defined and assigned like this:

int x = 1, y = 2, z;

The meaning should be clear, but let's mention that z doesn't generally have a defined value here -- it will have a value but you don't know what it is (this may differ between different computers and platforms). See undefined behavior.

The following is a shorthand for using operators:

x += 1;      // same as: x = x + 1;
x -= 10;     // same as: x = x - 10;
x *= x + 1;  // same as: x = x * (x + 1);
x++;         // same as: x = x + 1;
x--;         // same as: x = x - 1;
// etc.

The last two constructs are called incrementing and decrementing. This just means adding/subtracting 1.

In C there is a pretty unique operator called the ternary operator (ternary for having three operands). It can be used in expressions just as any other operators such as + or -. Its format is:

CONDITION ? VALUE1 : VALUE2

It evaluates the CONDITION and if it's true (non-0), this whole expression will have the value of VALUE1, otherwise its value will be VALUE2. It allows for not using so many ifs. For example instead of

if (x >= 10)
  x -= 10;
else
  x = 10;

we can write

x = x >= 10 ? x - 10 : 10;

Global variables: we can create variables even outside function bodies. Recall that variables inside functions are called local; variables outside functions are called global -- they can basically be accessed from anywhere and can sometimes be useful. For example:

#include <stdio.h>
#include <stdlib.h> // for rand()

int money = 0; // total money, global variable

void printMoney(void)
{
  printf("I currently have $%d.\n",money);
}

void playLottery(void)
{
  puts("I'm playing lottery.");

  money -= 10; // price of lottery ticket

  if (rand() % 5 == 0) // 1 in 5 chance
  {
    money += 100;
    puts("I've won!");
  }
  else
    puts("I've lost!");
}

void work(void)
{
  puts("I'm going to work :(");

  money += 200; // salary
}

int main(void)
{
  work();
  printMoney();
  playLottery();
  printMoney();

  return 0;
}

In C programs you may encounter a switch statement -- it is a control structure similar to an if branch which can have more than two branches. It looks like this:

  switch (x)
  {
    case 0: puts("X is zero. Don't divide by it."); break;
    case 69: puts("X is 69, haha."); break;
    case 42: puts("X is 42, the answer to everything."); break;
    default: puts("I don't know anything about X."); break;
  }

Switch can only compare exact values, it can't e.g. check if a value is greater than something. Each branch starts with the keyword case, then the match value follows, then there is a colon (:) and the branch commands follow. IMPORTANT: there has to be the break; statement at the end of each case branch (we won't go into details). A special branch is the one starting with the word default that is executed if no case label was matched.

Let's also mention some additional data types we can use in programs:

  - char: a single text character (e.g. 'A'), internally stored as a small number (the character's ASCII value)
  - float and double: decimal numbers (e.g. 3.14), double having greater precision than float
  - unsigned int: like int but can't store negative numbers, in exchange for a higher positive range

Here is a short example with the new data types:

#include <stdio.h>

int main(void)
{
  char c;
  float f;

  puts("Enter character.");
  c = getchar(); // read character

  puts("Enter float.");
  scanf("%f",&f); // read float

  printf("Your character is: %c.\n",c);
  printf("Your float is %lf\n",f);

  float fSquared = f * f;
  int wholePart = f; // this can be done

  printf("Its square is %lf.\n",fSquared);
  printf("Its whole part is %d.\n",wholePart);

  return 0;
}

Notice mainly how we can assign a float value into a variable of int type (int wholePart = f;). This can be done even the other way around and with many other types. C can do automatic type conversions (casting), but of course some information may be lost in this process (e.g. the fractional part).

In the section about functions we said a function can only call a function that has been defined before it in the source code -- this is because the compiler reads the file from start to finish and if you call a function that hasn't been defined yet, it simply doesn't know what to call. But sometimes we need to call a function that will only be defined later, e.g. in cases where two functions call each other (function A calls function B in its code but function B also calls function A). For this there exist so called forward declarations -- a forward declaration says that a function of a certain name (and with certain parameters etc.) will be defined later in the code. A forward declaration looks the same as a function definition, but it doesn't have a body (the part between { and }); instead it is terminated with a semicolon (;). Here is an example:

#include <stdio.h>

void printDecorated2(int x, int fancy); // forward declaration

void printDecorated1(int x, int fancy)
{
  printf("* %d *",x);

  if (fancy)
    printDecorated2(x,0); // would be error without f. decl.
}

void printDecorated2(int x, int fancy)
{
  printf("~ %d ~",x);

  if (fancy)
    printDecorated1(x,0);
}

int main(void)
{
  printDecorated1(10,1);
  putchar('\n'); // newline
  printDecorated2(20,1);
  putchar('\n'); // newline

  return 0;
}

which prints

* 10 *~ 10 ~
~ 20 ~* 20 *

The functions printDecorated1 and printDecorated2 call each other, so this is the case when we have to use a forward declaration of printDecorated2. Also note the condition if (fancy) which is the same thing as if (fancy != 0) (imagine fancy being 1 and 0 and about what the condition evaluates to in each case).

Header Files, Libraries, Compilation/Building

So far we've only been writing programs into a single source code file (such as program.c). More complicated programs consist of multiple files and libraries -- we'll take a look at this now.

In C we normally deal with two types of source code files:

  1. C files (.c extension): contain the actual implementations (the code of functions etc.).
  2. Header files (.h extension): contain function declarations, constants etc., i.e. things that are shared between files.

When we have multiple source code files, we typically have pairs of .c and .h files. E.g. if there is a library called mathfunctions, it will consist of files mathfunctions.c and mathfunctions.h. The .h file will contain the function headers (in the same manner as with forward declarations) and constants such as pi. The .c file will then contain the implementations of all the functions declared in the .h file. But why do we do this?

Firstly .h files serve as nice documentation of the library for programmers: you can simply open the .h file and see all the functions the library offers without having to skim over thousands of lines of code. Secondly, this separation is needed because of how multiple source code files are compiled into a single executable program.

Suppose now we're compiling a single file named program.c as we've been doing until now. The compilation consists of several steps:

  1. The compiler reads the file program.c and makes sense of it.
  2. It then creates an intermediate file called program.o. This is called an object file and is a binary compiled file which however cannot yet be run because it is not linked -- in this code all memory addresses are relative and it doesn't yet contain the code from external libraries (e.g. the code of printf).
  3. The compiler then runs a linker which takes the file program.o and the object files of libraries (such as the stdio library) and it puts them all together into the final executable file called program. This is called linking; the code from the libraries is copied to complete the code of our program and the memory addresses are settled to some specific values.

So realize that when the compiler is compiling our program (program.c), which calls functions such as printf from a separate library, it doesn't have the code of these functions available -- this code is not in our file. Recall that if we want to call a function, it must have been declared before, and so in order for us to be able to call printf, the compiler must know about it. This is why we include the stdio library at the top of our source code with #include <stdio.h> -- this basically copy-pastes the content of the header file of the stdio library to the top of our source code file. In this header there are forward declarations of functions such as printf, so the compiler now knows about them (it knows their name, what they return and what parameters they take) and we can call them.

Let's see a small example. We'll have the following files (all in the same directory).

library.h (the header file):

// Returns the square of n.
int square(int n);

library.c (the implementation file):

int square(int x)
{
  // function implementation
  return x * x;
}

program.c (main program):

#include <stdio.h>
#include "library.h"

int main(void)
{
  int n = square(5);

  printf("%d\n",n);

  return 0;
}

Now we will manually compile the library and the final program. First let's compile the library; in the command line run:

gcc -c -o library.o library.c

The -c flag tells the compiler to only compile the file, i.e. only generate the object (.o) file without trying to link it. After this command a file library.o should appear. Next we compile the main program in the same way:

gcc -c -o program.o program.c

This will generate the file program.o. Note that during this process the compiler is working only with the program.c file; it doesn't know the code of the function square, but it knows this function exists, what it returns and what parameters it has thanks to us including the library header library.h with #include "library.h" (quotes are used instead of < and > to tell the compiler to look for the file in the current directory).

Now we have the file program.o in which the compiled main function resides and file library.o in which the compiled function square resides. We need to link them together. This is done like this:

gcc -o program program.o library.o

For linking we don't need to use any special flag; the compiler knows that if we give it several .o files, it is supposed to link them. A file named program should appear, which we can already run, and it should print

25
This is the principle of compiling multiple C files (and it also allows for combining C with other languages). The process is normally automated, but you should know how it works. The systems that automate it are called build systems; examples are Make and CMake. When using e.g. Make, the whole codebase can be built with a single command, make, run in the command line.

Some programmers simplify this whole process further so that they don't even need a build system, e.g. with so called header-only libraries, but this is outside the scope of this tutorial.

As a bonus, let's see a few useful compiler flags:

  1. -O1, -O2, -O3: turn on optimization for speed (the higher the number, the more aggressive).
  2. -Os: optimize for small size of the resulting executable.
  3. -Wall -Wextra: turn on most warnings, very useful for catching bugs early.
  4. -g: include debug symbols (for use with a debugger such as gdb).

Advanced Data Types And Variables (Structs, Arrays, Strings)

Until now we've encountered simple data types such as int, char or float. These represent single atomic values (e.g. numbers or text characters). Such data types are called primitive types.

Above these there exist compound data types (also complex or structured) which are composed of multiple primitive types. They are necessary for any advanced program.

The first compound type is a structure, or struct. It is a collection of several values of potentially different data types (primitive or compound). The following code shows how a struct can be created and used.

#include <stdio.h>

typedef struct
{
  char initial; // initial of name
  int weightKg;
  int heightCm;
} Human;

int bmi(Human human)
{
  return (human.weightKg * 10000) / (human.heightCm * human.heightCm);
}

int main(void)
{
  Human carl;

  carl.initial = 'C';
  carl.weightKg = 100;
  carl.heightCm = 180;

  if (bmi(carl) > 25)
    puts("Carl is fat.");

  return 0;
}

The part of the code starting with typedef struct creates a new data type that we call Human (one convention for data type names is to start them with an uppercase character). This data type is a structure consisting of three members, one of type char and two of type int. Inside the main function we create a variable carl which is of Human data type. Then we set the specific values -- we see that each member of the struct can be accessed using the dot character (.), e.g. carl.weightKg; this can be used just as any other variable. Then we see the type Human being used in the parameter list of the function bmi, just as any other type would be used.

What is this good for? Why don't we just create global variables such as carl_initial, carl_weightKg and carl_heightCm? In this simple case it might work just as well, but in a more complex code this would be burdening -- imagine we wanted to create 10 variables of type Human (john, becky, arnold, ...). We would have to painstakingly create 30 variables (3 for each person), the function bmi would have to take two parameters (height and weight) instead of one (human) and if we wanted to e.g. add more information about every human (such as hairLength), we would have to manually create another 10 variables and add one parameter to the function bmi, while with a struct we only add one member to the struct definition and create more variables of type Human.

Structs can be nested. So you may see things such as myHouse.groundFloor.livingRoom.ceilingHeight in C code.

Another extremely important compound type is array -- a sequence of items, all of which are of the same data type. Each array is specified with its length (number of items) and the data type of the items. We can have, for instance, an array of 10 ints, or an array of 235 Humans. The important thing is that we can index the array, i.e. we access the individual items of the array by their position, and this position can be specified with a variable. This allows for looping over array items and performing certain operations on each item. Demonstration code follows:

#include <stdio.h>
#include <math.h> // for sqrt()

int main(void)
{
  float vector[5];

  vector[0] = 1;
  vector[1] = 2.5;
  vector[2] = 0;
  vector[3] = 1.1;
  vector[4] = -405.054;

  puts("The vector is:");

  for (int i = 0; i < 5; ++i)
    printf("%lf ",vector[i]);

  putchar('\n'); // newline

  /* compute vector length with
     pythagorean theorem: */

  float sum = 0;

  for (int i = 0; i < 5; ++i)
    sum += vector[i] * vector[i];

  printf("Vector length is: %lf\n",sqrt(sum));

  return 0;
}

We've included a new library called math.h so that we can use the square root function (sqrt). (If you have trouble compiling the code, add the -lm flag to the compile command.)

float vector[5]; is a declaration of an array of length 5 whose items are of type float. When the compiler sees this, it creates a continuous area in memory big enough to store 5 numbers of float type; the numbers will reside there one after another.

After doing this, we can index the array with square brackets ([ and ]) like this: ARRAY_NAME[INDEX] where ARRAY_NAME is the name of the array (here vector) and INDEX is an expression that evaluates to an integer, starting at 0 and going up to the array length minus one (remember that programmers count from zero). So the first item of the array is at index 0, the second at index 1 etc. The index can be a numeric constant like 3, but also a variable or a whole expression such as x + 3 * myFunction(). An indexed array item can be used just like any other variable: you can assign to it, use it in expressions etc. This is seen in the example. Trying to access an item beyond the array's bounds (e.g. vector[100]) will likely crash your program.

Especially important are the parts of code starting with for (int i = 0; i < 5; ++i): this is an iteration over the array. It's a very common pattern we use whenever we need to perform some action with every item of an array.

Arrays can also be multidimensional, but we won't bother with that right now.

Why are arrays so important? They allow us to work with large amounts of data, not just a handful of numeric variables. We can create an array of a million structs and easily work with all of them thanks to indexing and loops; this would be practically impossible without arrays. Imagine e.g. a game of chess; it would be very silly to have 64 plain variables, one for each square of the board (squareA1, squareA2, ..., squareH8), and it would be extremely difficult to work with such code. With an array we can represent the board as a single variable, we can iterate over all the squares easily etc.

One more thing to mention about arrays is how they can be passed to functions. A function can have as a parameter an array of fixed or unknown length. There is also one exception with arrays as opposed to other types: if a function has an array as parameter and the function modifies this array, the array passed to the function (the argument) will be modified as well (we say that arrays are passed by reference while other types are passed by value). We know this wasn't the case with other parameters such as int -- for these the function makes a local copy that doesn't affect the argument passed to the function. The following example shows what's been said:

#include <stdio.h>

// prints an int array of length 10
void printArray10(int array[10])
{
  for (int i = 0; i < 10; ++i)
    printf("%d ",array[i]);
}

// prints an int array of arbitrary length
void printArrayN(int array[], int n)
{
  for (int i = 0; i < n; ++i)
    printf("%d ",array[i]);
}

// fills an array with numbers 0, 1, 2, ...
void fillArrayN(int array[], int n)
{
  for (int i = 0; i < n; ++i)
    array[i] = i;
}

int main(void)
{
  int array10[10];
  int array20[20];

  fillArrayN(array10,10);
  fillArrayN(array20,20);

  printArray10(array10);
  putchar('\n');

  printArrayN(array20,20);
  putchar('\n');

  return 0;
}

The function printArray10 has a fixed length array as a parameter (int array[10]) while printArrayN takes as a parameter an array of unknown length (int array[]) plus one additional parameter to specify this length (so that the function knows how many items of the array it should print). The function fillArrayN is important because it shows how a function can modify an array: when we call fillArrayN(array10,10); in the main function, the array array10 will actually be modified after the function finishes (it will be filled with numbers 0, 1, 2, ...). This can't be done with other data types (though there is a trick involving pointers which we will learn about later).

Now let's finally talk about text strings. We've already seen strings (such as "hello"), we know we can print them, but what are they really? From C's point of view strings are nothing but arrays of chars (text characters), i.e. sequences of chars in memory. In C every string has to end with a 0 char -- this is NOT '0' (whose ASCII value is 48) but the direct value 0 (remember that chars are really just numbers). The 0 char cannot be printed out, it is just a helper value to terminate strings. So to store the string "hello" in memory we need an array of length at least 6 -- one place for each character plus one for the terminating 0. These kinds of strings are called zero terminated strings (or C strings).

When we write a string such as "hello" in our source, the C compiler creates an array in memory for us and fills it with the characters 'h', 'e', 'l', 'l', 'o', 0. In memory this may look like the sequence of numbers 104, 101, 108, 108, 111, 0.

Why do we terminate strings with 0? Because functions that work with strings (such as puts or printf) don't know what length the string is. We can call puts("abc"); or puts("abcdefghijk"); -- the string passed to puts has different length in each case, and the function doesn't know this length. But thanks to these strings ending with 0, the function can compute the length, simply by counting characters from the beginning until it finds 0 (or more efficiently it simply prints characters until it finds 0).

The syntax that allows us to create strings with double quotes (") is just a helper (syntactic sugar); we can create strings just as any other array, and we can work with them the same. Let's see an example:

#include <stdio.h>

int main(void)
{
  char alphabet[27]; // 26 places for letters + 1 for terminating 0

  for (int i = 0; i < 26; ++i)
    alphabet[i] = 'A' + i;

  alphabet[26] = 0; // terminate the string

  puts(alphabet);

  return 0;
}

alphabet is an array of chars, i.e. a string. Its length is 27 because we need 26 places for the letters and one extra place for the terminating 0. Here it's important to remind ourselves that we count from 0, so the alphabet can be indexed from 0 to 26, i.e. 26 is the last index we can use; doing alphabet[27] would be an error! Next we fill the array with letters (see how we can treat chars as numbers and do 'A' + i). We iterate while i < 26, i.e. we fill all the places in the array up to the index 25 (including) and leave the last place (with index 26) for the terminating 0, which we subsequently assign. And finally we print the string with puts(alphabet) -- here note that there are no double quotes around alphabet because it's a variable name; doing puts("alphabet") would cause the program to literally print out alphabet. Now the program outputs:

ABCDEFGHIJKLMNOPQRSTUVWXYZ
In C there is a standard library for working with strings called string (#include <string.h>); it contains such functions as strlen for computing string length or strcmp for comparing strings.

One final example -- a creature generator -- will show all the three new data types in action:

#include <stdio.h>
#include <stdlib.h> // for rand()

typedef struct
{
  char name[4]; // 3 letter name + 1 place for 0
  int weightKg;
  int legCount;
} Creature; // some weird creature

Creature creatures[100]; // global array of Creatures

void printCreature(Creature c)
{
  printf("Creature named %s ",c.name); // %s prints a string
  printf("(%d kg, ",c.weightKg);
  printf("%d legs)\n",c.legCount);
}

int main(void)
{
  // generate random creatures:
  for (int i = 0; i < 100; ++i)
  {
    Creature c;

    c.name[0] = 'A' + (rand() % 26);
    c.name[1] = 'a' + (rand() % 26);
    c.name[2] = 'a' + (rand() % 26);
    c.name[3] = 0; // terminate the string

    c.weightKg = 1 + (rand() % 1000);
    c.legCount = 1 + (rand() % 10); // 1 to 10 legs

    creatures[i] = c;
  }

  // print the creatures:
  for (int i = 0; i < 100; ++i)
    printCreature(creatures[i]);

  return 0;
}

When run you will see a list of 100 randomly generated creatures which may start e.g. as:

Creature named Nwl (916 kg, 4 legs)
Creature named Bmq (650 kg, 2 legs)
Creature named Cda (60 kg, 4 legs)
Creature named Owk (173 kg, 7 legs)
Creature named Hid (430 kg, 3 legs)


Macros And Preprocessor

The C language comes with a feature called preprocessor, which is necessary for some advanced things. It allows automatized modification of the source code before it is compiled.

Remember how we said that compiler compiles C programs in several steps such as generating object files and linking? There is one more step we didn't mention: preprocessing. It is the very first step -- the source code you give to the compiler first goes to the preprocessor which modifies it according to special commands in the source code called preprocessor directives. The result of preprocessing is a pure C code without any more preprocessing directives, and this is handed over to the actual compilation.

The preprocessor is like a mini language on top of the C language; it has its own commands and rules, but it's much simpler than C itself -- for example it has no data types or loops.

Each directive begins with #, is followed by the directive name and continues until the end of the line (\ can be used to extend the directive to the next line).

We have already encountered one preprocessor directive: the #include directive we used to include library header files. This directive pastes the text of the file whose name it is given in place of the directive.

Another directive is #define which creates a so called macro -- in its basic form a macro is nothing else than an alias, a nickname for some text. This is typically used to create constants. Consider the following code:

#include <stdio.h>

#define ARRAY_SIZE 10

int array[ARRAY_SIZE];

void fillArray(void)
{
  for (int i = 0; i < ARRAY_SIZE; ++i)
    array[i] = i;
}

void printArray(void)
{
  for (int i = 0; i < ARRAY_SIZE; ++i)
    printf("%d ",array[i]);
}

int main(void)
{
  fillArray();
  printArray();
  putchar('\n');

  return 0;
}

#define ARRAY_SIZE 10 creates a macro that can be seen as a constant named ARRAY_SIZE which stands for 10. From this line on, any occurrence of ARRAY_SIZE that the preprocessor encounters in the code will be replaced with 10. The reason for doing this is obvious -- we respect the DRY (don't repeat yourself) principle: if we didn't use a constant for the array size and used the direct numeric value 10 in different parts of the code, it would be difficult to change them all later, especially in very long code -- there's a danger we'd miss some. With a constant it is enough to change one line in the code (e.g. #define ARRAY_SIZE 10 to #define ARRAY_SIZE 20).

The macro substitution is literally a copy-paste text replacement, there is nothing very complex going on. This means you can create a nickname for almost anything (for example you could do #define when if and then also use when in place of if -- but it's probably not a very good idea). By convention macro names are to be ALL_UPPER_CASE (so that whenever you see an all upper case word in the source code, you know it's a macro).

Macros can optionally take parameters similarly to functions. There are no data types, just parameter names. The usage is demonstrated by the following code:

#include <stdio.h>

#define MEAN3(a,b,c) (((a) + (b) + (c)) / 3) 

int main(void)
{
  int n = MEAN3(10,20,25);

  printf("%d\n",n);

  return 0;
}

MEAN3 computes the mean of 3 values. Again, it's just text replacement, so the line int n = MEAN3(10,20,25); becomes int n = (((10) + (20) + (25)) / 3); before code compilation. Why are there so many brackets in the macro? It's always good to put brackets over a macro and all its parameters because the parameters are again a simple text replacement; consider e.g. a macro #define HALF(x) x / 2 -- if it was invoked as HALF(5 + 1), the substitution would result in the final text 5 + 1 / 2, which gives 5 (instead of the intended value 3).

You may be asking why we would use a macro when we can use a function for computing the mean. Firstly macros don't just have to work with numbers, they can be used to generate parts of the source code in ways that functions can't. Secondly using a macro may sometimes be simpler; it's shorter and will be faster to execute because there is no function call (which has a slight overhead) and because the macro expansion may lead to the compiler precomputing expressions at compile time. But beware: macros are usually worse than functions and should only be used in very justified cases. For example macros don't know about data types and cannot check them, and they also result in a bigger compiled executable (function code is in the executable only once, whereas a macro is expanded in each place where it is used and so the code it generates multiplies).

Another very useful directive is #if for conditional inclusion or exclusion of parts of the source code. It is similar to the C if command. The following example shows its use:

#include <stdio.h>

#define RUDE 0

void printNumber(int x)
{
  puts(
#if RUDE
    "You idiot, the number is:"
#else
    "The number is:"
#endif
  );

  printf("%d\n",x);
}

int main(void)
{
  printNumber(3);
  printNumber(100);

#if RUDE
  puts("Bye bitch.");
#endif

  return 0;
}

When run, we get the output:

The number is:
3
The number is:
100

And if we change #define RUDE 0 to #define RUDE 1, we get:

You idiot, the number is:
3
You idiot, the number is:
100
Bye bitch.

We see the #if directive has to have a corresponding #endif directive that terminates it, and there can be an optional #else directive for an else branch. The condition after #if can use similar operators as those in C itself (+, ==, &&, || etc.). There also exists an #ifdef directive which is used the same and checks if a macro of given name has been defined.

#if directives are very useful for conditional compilation, they allow for creation of various "settings" and parameters that can fine-tune a program -- you may turn specific features on and off with this directive. It is also helpful for portability; compilers may automatically define specific macros depending on the platform (e.g. _WIN64, __APPLE__, ...) based on which you can trigger different code. E.g.:

#ifdef _WIN64
  puts("Your OS sucks.");
#endif
Let us talk about one more thing that doesn't fall under the preprocessor language but is related to constants: enumerations. Enumeration is a data type that can have values that we specify individually, for example:

typedef enum
{
  APPLE,
  PEAR,
  TOMATO
} Fruit;

This creates a new data type Fruit. Variables of this type may have values APPLE, PEAR or TOMATO, so we may for example do Fruit myFruit = APPLE;. These values are in fact integers and the names we give them are just nicknames, so here APPLE is equal to 0, PEAR to 1 and TOMATO to 2.


Pointers

Pointers are an advanced topic that many people fear -- many complain they're hard to learn, others complain about memory unsafety and the potential dangers of using pointers. These people are stupid, pointers are great.

But beware, there may be too much new information in the first read. Don't get scared, give it some time.

Pointers allow us to do certain advanced things such as allocate dynamic memory, return multiple values from functions, inspect content of memory or use functions in similar ways in which we use variables.

A pointer is nothing complicated: it is a data type that can hold a memory address (plus the information of what data type should be stored at that address). An address is simply a number. Why can't we simply use an int for an address? Because the sizes of int and of a pointer may differ; the size of a pointer depends on each platform's address width. It is also good when the compiler knows a certain variable is supposed to point to memory (and to which type) -- this can prevent bugs.

It's important to remember that a pointer is not a pure address but it also knows about the data type it is pointing to, so there are many kinds of pointers: a pointer to int, a pointer to char, a pointer to a specific struct type etc.

A variable of pointer type is created similarly to a normal variable, we just add * after the data type, for example int *x; creates a variable named x that is a pointer to int (some people would write this as int* x;).

But how do we assign a value to the pointer? To do this, we need an address of something, e.g. of some variable. To get an address of a variable we use the & character, i.e. &a is the address of a variable a.

The last basic thing we need to know is how to dereference a pointer. Dereferencing means accessing the value at the address that's stored in the pointer, i.e. working with the pointed to value. This is again done (maybe a bit confusingly) with * character in front of a pointer, e.g. if x is a pointer to int, *x is the int value to which the pointer is pointing. An example can perhaps make it clearer.

#include <stdio.h>

int main(void)
{
  int normalVariable = 10;
  int *pointer;

  pointer = &normalVariable;

  printf("address in pointer: %p\n",pointer);
  printf("value at this address: %d\n",*pointer);

  *pointer = *pointer + 10;

  printf("normalVariable: %d\n",normalVariable);

  return 0;
}

This may print e.g.:

address in pointer: 0x7fff226fe2ec
value at this address: 10
normalVariable: 20

int *pointer; creates a pointer to int with name pointer. Next we make the pointer point to the variable normalVariable, i.e. we get the address of the variable with &normalVariable and assign it normally to pointer. Next we print firstly the address in the pointer (accessed with pointer) and the value at this address, for which we use dereference as *pointer. At the next line we see that we can also use dereference for writing to the pointed address, i.e. doing *pointer = *pointer + 10; here is the same as doing normalVariable = normalVariable + 10;. The last line shows that the value in normalVariable has indeed changed.

IMPORTANT NOTE: You generally cannot read and write from/to random addresses! This will crash your program. To be able to write to a certain address it must be allocated, i.e. reserved for use. Addresses of variables are allocated by the compiler and can be safely operated with.

There's a special value called NULL (a macro defined in the standard library) that is meant to be assigned to pointer that points to "nothing". So when we have a pointer p that's currently not supposed to point to anything, we do p = NULL;. In a safe code we should always check (with if) whether a pointer is not NULL before dereferencing it, and if it is, then NOT dereference it. This isn't required but is considered a "good practice" in safe code, storing NULL in pointers that point nowhere prevents dereferencing random or unallocated addresses which would crash the program.

But what can pointers be good for? Many things, for example we can kind of "store variables in variables", i.e. a pointer is a variable which says which variable we are currently using, and we can switch between variables at any time. E.g.:

#include <stdio.h>

int bankAccountMonica = 1000;
int bankAccountBob = -550;
int bankAccountJose = 700;

int *payingAccount; // pointer to who's currently paying

void payBills(void)
{
  *payingAccount -= 200;
}

void buyFood(void)
{
  *payingAccount -= 50;
}

void buyGas(void)
{
  *payingAccount -= 20;
}

int main(void)
{
  // let Jose pay first
  payingAccount = &bankAccountJose;

  payBills();
  buyFood();
  buyGas();

  // that's enough, now let Monica pay
  payingAccount = &bankAccountMonica;

  buyFood();
  buyGas();

  // now it's Bob's turn
  payingAccount = &bankAccountBob;

  payBills();
  buyFood();

  printf("Monica has $%d left.\n",bankAccountMonica);
  printf("Jose has $%d left.\n",bankAccountJose);
  printf("Bob has $%d left.\n",bankAccountBob);

  return 0;
}

Well, this could be similarly achieved with arrays, but pointers have more uses. For example they allow us to return multiple values by a function. Again, remember that we said that (with the exception of arrays) a function cannot modify a variable passed to it because it always makes its own local copy of it? We can bypass this by, instead of giving the function the value of the variable, giving it the address of the variable. The function can read the value of that variable (with dereference) but it can also CHANGE the value, it simply writes a new value to that address (again, using dereference). This example shows it:

#include <stdio.h>
#include <math.h>

#define PI 3.141592

// returns 2D coordinates of a point on a unit circle
void getUnitCirclePoint(float angle, float *x, float *y)
{
  *x = sin(angle);
  *y = cos(angle);
}

int main(void)
{
  for (int i = 0; i < 8; ++i)
  {
    float pointX, pointY;

    getUnitCirclePoint(i * 0.125 * 2 * PI,&pointX,&pointY);

    printf("%lf %lf\n",pointX,pointY);
  }

  return 0;
}

The function getUnitCirclePoint doesn't return any value in the strict sense, but thanks to pointers it effectively returns two float values via its parameters x and y. These parameters are of the type pointer to float (as there's * in front of them). When we call the function with getUnitCirclePoint(i * 0.125 * 2 * PI,&pointX,&pointY);, we hand over the addresses of the variables pointX and pointY (which belong to the main function and couldn't normally be accessed from getUnitCirclePoint). The function can then compute values and write them to these addresses (with dereference, *x and *y), changing the values of pointX and pointY, effectively returning two values.

Now let's take a look at pointers to structs. Everything basically works the same here, but there's one thing to know about, a syntactic sugar known as an arrow (->). Example:

#include <stdio.h>

typedef struct
  int a;
  int b;
} SomeStruct;

SomeStruct s;
SomeStruct *sPointer;

int main(void)
{
  sPointer = &s;

  (*sPointer).a = 10; // without arrow
  sPointer->b = 20;   // same as (*sPointer).b = 20

  return 0;
}

Here we are trying to write values to a struct through pointers. Without using the arrow we can simply dereference the pointer with *, put brackets around and access the member of the struct normally. This shows the line (*sPointer).a = 10;. Using an arrow achieves the same thing but is perhaps a bit more readable, as seen in the line sPointer->b = 20;. The arrow is simply a special shorthand and doesn't need any brackets.

Now let's talk about arrays -- these are a bit special. The important thing is that an array is itself basically a pointer. What does this mean? If we create an array, let's say int myArray[10];, then myArray is basically a pointer to int in which the address of the first array item is stored. When we index the array, e.g. like myArray[3] = 1;, behind the scenes there is basically a dereference because the index 3 means: 3 places after the address pointed to by myArray. So when we index an array, the compiler takes the address stored in myArray (the address of the array start) and adds 3 to it (well, kind of) by which it gets the address of the item we want to access, and then dereferences this address.

Arrays and pointers are kind of a duality -- we can also use array indexing with pointers. For example if we have a pointer declared as int *x;, we can access the value x points to with a dereference (*x), but ALSO with indexing like this: x[0]. Accessing index 0 simply means: take the address stored in the variable, add 0 to it and dereference it, so it achieves the same thing. We can also use higher indices (e.g. x[10]), BUT ONLY if x actually points to memory with at least 11 allocated places.

This leads to a concept called pointer arithmetic. Pointer arithmetic simply means we can add or subtract numbers to pointer values. If we continue with the same pointer as above (int *x;), we can actually add numbers to it like *(x + 1) = 10;. What does this mean?! It means exactly the same thing as x[1]. Adding a number to a pointer shifts that pointer given number of places forward. We use the word places because each data type takes a different space in memory, for example char takes one byte of memory while int takes usually 4 (but not always), so shifting a pointer by N places means adding N times the size of the pointed to data type to the address stored in the pointer.

This may be a lot of information to digest. Let's provide an example to show all this in practice:

#include <stdio.h>

// our own string print function
void printString(char *s)
{
  int position = 0;

  while (s[position] != 0)
  {
    putchar(s[position]);
    position += 1;
  }
}

// returns the length of string s
int stringLength(char *s)
{
  int length = 0;

  while (*s != 0) // count until terminating 0
  {
    length += 1;
    s += 1; // shift the pointer one character to right
  }

  return length;
}

int main(void)
{
  char testString[] = "catdog";

  printString("The string '");
  printString(testString);
  printString("' has length ");

  int l = stringLength(testString);

  printf("%d.\n",l);

  return 0;
}

The output is:

The string 'catdog' has length 6.

We've created a function for printing strings (printString) similar to puts and a function for computing the length of a string (stringLength). They both take as an argument a pointer to char, i.e. a string. In printString we use indexing ([ and ]) just as if s was an array, and indeed we see it works! In stringLength we similarly iterate over all characters in the string but we use dereference (*s) and pointer arithmetic (s += 1;). It doesn't matter which of the two styles we choose -- here we've shown both, for educational purposes. Finally notice that the string we actually work with is created in main as an array with char testString[] = "catdog"; -- here we don't need to specify the array size between [ and ] because we immediately assign a string literal to it ("catdog") and in such a case the compiler knows how big the array needs to be and automatically fills in the correct size.

Now that we know about pointers, we can finally completely explain the functions from stdio we've been using:


Now we'll take a look at how we can read and write from/to files on the computer disk which enables us to store information permanently or potentially process data such as images or audio. Files aren't so difficult.

We work with files through functions provided in the stdio library (so it has to be included). We distinguish two types of files:

From the programmer's point of view there's actually not a huge difference between the two, they're both just sequences of characters or bytes (which are kind of almost the same). Text files are a little more abstract, they handle potentially different formats of newlines etc. The main thing for us is that we'll use slightly different functions for each type.

There is a special data type for files called FILE (we'll be using a pointer to it). Whatever file we work with, we first need to open it with the function fopen, and when we're done with it, we need to close it with the function fclose.

First we'll write something to a text file:

#include <stdio.h>

int main(void)
{
  FILE *textFile = fopen("test.txt","w"); // "w" for write

  if (textFile != NULL) // if opened successfully
  {
    fprintf(textFile,"Hello file.");
    fclose(textFile); // don't forget to close!
  }
  else
    puts("ERROR: Couldn't open file.");

  return 0;
}
When run, the program should create a new file named test.txt in the same directory we're in and in it you should find the text Hello file.. FILE *textFile creates a new variable textFile which is a pointer to the FILE data type. We are using a pointer simply because the standard library is designed this way, its functions work with pointers (it can be more efficient). fopen("test.txt","w"); attempts to open the file test.txt in text mode for writing -- it returns a pointer that represents the opened file. The mode, i.e. text/binary, read/write etc., is specified by the second argument: "w"; w simply specifies write and the text mode is implicit (it doesn't have to be specified). if (textFile != NULL) checks if the file has been successfully opened; the function fopen returns NULL (the value of "point to nothing" pointers) if there was an error with opening the file (such as that the file doesn't exist). On success we write text to the file with the function fprintf -- it's basically the same as printf but works on files, so its first parameter is always a pointer to the file to which it should write. You can of course also print numbers and anything else printf can print with this function. Finally we mustn't forget to close the file at the end with fclose!

Now let's write another program that reads the file we've just created and writes its content out in the command line:

#include <stdio.h>

int main(void)
{
  FILE *textFile = fopen("test.txt","r"); // "r" for read

  if (textFile != NULL) // if opened successfully
  {
    char c;

    while (fscanf(textFile,"%c",&c) != EOF) // while not end of file
      printf("%c",c);

    fclose(textFile);
  }
  else
    puts("ERROR: Couldn't open file.");

  return 0;
}

Notice that in fopen we now specify "r" (read) as the mode. Again, we check if the file has been opened successfully (if (textFile != NULL)). If so, we use a while loop to read and print all characters from the file until we encounter the end of file. The reading of the file's characters is done with the fscanf function inside the loop's condition -- there's nothing preventing us from doing this. fscanf again works the same as scanf (so it can read other types than only chars), just on files (its first argument is the file to read from). On encountering the end of the file fscanf returns a special value EOF (a macro constant defined in the standard library). Again, we must close the file at the end with fclose.

We will now write to a binary file:

#include <stdio.h>

int main(void)
{
  unsigned char image[] = // image in ppm format
  {
    80, 54, 32, 53, 32, 53, 32, 50, 53, 53, 32,
    255,255,255, 255,255,255, 255,255,255, 255,255,255, 255,255,255,
    255,255,255,   0,  0,  0, 255,255,255,   0,  0,  0, 255,255,255,
    255,255,255, 255,255,255, 255,255,255, 255,255,255, 255,255,255,
      0,  0,  0, 255,255,255, 255,255,255, 255,255,255,   0,  0,  0,
    255,255,255,   0,  0,  0,   0,  0,  0,   0,  0,  0, 255,255,255
  };

  FILE *binFile = fopen("image.ppm","wb");

  if (binFile != NULL) // if opened successfully
  {
    fwrite(image,1,sizeof(image),binFile);
    fclose(binFile);
  }
  else
    puts("ERROR: Couldn't open file.");

  return 0;
}

Okay, don't get scared, this example looks complex because it is trying to do a cool thing: it creates an image file! When run, it should produce a file named image.ppm which is a tiny 5x5 smiley face image in ppm format. You should be able to open the image in any good viewer (I wouldn't bet on Windows programs though). The image data was made manually and is stored in the image array. We don't need to understand the data, we just know we have some data we want to write to a file. Notice how we can manually initialize the array with values using { and } brackets. We open the file for writing and in binary mode, i.e. with the mode "wb", we check the success of the action and then write the whole array into the file with one function call. The function is named fwrite and is used for writing to binary files (as opposed to fprintf for text files). fwrite takes these parameters: pointer to the data to be written to the file, size of one data element (in bytes), number of data elements and a pointer to the file to write to. Our data is the image array and since "arrays are basically pointers", we provide it as the first argument. The next argument is 1 (unsigned char always takes 1 byte), then the length of our array (sizeof is a special operator that substitutes the size of a variable in bytes -- since each item in our array takes 1 byte, sizeof(image) provides the number of items in the array), and the file pointer. At the end we close the file.

And finally we'll finish with reading this binary file back:

#include <stdio.h>

int main(void)
{
  FILE *binFile = fopen("image.ppm","rb");

  if (binFile != NULL) // if opened successfully
  {
    unsigned char byte;

    while (fread(&byte,1,1,binFile))
      printf("%d ",byte);

    fclose(binFile);
  }
  else
    puts("ERROR: Couldn't open file.");

  return 0;
}

The file mode is now "rb" (read binary). For reading from binary files we use the fread function, similarly to how we used fscanf for reading from a text file. fread has these parameters: pointer where to store the read data (the memory must have sufficient space allocated!), size of one data item, number of items to read and the pointer to the file to read from. As the first argument we pass &byte, i.e. the address of the variable byte, next 1 (we want to read a single byte whose size in bytes is 1), 1 (we want to read one byte) and the file pointer. fread returns the number of items read, so the while condition holds as long as fread reads bytes; once we reach the end of the file, fread can no longer read anything and returns 0 (which in C is interpreted as a false value) and the loop ends. Again, we must close the file at the end.

More On Functions (Recursion, Function Pointers)

There's more to be known about functions.

An important concept in programming is recursion -- the situation in which a function calls itself. Yes, it is possible, but some rules have to be followed.

When a function calls itself, we have to ensure that we won't end up in infinite recursion (i.e. the function calls itself which subsequently calls itself and so on until infinity). This crashes our program. There always has to be a terminating condition in a recursive function, i.e. an if branch that will eventually stop the function from calling itself again.

But what is this even good for? Recursion is actually very common in math and programming, many problems are recursive in nature. Many things are beautifully described with recursion (e.g. fractals). But remember: anything a recursion can achieve can also be achieved by iteration (loop) and vice versa. It's just that sometimes one is more elegant or more computationally efficient.

Let's see this on a typical example of the mathematical function called factorial. Factorial of N is defined as N x (N - 1) x (N - 2) x ... x 1. It can also be defined recursively as: factorial of N is 1 if N is 0, otherwise N times factorial of N - 1. Here is some code:

#include <stdio.h>

unsigned int factorialRecursive(unsigned int x)
{
  if (x == 0) // terminating condition
    return 1;
  else
    return x * factorialRecursive(x - 1);
}

unsigned int factorialIterative(unsigned int x)
{
  unsigned int result = 1;

  while (x > 1)
  {
    result *= x;
    x -= 1;
  }

  return result;
}

int main(void)
{
  printf("%d %d\n",factorialRecursive(5),factorialIterative(5));
  return 0;
}
factorialIterative computes the factorial by iteration. factorialRecursive uses recursion -- it calls itself. The important thing is the recursion is guaranteed to end because every time the function calls itself, it passes a decremented argument so at one point the function will receive 0 in which case the terminating condition (if (x == 0)) will be triggered which will avoid the further recursive call.

It should be mentioned that performance-wise recursion is almost always worse than iteration (function calls have certain overhead), so in practice it is used sparingly. But in some cases it is very well justified (e.g. when it makes code much simpler while creating unnoticeable performance loss).

Another thing to mention is that we can have pointers to functions; this is an advanced topic so we'll only touch on it briefly. Function pointers are pretty powerful, they allow us to create so called callbacks: imagine we are using some GUI framework and we want to tell it what should happen when a user clicks on a specific button -- this is usually done by giving the framework a pointer to our custom function that will be called by the framework whenever the button is clicked.

Dynamic Allocation (Malloc)

Dynamic memory allocation means the possibility of reserving additional memory (RAM) for our program at run time, whenever we need it. This is opposed to static memory allocation, i.e. reserving memory for use at compile time (when compiling, before the program runs). We've already been doing static allocation whenever we created a variable -- the compiler automatically reserves as much memory for our variables as is needed. But what if we're writing a program but don't yet know how much memory it will need? Maybe the program will be reading a file but we don't know how big that file is going to be -- how much memory should we reserve? Dynamic allocation allows us to reserve this memory with function calls while the program is actually running and already knows how much of it should be reserved.

It must be known that dynamic allocation comes with a new kind of bug known as a memory leak. It happens when we reserve memory and forget to free it after we no longer need it. If this happens e.g. in a loop, the program will continue to "grow", eating more and more RAM until the operating system has no more to give. For this reason, as well as others such as simplicity, it may sometimes be better to go with static allocation only.

Anyway, let's see how we can allocate memory if we need to. We use mostly just two functions that are provided by the stdlib library. One is malloc which takes as an argument the size of the memory we want to allocate (reserve) in bytes and returns a pointer to this allocated memory if successful or NULL if the memory couldn't be allocated (which in serious programs we should always check). The other function is free which frees the memory when we no longer need it (every allocated memory should be freed at some point) -- it takes as its only parameter a pointer to the memory we've previously allocated. There is also another function called realloc which serves to change the size of an already allocated memory block: it takes a pointer to the allocated memory and the new size in bytes, and returns a pointer to the resized memory.

Here is an example:

#include <stdio.h>
#include <stdlib.h>

#define ALLOCATION_CHUNK 32 // by how many bytes to resize

int main(void)
{
  int charsRead = 0;
  int resized = 0; // how many times we called realloc
  char *inputChars = malloc(ALLOCATION_CHUNK * sizeof(char));

  while (1) // read input characters
  {
    int c = getchar();

    if (c == '\n' || c == EOF)
      break;

    if ((charsRead % ALLOCATION_CHUNK) == 0)
    {
      inputChars = // we need more space, resize the array
        realloc(inputChars,(charsRead / ALLOCATION_CHUNK + 1) * ALLOCATION_CHUNK * sizeof(char));

      resized += 1;
    }

    inputChars[charsRead] = c;
    charsRead += 1;
  }

  puts("The string you entered backwards:");

  while (charsRead > 0)
  {
    putchar(inputChars[charsRead - 1]);
    charsRead -= 1;
  }

  putchar('\n');

  free(inputChars); // important!

  printf("I had to resize the input buffer %d times.\n",resized);

  return 0;
}
This code reads characters from the input and stores them in an array (inputChars) -- the array is dynamically resized if more characters are needed. (We refrain from calling the array inputChars a string because we never terminate it with 0, so we couldn't print it with standard functions like puts.) At the end the entered characters are printed backwards (to prove we really stored all of them), and we print out how many times we needed to resize the array.

We define a constant (macro) ALLOCATION_CHUNK that says by how many characters we'll be resizing our character buffer. I.e. at the beginning we create a character buffer of size ALLOCATION_CHUNK and start reading input characters into it. Once it fills up, we resize the buffer by another ALLOCATION_CHUNK characters and so on. We could be resizing the buffer by single characters but that's usually inefficient (the realloc function may be quite complex and take some time to execute).

The line starting with char *inputChars = malloc(... creates a pointer to char -- our character buffer -- to which we assign a chunk of memory allocated with malloc. Its size is ALLOCATION_CHUNK * sizeof(char). Note that for simplicity we don't check if inputChars is not NULL, i.e. whether the allocation succeeded -- but in your program you should do it :) Then we enter the character reading loop inside which we check whether the buffer has filled up (if ((charsRead % ALLOCATION_CHUNK) == 0)). If so, we use the realloc function to increase the size of the character buffer. The important thing is that once we exit the loop and print the characters stored in the buffer, we free the memory with free(inputChars); as we no longer need it.

Debugging, Optimization

Debugging means localizing and fixing bugs (errors) in your program. In practice there are always bugs, even in very short programs (you've probably already figured that out yourself), some small and insignificant and some pretty bad ones that make your program unusable, vulnerable or even dangerous.

There are two kinds of bugs: syntactic errors and semantic errors. A syntactic error is when you write something not obeying the C grammar, it's like a typo or grammatical error in a normal language -- these errors are very easy to detect and fix, a compiler won't be able to understand your program and will point you to the exact place where the error occurs. A semantic error can be much worse -- it's a logical error in the program; the program will compile and run but the program will behave differently than intended. The program may crash, leak memory, give wrong results, run slowly, corrupt files etc. These errors may be hard to spot and fix, especially when they happen in rare situations. We'll be only considering semantic errors from now on.

If we spot a bug, how do we fix it? The first thing is to find a way to replicate it, i.e. find the exact steps we need to make with the program to make the bug appear (e.g. "in the menu press keys A and B simultaneously", ...). Next we need to trace and locate which exact line or piece of code is causing the bug. This can be done with the help of specialized debuggers such as gdb or valgrind, but there's usually a much easier way: using printing functions such as printf. (Still do check out the above mentioned debuggers, they're very helpful.)

Let's say your program crashes and you don't know at which line. You simply put prints such as printf("A\n"); and printf("B\n"); at the beginning and end of the code you suspect might be causing the crash. Then you run the program: if A is printed but B isn't, you know the crash happened somewhere between the two prints, so you shift the B print a little bit up and so on until you find exactly after which line B stops printing -- this is the line that crashes the program. IMPORTANT NOTE: the prints have to have a newline (\n) at the end, otherwise this method may not work because of output buffering.

Of course, you may use the prints in other ways, for example to detect at which place a value of variable changes to a wrong value. (Asserts are also good for keeping an eye on correct values of variables.)

What if the program isn't exactly crashing but is giving wrong results? Then you need to trace the program step by step (not exactly line by line, but maybe function by function) and check which step has a problem in it. If for example your game AI is behaving stupid, you firstly check (with prints) if it correctly detects its circumstances, then you check whether it makes the correct decision based on the circumstances, then you check whether the pathfinding algorithm finds the correct path etc. At each step you need to know what the correct behavior should be and you try to find where the behavior is broken.

Knowing how to fix a bug isn't everything, we also need to find the bugs in the first place. Testing is the process of trying to find bugs by simply running and using the program. Remember, testing can't prove there are no bugs in the program, it can only prove bugs exist. You can do testing manually or automate the tests. Automated tests are very important for preventing so called regressions (so the tests are called regression tests). Regression happens when during further development you break some of the program's already working features (it is very common, don't think it won't be happening to you). A regression test (which can simply be a normal C program) automatically checks whether the already implemented functions still give the same results as before (e.g. that sin(0) = 0 etc.). These tests should be run and pass before releasing any new version of the program (or even before any commit of new code).

Optimization is also a process of improving an already working program, but here we try to make the program more efficient -- the most common goal is to make the program faster, smaller or consume less RAM. This can be a very complex task, so we'll only mention it briefly.

The very basic thing we can do is to turn on automatic optimization with a compiler flag: -O3 for speed, -Os for program size (-O2 and -O1 are less aggressive speed optimizations). Yes, it's that simple, you simply add -O3 and your program gets magically faster. Remember that optimizations against different resources are often antagonistic, i.e. speeding up your program typically makes it consume more memory and vice versa. You need to choose. Optimizing manually is a great art. Let's suppose you are optimizing for speed -- the first, most important thing is to locate the part of code that's slowing down your program the most, the so called bottleneck. That is the code you want to make faster. Trying to optimize non-bottlenecks doesn't speed up your program as a whole much; imagine you make a part of the code that takes 1% of total execution time twice as fast -- your program as a whole only gets about 0.5% faster. Bottlenecks can be found using profiling -- measuring the execution time of different parts of the program (e.g. each function). This can be done manually or with tools such as gprof. Once you know where to optimize, you try to apply different techniques: using algorithms with better time complexity, using look up tables, optimizing cache behavior and so on. This is beyond the scope of this tutorial.

Final Program


Where To Go Next

We haven't nearly covered the whole of C, but you should have pretty solid basics now. Now you just have to go and write a lot of C programs, that's the only way to truly master C. WARNING: Do not start with an ambitious project such as a 3D game. You won't make it and you'll get demotivated. Start very simple (a Tetris clone perhaps?).

You should definitely learn about common data structures (linked lists, binary trees, hash tables, ...) and algorithms (sorting, searching, ...). As an advanced programmer you should definitely know a bit about memory management. Also take a look at basic licensing. Another thing to learn is some version control system, preferably git, because this is how we manage bigger programs and how we collaborate on them. To start making graphical programs you should get familiar with some library such as SDL.

A great amount of experience can be gained by contributing to some existing project, collaboration really boosts your skill and knowledge of the language. This should only be done when you're at least intermediate. Firstly look up a nice project on some git hosting site, then take a look at the bug tracker and pick a bug or feature that's easy to fix or implement (low hanging fruit).


Data Hoarding



Data Structure

Data structure refers to any specific way in which data is organized in computer memory. A specific data structure describes such things as order, relationships (interconnection, hierarchy, ...), formats and types of the parts of the data. Programming is sometimes seen as consisting mainly of two things: design of algorithms and of the data structures these algorithms work with.

As a programmer dealing with a specific problem you oftentimes have a choice of multiple data structures -- choosing the right one is essential for the performance and efficiency of your program. As with everything, each data structure has advantages and also its downsides; some are faster, some take less memory etc. For example for a searchable database of text strings we may choose between a binary tree and a hash table; a hash table offers theoretically much faster search, but binary trees may be more memory efficient and offer many other efficient operations like range search and sorting (which hash tables can do, but very inefficiently).

Specific Data Structures

These are just some common ones:

See Also


De Facto

De facto is Latin for "in fact" or "by facts", it means that something holds in practice; it is contrasted with de jure ("by law"). We use the term to say whether something is actually true in reality as opposed to "just on paper".

For example in technology a so called de facto standard is something that, without being officially formalized or enforced by law beforehand, most developers naturally come to adopt so as to keep compatibility; for example the Markdown format has become the de facto standard for READMEs in FOSS development. Of course it happens often that de facto standards are later made into official standards. On the other hand there may be standards that are created by official standardizing authorities, such as the state, which however fail to gain wide adoption in practice -- these are official standards but not de facto ones. TODO: example? :)

Regarding politics and society, we often talk about de facto freedom vs de jure freedom. For example in the context of free (as in freedom) software it is stressed that software ought to bear a free license -- this is to ensure de jure freedom, i.e. legal rights to being able to use, study, modify and share such software. However in these talks the de facto freedom of software is often forgotten; the legal (de jure) freedom is worth nothing if it doesn't imply real and practical (de facto) freedom to exercise the rights given by the license; for example if a piece of "free" (having a free license) software is extremely bloated, our practical ability to study and modify it gets limited because doing so gets considerably expensive and therefore limits the number of people who can truly exercise those rights in practice. This issue of diminishing de facto freedom of free software is addressed e.g. by the suckless movement, and of course our LRS movement.

There is also a similar situation regarding free speech: if speech is free only de jure, i.e. we can "in theory" legally speak relatively freely, BUT in reality we CANNOT actually speak freely because e.g. of fear of being cancelled, then our speech is de facto not free.


Deferred Shading

In computer graphics programming deferred shading is a technique for speeding up the rendering of (mainly) shaded 3D graphics (i.e. graphics with textures, materials, normal maps etc.). It is nowadays used in many advanced 3D engines. In principle of course the idea may also be used in 2D graphics and outside graphics.

The principle is following: in normal forward shading (non-deferred) the shading computation is applied immediately to any rendered pixel (fragment) as they are rendered. However, as objects can overlap, many of these expensively computed pixels may be overwritten by pixels of other objects, so many pixels end up being expensively computed but invisible. This is of course wasted computation. Deferred shading only computes shading of the pixels that will end up actually being visible -- this is achieved by two rendering passes:

  1. At first geometry is rendered without shading, only with information that is needed for shading (for example normals, material IDs, texture IDs etc.). The rendered image is stored in so called G-buffer which is basically an image in which every pixel stores the above mentioned shading information.
  2. The second pass applies the shading effects by applying the pixel/fragment shader on each pixel of the G-buffer.

This is especially effective when we're using very expensive/complex pixel/fragment shaders AND we have many overlapping objects. Sometimes deferred shading may be replaced by simply ordering the rendered models, i.e. rendering front-to-back, which may achieve practically the same speed up. In simple cases deferred shading may not even be worth it -- in LRS programs we may use it only rarely.

Deferred shading also comes with complications, for example rasterization anti aliasing can't be used because, of course, anti-aliasing in G-buffer doesn't really make sense. This is usually solved by some screen-space antialiasing technique such as FXAA, but of course that may be a bit inferior. Transparency also poses an issue.



Democracy

Democracy stands for rule of the people, it is a form of government that somehow lets all citizens collectively make political decisions, which is usually implemented by voting but possibly also by other means. The opposite of democracy is autocracy (for example dictatorship), the absolute rule of a single individual. Democracy may take different forms, e.g. direct (people directly vote on specific questions) or representative (people vote for officials who then make decisions on their behalf).

Democracy does NOT equal voting, even though this simplification is too often made. Voting doesn't imply democracy and democracy doesn't require voting, an alternative to voting may be for example a scientifically made decision. Democracy in the wide sense doesn't even require a state or legislation -- true democracy simply means that rules and actions of a society are controlled by all the people and in a way that benefits all the people. Even though we are led to believe we live in a democratic society, the truth is that a large scale largely working democracy has never been established and that nowadays most of so called democracy is just an illusion as society clearly works for the benefit of the few richest and most powerful people while greatly abusing everyone else, especially the poorest majority of people. We do NOT live in true democracy. A true democracy would be achieved by ideal models of society such as those advocated by (true) anarchism or LRS, however some anarchists may be avoiding use of the term democracy as in many narrower contexts it implies an existence of government.

Nowadays the politics of most first world countries is based on elections and voting by people, but despite this being called democracy by the propaganda the reality is de facto not a democracy but rather an oligarchy that rules THROUGH (not by) the people, creating an illusion of democracy which however lacks a real choice (e.g. the US two party system in which people can either vote for capitalists or capitalists) or pushes the voters towards a certain choice by huge propaganda, misinformation and manipulation.

Voting may be highly ineffective and even dangerous. We have to realize that sometimes voting is awesome, but sometimes it's an extremely awful idea. Why? Consider the two following scenarios:



Demoscene

Demoscene is a hacker art subculture revolving around making so called demos, programs that produce rich and interesting audiovisual effects and which are sometimes limited by strict size constraints (so called intros). The scene originated in northern Europe sometime in the 1980s (even though things like screen hacks existed long before) among groups of crackers who were adding small signature effect screens into their cracked software (like "digital graffiti"); programming of these cool effects later became an art of its own and started to have its own competitions (sometimes with high financial prizes), so called compos, at dedicated real life events called demoparties (which themselves evolved from copyparties, real life events focused on piracy). The community is still centered mostly in Europe (primarily Finland), it is underground, out of the mainstream; Wikipedia says that by 2010 its size was estimated at 10000 people (such people are called demosceners).

Demoscene is a bittersweet topic: on one side it's awesome, full of beautiful hacking, great ideas and minimalism, on the other side there are secretive people who don't share their source code (most demos are proprietary) and ugly unportable programs that exploit quirks of specific platforms -- common ones are DOS, Commodore 64, Amiga or Windows. These guys simply try to make the coolest visuals and smallest programs, with all good and bad that comes with it. Try to take only the good of it.

Besides "digital graffiti" the scene is also perhaps a bit similar to the culture of street rap, except that there's less improvisation (obviously, making a program takes long) and competition happens between groups rather than individuals. Nevertheless the focus is on competition, originality, style etc. But demos should show off technological skills as the highest priority -- trying to "win by content" rather than programming skills is sometimes frowned upon. Individuals within a demogroup have roles such as a programmer, visual artist, music artist, director, even PR etc.

A demo isn't a video, it is a non-interactive real time executable that produces the same output on every run (even though categories outside of this may also appear). Viznut has noted that this "static nature" of demos may be due to the established culture in which demos are made for a single show to the audience. Demos themselves aren't really limited by resource constraints (well, sometimes a limit such as 4 MB is imposed), it's where the programmers can show off all they have. However compos are often organized for intros, demos whose executable size is limited (i.e. NOT the size of the source code, like in code golfing, but the size of the compiled binary). The main categories are 4 KiB intros and 64 KiB intros, rarely also 256 KiB intros (all sizes are in kibibytes). Apparently even such categories as 256 byte intros appear. Sometimes a platform may also be specified (e.g. Commodore 64, PC etc.). The winner of a compo is decided by voting.

Some of the biggest demoparties are or were Assembly (Finland), The Party (Denmark), The Gathering (Norway), Kindergarden (Norway) and Revision (Germany). A guy on https://mlab.taik.fi/~eye/demos/ says that he has never seen a female demo programmer and that females often have free entry to demoparties while men have to pay because there are almost no women anyway xD Some famous demogroups include Farbrausch (Germany, also created a tiny 3D shooter game .kkrieger), Future Crew (Finland), Pulse (international), Haujobb (international), Conspiracy (Hungary) and Razor 1911 (Norway). { Personally I liked best the name of a group that called themselves Byterapers. ~drummyfish } There is an online community of demosceners at https://www.pouet.net.

On the technological side of demos: a great amount of hacking, exploitation of bugs and errors and usage of techniques going against "good programming practices" goes into the making of demos. They're usually made in C, C++ or assembly (though some retards even make demos in Java lmao). In intros it is extremely important to save space wherever possible, so things such as procedural generation and compression are heavily used. Manual assembly optimization for size can take place. Tracker music, chiptune, fractals and ASCII art are very popular. New techniques are still being discovered, e.g. bytebeat. GLSL shader source code that's to be embedded in the executable has to be minified or compressed. Compiler flags are chosen so as to minimize size, e.g. optimization for small size (-Os), turning off buffer security checks or turning on fast float operations. The final executable is also additionally compressed with specialized executable compression.

See Also



Dependency is something your program (or similar system) depends on -- dependencies are bad! Among programmers the term dependency hell refers to a very common situation of having to deal with the headaches of managing dependencies. Unfortunately dependencies are also unavoidable. We at least try to minimize dependencies as much as possible while keeping our program functioning as intended, and those we can't avoid we try to abstract (see portability) in order to be able to quickly drop-in replace them with alternatives.

Having many dependencies is a sign of bloat and bad design. Unfortunately this is the reality of mainstream programming. For example at the time of writing this Chromium in Debian requires (recursively) 395 packages LMAO xD And these are just runtime dependencies...

In software development context we usually talk about software dependencies, typically libraries and other software packages. However, there are many other types of dependencies we need to consider when striving for the best programs. Let us list just some of the possible types:

A good program will take into account all these kinds of dependencies and try to minimize them to offer freedom, stability and safety while keeping its functionality or reducing it only very little.

Why are dependencies so bad? Because your program is for example:

How to Avoid Them




"God doesn't play dice." --some German dude

Deterministic system (such as a computer program or an equation) is one which over time evolves without any involvement of randomness and probability; i.e. its current state along with the rules according to which it behaves unambiguously and precisely determine its following states. This means that a deterministic algorithm will always give the same result if run multiple times with the same input values. Determinism is an extremely important concept in computer science and programming (and in many other fields of science and philosophy).

Determinism is also a philosophical theory and aspect of physics theories -- here it signifies that our Universe is deterministic, i.e. that everything is already predetermined by the state of the universe and the laws of physics, i.e. that we don't have "free will" (whatever it means) because our brains are just machines following laws of physics like any other matter etc. Many normies believe quantum physics disproves determinism which is however not the case, there may e.g. exist hidden variables that still make quantum physics deterministic -- some believe the Bell test disproved hidden variables but again this is NOT the case as it relies on statistical independence of the experimenters, determinism is already possible if we consider the choices of experimenters are also predetermined (this is called superdeterminism). Einstein and many others still believed determinism was the way the Universe works even after quantum physics emerged. { This also seems correct to me. Sabine Hossenfelder is another popular physicist promoting determinism. ~drummyfish } Anyway, this is already beyond the scope of technological determinism.

Computers are mostly deterministic by nature and design, they operate by strict rules and engineers normally try to eliminate any random behavior as that is mostly undesirable (with certain exceptions mentioned below) -- randomness leads to hard to detect and hard to fix bugs, unpredictability etc. Determinism has furthermore many advantages, for example if we want to record a behavior of a deterministic system, it is enough if we record only the inputs to the system without the need to record its state which saves a great amount of space -- if we later want to replay the system's behavior we simply rerun the system with the recorded inputs and its behavior will be the same as before (this is exploited e.g. in recording gameplay demos in video games such as Doom).

Determinism can however also pose a problem, notably e.g. in cryptography where we DO want true randomness e.g. when generating seeds. Determinism in this case implies that an attacker knowing the conditions under which we generated the seed can exactly replicate the process and arrive at the seed value that's supposed to be random and secret. For this reason some CPUs come with special hardware for generating truly random numbers.

Despite the natural determinism of computers as such, computer programs nowadays aren't always automatically deterministic -- if you're writing a typical interactive computer program under some operating system, you have to make some extra bit of effort to make it deterministic. This is because there are things such as possible difference in timings or not perfectly specified behavior of floating point types in your language; for example a game running on slower computer will render fewer frames per second and if it has FPS-dependent physics, the time step of the physics engine will be longer on this computer, possibly resulting in slightly different physics behavior due to rounding errors. This means that such program run with the same input data will produce different results on different computers or under slightly different circumstances, i.e. it would be non-deterministic.

Nevertheless we almost always want our programs to be deterministic (or at least deterministic under some conditions, e.g. on the specific hardware platform we are using) -- always try to make your programs deterministic unless you have a VERY good reason not to! It doesn't take a huge effort to achieve determinism, it's more a matter of taking the right design decisions (e.g. separating rendering and physics simulation), i.e. good programming leads to determinism and vice versa, determinism in your program indicates good programming. The reason why we want determinism is that such programs have great properties, e.g. easier debugging (bugs are reproducible just by knowing the exact inputs), easy and efficient recording of activity (e.g. demos in games), sometimes even time reversibility (like undos, but watch out -- this doesn't hold in general!). Determinism also itself serves as a kind of test that the program is working right -- if your program can take recorded inputs and reproduce the same behavior on every run, it shows that it's written well, without things like undefined behavior affecting it.

{ The previous paragraph is here because I've talked to people who thought that determinism was some UBER feature that requires a lot of work and so on ("OMG Trackmania is deterministic, what a feat!") -- this is NOT the case. It may intuitively seem so to non-programmers or beginners, but really this is not the case. Non-determinism in software appears usually due to a fuck up, ignorance or bad design choice made by someone with a low competence. Trust me, determinism is practically always the correct way of making programs and it is NOT hard to do. ~drummyfish }

Even if we're creating a program that somehow works with probability, we usually want to make it deterministic! This means we don't use actual random numbers but rather pseudorandom number generators that output chaotic values which simulate randomness, but which will nevertheless be exactly the same when run multiple times with the same initial seed. This is again important e.g. for debugging the system, in which replicating the bug is key to fixing it. If under normal circumstances you want the program to really behave differently in each run, you make it so only by altering its initial random seed.

In theoretical computer science non-determinism means that a model of computation, such as a Turing machine, may at certain points decide to make one of several possible actions, whichever is somehow most convenient, e.g. the one that will lead to finding a solution in the shortest time. Or in other words it means that the model makes many computations, each along a different path, and at the end we conveniently pick the "best" one, e.g. the fastest one. Then we may talk e.g. about how the computational strength or speed of computation differ between a deterministic and non-deterministic Turing machine etc.

Determinism does NOT guarantee reversibility, i.e. if we know a state of a deterministic system, it may not always be possible to say from which state it evolved, or in other words: a system that's deterministic may or may not be deterministic in reverse time direction. This reversibility is only possible if the rules of the system are such that no state can evolve from two or more different states. If this holds then it is always possible to time-reverse the system and step it backwards to its initial state. This may be useful for things such as undos in programs. Also note that even if a system is reversible, it may be computationally very time consuming and sometimes practically impossible to reverse the system (imagine e.g. reversing a cryptographic hash -- mathematical reversibility of such hash may be arbitrarily ensured by e.g. pairing each hash with the lowest value that produces it).

Is floating point deterministic? In theory even floating point arithmetic can of course be completely deterministic but there is the question of whether this holds about concrete specifications and implementations of floating point (e.g. in different programming languages) -- here in theory non-determinism may arise e.g. by some unspecified behavior such as rounding rules. In practice you can't rely on float being deterministic. The common float standard, IEEE 754, is basically deterministic, including rounding etc. (except for possible payload of NaNs, which shouldn't matter in most cases), but this e.g. doesn't hold for floating point types in C!



Devuan is a GNU/Linux distribution that's practically identical to Debian (it is its fork) but without systemd as well as without packages that depend on the systemd malware. Devuan offers a choice of several init systems, e.g. openrc, sysvinit and runit. It was first released in 2017.

Notice how Devuan rhymes less with lesbian than Debian.

Despite some flaws (such as being Linux with all the bloat and proprietary blobs), Devuan is still one of the best operating systems for most people and it is at this time recommended by us over most other distros not just for avoiding systemd, but mainly for its adoption of Debian free software definition that requires software to be free as a whole, including its data (i.e. respecting also free culture). It is also a nicely working unix system that's easy to install and which is still relatively unbloated.

{ I can recommend Devuan, I've been using it as my main OS for several years. ~drummyfish }



Digital technology is that which works with whole numbers, i.e. discrete values, as opposed to analog technology which works with real numbers, i.e. continuous values (note: do not confuse things such as floating point with truly continuous values!). The name digital is related to the word digit as digital computers store data by digits, e.g. in 1s and 0s if they work in binary.

Normies confuse digital with electronic or think that digital computers can only be electronic, that digital computers can only work in binary or have other weird assumptions whatsoever. This is indeed false! An abacus is a digital device. Fucking normies.

The advantage of digital technology is its resilience to noise which prevents degradation of data and accumulation of error -- if a digital picture is copied a billion times, it will very likely remain unchanged, whereas performing the same operation with analog picture would probably erase most of the information it bears due to loss of quality in each copy. Digital technology also makes it easy and practically possible to create fully programmable general purpose computers of great complexity.

Digital vs analog, simple example: imagine you draw two pictures with a pencil: one in a normal fashion on a normal paper, the other one on a grid paper, by filling specific squares black. The first picture is analog, i.e. it records continuous curves and position of each point of these curves can be measured down to extremely small fractions of millimeters -- the advantage is that you are not limited by any grid and can draw any shape at any position on the paper, make any wild curves with very fine details, theoretically even microscopic ones. The other picture (on a square grid) is digital, it is composed of separate points whose position is described only by whole numbers (x and y coordinates of the filled grid squares), the disadvantage is that you are limited by only being able to fill squares on predefined positions so your picture will look blocky and limited in amount of detail it can capture (anything smaller than a single grid square can't be captured properly), the resolution of the grid is limited, but as we'll see, imposing this limitations has advantages. Consider e.g. the advantage of the grid paper image with regards to copying: if someone wants to copy your grid paper image, it will be relatively easy and he can copy it exactly, simply by filling the exact same squares you have filled -- small errors and noise such as imperfectly filled squares can be detected and corrected thanks to the fact that we have limited ourselves with the grid, we know that even if some square is not filled perfectly, it was probably meant to be filled and we can eliminate this kind of noise in the copy. This way we can copy the grid paper image a million times and it won't change. 

On the other hand the normal, non-grid image will become distorted with every copy and in fact even the original image will become distorted by aging; even if whoever is copying the image tries to trace it extremely precisely, small errors will appear and these errors will accumulate in further copies, and any noise that appears in the image or in the copies is a problem because we don't know if it really is noise or something that was meant to be in the image.

Of course, digital data may become distorted too, it is just less likely and it's easier to deal with this. It for example happens that space particles (and similar physics phenomena, e.g. some quantum effects) flip bits in computer memory, i.e. there is always a probability of some bit flipping from 0 to 1 or vice versa. We call this data corruption. This may also happen due to physical damage to digital media (e.g. scratches on the surface of CDs), imperfections in computer network transmissions (e.g. packet loss over wifi) etc. However we can introduce further measures to prevent, detect and correct data corruption, e.g. by keeping redundant copies (2 copies of data allow detecting corruption, 3 copies allow even its correction), keeping checksums or hashes (which allow only detection of corruption but don't take much extra space), employing error correcting codes etc.

Another way in which digital data can degrade similarly to analog data is reencoding between lossy-compressed formats (in the spirit of the famous "needs more jpeg" meme). A typical example is digital movies: as new standards for video encoding emerge, old movies are being reconverted from old formats to the new ones, however as video is quite heavily lossy-compressed, losses and distortion of information happen between the reencodings. This is best seen in videos and images circulating on the internet that are constantly being ripped and converted between different formats. This way it may happen that digital movies recorded nowadays will only survive into the future in very low quality, just like old analog movies survived until today in degraded quality. This can be prevented by storing the original data only with lossless compression and deriving the release for each newly emerging format from that original.


Digital Signature

Digital signature is a method of mathematically (with cryptographic algorithms) proving that, with a very high probability, a digital message or document has been produced by a specific sender, i.e. it is something akin to a traditional signature, which gives a "proof" that something has been written by a specific individual.

It works on the basis of asymmetric cryptography: the signature of a message is a pair of a public key and a number (the signature) which can only have been produced by the owner of the private key associated with the public key. This signature is dependent on the message data itself, i.e. if the message is modified, the signature will no longer be valid, preventing anyone who doesn't possess the private key from modifying the message. The signature number can for example be a hash of the message encrypted with the private key -- anyone can decrypt the signature with the public key and check that it gives the document hash, proving that whoever computed the signature must have possessed the private key.

Signatures can be computed e.g. with the RSA algorithm.

The nice thing here is that anonymity can be kept with digital signatures; no private information such as the signer's real name is required to be revealed, only his public key. Someone may ask why we then even sign documents if we don't know who signed them lol? But of course the answer is obvious: many times we don't need to know the identity of the signer, we just need to know that different messages have all been written by the same person, and this is what a digital signature can ensure. And of course, if we want, a public key can have a real identity assigned if desirable, it's just that it's not required.



In the hacker jargon dinosaur is a type of a big, very old, mostly non-interactive (batch), possibly partly mechanical computer, usually an IBM mainframe from 1940s and 1950s (so called Stone Age). They resided in dinosaur pens (mainframe rooms).

{ This is how I understood it from the Jargon File. ~drummyfish }



Distance is a measure of how far away from each other two points are. Most commonly distance refers to physical separation in space, e.g. as in distance of planets from the Sun, but more generally distance can refer to any kind of parameter space and in any number of dimensions, e.g. the distance of events in time measured in seconds (1D distance) or distance of two text strings as the amount of their dissimilarity (Levenshtein distance). Distances are extremely important in computer science and math as they allow us to do such things as clustering, path searching, physics simulations, various comparisons, sorting etc.

Distance is similar/related to length, the difference is that distance is computed between two points while length is the distance of one point from some implicit origin.

There are many ways to define distance within a given space. The most common and implicitly assumed distance is the Euclidean distance (basically the length of the "straight line from point A to point B", computed with the Pythagorean theorem), but other distances are possible, e.g. the taxicab distance (length of the kind of perpendicular path taxis take between points A and B in Manhattan, usually longer than the straight line). Mathematically, spaces in which distances can be measured are called metric spaces, and a distance within such a space can be any function dist (called a distance or metric function) that satisfies these axioms:

  1. dist(p,p) = 0 (distance of a point from itself is zero)
  2. Values given by dist are never negative.
  3. dist(p,q) = dist(q,p) (symmetry, distance between two points is the same in both directions).
  4. dist(a,c) <= dist(a,b) + dist(b,c) (triangle inequality)


Computing Euclidean distance requires multiplication and most importantly square root which is usually a pretty slow operation, therefore many times we look for simpler approximations. Note that a possible approach here may also lead through computing the distance normally but using a fast approximation of the square root.

Two very basic and rough approximations of Euclidean distance, both in 2D and 3D, are taxicab (also Manhattan) and Chebyshev distances. Taxicab distance simply adds the absolute coordinate differences along each principal axis (dx, dy and dz) while Chebyshev takes the maximum of them. In C (for generalization to 3D just add one coordinate of course):

int distTaxi(int x0, int y0, int x1, int y1)
{
  x0 = x1 > x0 ? x1 - x0 : x0 - x1; // dx
  y0 = y1 > y0 ? y1 - y0 : y0 - y1; // dy
  return x0 + y0;
}

int distCheb(int x0, int y0, int x1, int y1)
{
  x0 = x1 > x0 ? x1 - x0 : x0 - x1; // dx
  y0 = y1 > y0 ? y1 - y0 : y0 - y1; // dy
  return x0 > y0 ? x0 : y0;
}

Both of these distances approximate a circle in 2D with a square or a sphere in 3D with a cube, the difference is that taxicab is an upper estimate of the distance while Chebyshev is the lower estimate. For speed of execution (optimization) it may also be important that taxicab distance only uses the operation of addition while Chebyshev may result in branching (if) in the max function which is usually not good for performance.

A bit more accuracy can be achieved by averaging the taxicab and Chebyshev distances, which in 2D approximates a circle with an 8 segment polygon and in 3D approximates a sphere with a 24 sided polyhedron. The integer-only C code follows:

int dist8(int x0, int y0, int x1, int y1)
{
  x0 = x1 > x0 ? x1 - x0 : x0 - x1; // dx
  y0 = y1 > y0 ? y1 - y0 : y0 - y1; // dy
  return (x0 + y0 + (x0 > y0 ? x0 : y0)) / 2;
}

{ The following is an approximation I came up with when working on tinyphysicsengine. While I measured the average and maximum error of the taxi/Chebyshev average in 3D at about 16% and 22% respectively, the following gave me 3% and 12% values. ~drummyfish }

Yet more accurate approximation of 3D Euclidean distance can be made with a 48 sided polyhedron. The principle is following: take absolute values of all three coordinate differences and order them by magnitude so that dx >= dy >= dz >= 0. This gets us into one of 48 possible slices of space (the other slices have the same shape, they just differ by ordering or signs of the coordinates but the distance in them is of course equal). In this slice we'll approximate the distance linearly, i.e. with a plane. We do this by simply computing the distance of our point from a plane that goes through origin and whose normal is approximately {0.8728,0.4364,0.2182} (it points in the direction that goes through the middle of space slice). The expression for the distance from this plane simplifies to simply 0.8728 * dx + 0.4364 * dy + 0.2182 * dz. The following is an integer-only implementation in C (note that the constants above have been converted to allow division by 1024 for possible optimization of division to a bit shift):

int32_t dist48(
  int32_t x0, int32_t y0, int32_t z0,
  int32_t x1, int32_t y1, int32_t z1)
{
  x0 = x1 > x0 ? x1 - x0 : x0 - x1; // dx
  y0 = y1 > y0 ? y1 - y0 : y0 - y1; // dy
  z0 = z1 > z0 ? z1 - z0 : z0 - z1; // dz

  if (x0 < y0) // order the coordinates
  {
    if (x0 < z0)
    {
      if (y0 < z0)
      { // x0 < y0 < z0
        int32_t t = x0; x0 = z0; z0 = t;
      }
      else
      { // x0 < z0 < y0
        int32_t t = x0; x0 = y0; y0 = t;
        t = z0; z0 = y0; y0 = t;
      }
    }
    else
    { // z0 < x0 < y0
      int32_t t = x0; x0 = y0; y0 = t;
    }
  }
  else
  {
    if (y0 < z0)
    {
      if (x0 < z0)
      { // y0 < x0 < z0
        int32_t t = y0; y0 = z0; z0 = t;
        t = x0; x0 = y0; y0 = t;
      }
      else
      { // y0 < z0 < x0
        int32_t t = y0; y0 = z0; z0 = t;
      }
    }
  }

  return (893 * x0 + 446 * y0 + 223 * z0) / 1024;
}

A similar approximation for 2D distance is (from a 1984 book Problem corner) this: sqrt(dx^2 + dy^2) ~= 0.96 * dx + 0.4 * dy for dx >= dy >= 0. The error is <= 4%. This can be optionally modified to use the closest power of 2 constants so that the function becomes much faster to compute, but the maximum error increases (seems to be about 11%). C code with fixed point follows (commented out line is the faster, less accurate version):

int dist2DApprox(int x0, int y0, int x1, int y1)
{
  x0 = x0 > x1 ? (x0 - x1) : (x1 - x0);
  y0 = y0 > y1 ? (y0 - y1) : (y1 - y0);

  if (x0 < y0)
  {
    x1 = x0; // swap
    x0 = y0;
    y0 = x1;
  }

  return (123 * x0 + 51 * y0) / 128; // max error = ~4%
  //return x0 + y0 / 2;              // faster, less accurate
}

TODO: this https://www.flipcode.com/archives/Fast_Approximate_Distance_Functions.shtml


Dodleston Mystery

The Dodleston mystery concerns teacher Ken Webster, who in 1984 supposedly started exchanging messages with people from the past and future, most notably people from the 16th and 22nd century, via files on a BBC Micro computer. While probably a hoax and creepypasta, there are some interesting unexplained details... and it's a fun story.

The guy has written a proprietary book about it, called The Vertical Plane.

{ If the story is made up and maybe even if it isn't it may be a copyright violation to reproduce the story with all the details here so I don't know if I should, but reporting on a few facts probably can't hurt. Yes, this is how bad the copyrestriction laws have gotten. ~drummyfish }



Here is the dog! He doesn't judge you; dog love is unconditional. No matter who you are or what you ever did, this buddy will always love you and be your best friend <3 By this he is giving us a great lesson.

He loves when you pet him and take him for walks, but most of all he probably enjoys to play catch :) Throw him a ball!

Send this to anyone who's feeling down :)

  _     /  \
 ((    / 0 0)
  \\___\/ _o)
  (        |  WOOOOOOOF
  | /___| |(
  |_)_) |_)_)

See Also



Doom is a legendary video game released in 1993, perhaps the most famous game of all time, the game that popularized the first person shooter genre and shocked by its at the time extremely advanced 3Dish graphics. It was made by Id Software, most notably by John Carmack (graphics + engine programmer) and John Romero (tool programmer + level designer). Doom is sadly proprietary, it was originally distributed as shareware (a free "demo" was available for playing and sharing with the option to buy a full version). However the game engine was later (1999) released as free (as in freedom) software under GPL which gave rise to many source ports. The assets remain non-free but a completely free alternative is offered by the Freedoom project that has created free as in freedom asset replacements for the game. Anarch is an official LRS game inspired by Doom, completely in the public domain.

{ Great books about Doom I can recommend: Masters of Doom (about the development) and Game Engine Black Book: Doom (details about the engine internals). ~drummyfish }

Partially thanks to the free release of the engine and its relatively suckless design (C language, software rendering, ...), Doom has been ported, both officially and unofficially, to a great number of platforms (e.g. Gameboy Advance, PS1, even SNES) and has become a kind of de facto standard benchmark for computer platforms -- you will often hear the phrase: "but does it run Doom?" Porting Doom to any platform has become kind of a meme, someone allegedly even ported it to a pregnancy test (though it didn't actually run on the test, it was really just a display). { Still Anarch may be even more portable than Doom :) ~drummyfish }

The Doom engine was revolutionary and advanced (not only but especially) video game graphics by a great leap, considering its predecessor Wolf3D was really primitive in comparison (Doom basically set the direction for future trends in games such as driving the development of more and more powerful GPUs in a race for more and more impressive visuals). Doom used a technique called BSP rendering that was able to render realtime 3D views of textured environments with distance fog and enemies and items represented by 2D billboards ("sprites"). No GPU acceleration was used, graphics was rendered purely with CPU (so called software rendering, GPU rendering would come with Doom's successor Quake). This had its limitations, for example the camera could not tilt up and down and the levels could not have rooms above other rooms. For this reason some call Doom "pseudo 3D" or 2.5D rather than "true 3D". Nevertheless, though with limitations, Doom did present 3D views and internally it did work with 3D coordinates (for example the player or projectiles have 2D position plus height coordinate), despite some dumb YouTube videos saying otherwise. For this reason we prefer to call Doom a primitive 3D engine, but 3D nonetheless. However Doom was not just a game with good graphics, it had extremely good gameplay, legendary music and art style and introduced the revolutionary deathmatch multiplayer, as well as a HUGE modding and mapping community. It was a success in every way -- arguably no other game has since achieved a greater revolution than Doom.

Doom's source code is written in C89 and is about 36000 lines of code long. The original system requirements called for roughly a 30 MHz CPU and 4 MB RAM as a minimum. The game had 27 levels (9 of which were shareware), 8 weapons and 10 enemy types.

The game only used fixed point, no float!

Doom also has deterministic, FPS-independent physics, i.e. the time step of the game simulation is fixed (35 tics per second), which allows for efficient recording of gameplay demos and creating tool assisted speedruns. Such demos can be played back in high quality while being minuscule in size and help us in many other ways, for example in verifying the validity of speedruns. This is very nice and serves as an example of a well written engine (unlike later engines from the same creators, e.g. those of Quake games which lacked this feature).

LOL, someone created a Doom system monitor for Unix systems called psDooM where the monsters in the game are the operating system processes and killing the monsters kills the processes.


Double Buffering

In computer graphics double buffering is a technique of rendering in which we do not draw directly to video RAM, but instead to a second "back buffer", and only copy the rendered frame from the back buffer to the video RAM ("front buffer") once the rendering has been completed; this prevents flickering and the displaying of incompletely rendered frames. Double buffering requires a significant amount of extra memory for the back buffer, however it is practically a necessity for how graphics is rendered today.

In most libraries and frameworks today you don't have to care about double buffering, it's done automatically. For this reason in many frameworks you often need to indicate the end of rendering with some special command such as flip, endFrame etc. If you're going lower level, you may need to implement double buffering yourself.

Though we encounter the term mostly in computer graphics, the principle of using a second buffer in order to ensure the result is presented only when it's ready can be applied also elsewhere.

Let's take a small example: say we're rendering a frame in a 3D game. First we render the environment, then on top of it we render the enemies, then effects such as explosions and then on top of all this we render the GUI. Without double buffering we'd simply be rendering all these pixels into the front buffer, i.e. the memory that is immediately shown on the display. This would lead to the user literally seeing how first the environment appears, then enemies are drawn over it, then effects and then the GUI. Even if all this redrawing takes an extremely short time, the final frame is also shown only for a very short time before another one starts appearing, so as a result the user sees huge flickering: the environment may look kind of normal but the enemies, effects and GUI may appear transparent because they are only visible for a fraction of the frame. The user might also see e.g. enemies that are supposed to be hidden behind some object if that object is rendered after the enemies. With double buffering this won't happen as we perform the rendering into the back buffer, a memory which doesn't show on the display. Only when we have completed the frame in the back buffer do we copy it to the front buffer, pixel by pixel. Here the user may see the display changing from the old frame to the new one from top to bottom, but he will never see anything temporary, and since the old and new frames are usually very similar, this top-to-bottom update may not even be distracting (it is addressed by vertical synchronization if we really want to get rid of it).

There also exists triple buffering which uses yet another additional buffer to increase FPS. With double buffering we can't start rendering a new frame into the back buffer until the back buffer has been copied to the front buffer, which may further be delayed by vertical synchronization, i.e. we have to wait and waste some time. With triple buffering we can start rendering into the second back buffer while the first one is still being copied to the front buffer. Of course this consumes significantly more memory. Also note that triple buffering can only be considered if the hardware supports parallel rendering and copying of data, and if the FPS is actually limited by this... mostly you'll find your FPS bottleneck is elsewhere, in which case it makes no sense to try to implement triple buffering. On small devices like embedded you probably shouldn't even think about this.

Double buffering can be made more efficient by so called page flipping, i.e. allowing to switch the back and front buffer without having to physically copy the data, i.e. by simply changing the pointer of a display buffer. This has to be somehow supported by hardware.

When do we actually need double buffering? Not always, we can avoid it or suppress its memory requirements if we need to, e.g. with so called frameless rendering -- we may want to do this e.g. in embedded programming where we want to save every byte of RAM. Mainstream computers nowadays simply always run on a very fast FPS and keep redrawing the screen even if the image doesn't change, but if you write a program that only occasionally changes what's on the screen (e.g. an e-book reader), you may simply leave out double buffering and render to the front buffer only when the screen needs to change, the user probably won't notice any flicker during a single quick frame redraw. You also don't need double buffering if you're able to compute the final pixel color right away, for example with ray tracing you don't need any double buffering, unless of course you're doing some complex postprocessing. Double buffering is only needed if we compute a pixel color but that color may still change before the frame is finished. You may also use only a partial double buffer if that is possible (which may not always be): you can e.g. split the screen into 16 regions and render region by region, using only a 1/16th size double buffer. Using a palette can also make the back buffer smaller: if we use e.g. a 256 color palette, we only need 1 byte for every pixel of the back buffer instead of some 3 bytes for full RGB. The same goes for using a smaller resolution than the actual native resolution of the screen.


Downto Operator

In C the so called "downto" operator is a joke played on nubs. It goes like this: Did you know C has a hidden downto operator -->? Try it:

#include <stdio.h>

int main(void)
{
  int n = 20;

  while (n --> 10) // n goes down to 10
    printf("%d\n", n);

  return 0;
}

Indeed this compiles and works. There is of course no --> operator: the expression n --> 10 is simply parsed as (n--) > 10, i.e. the -- and > operators.



Drummyfish

Drummyfish (also tastyfish, drumy etc.) is a programmer, anarchopacifist and proponent of free software/culture, who started this wiki and invented the kind of software it focuses on: less retarded software (LRS). Besides others he has written Anarch, small3dlib, raycastlib, smallchesslib, tinyphysicsengine and SAF. He has also been creating free culture art and otherwise contributing to free projects such as OpenMW; he's been contributing with public domain art of all kinds (2D, 3D, music, ...) and writings to Wikipedia, Wikimedia Commons, opengameart, libregamewiki, freesound and others. Drummyfish is crazy, suffering from anxiety/depression/etc. (diagnosed avoidant personality disorder), and has no real life, he is pretty retarded when it comes to leading projects or otherwise dealing with people or practical life. He is a wizard.

He loves all living beings, even those whose attributes he hates or who hate him. He is a vegetarian and here and there supports good causes, for example he donates hair and gives money to homeless people who ask for it.

Drummyfish has a personal website at www.tastyfish.cz, and a gopherhole at self.tastyfish.cz.

Drummyfish's real name is Miloslav Číž, he was born on 24.08.1990 and lives in Moravia, Czech Republic, Earth (he rejects the concept of a country/nationalism, the info here serves purely to specify a location). He is a more or less straight male of the white race. He started programming at high school in Pascal, then he went on to study compsci (later focused on computer graphics) at the Brno University of Technology and got a master's degree, however he subsequently refused to find a job in the industry, partly because of his views (manifested by LRS) and partly because of mental health issues (depression/anxiety/avoidant personality disorder). He rather chose to stay closer to the working class and do less harmful slavery such as cleaning and physical spam distribution, and continues hacking on his programming (and other) projects in his spare time in order to be able to do it with absolute freedom.

In 2019 drummyfish has written a "manifesto" of his ideas called Non-Competitive Society that describes the political ideas of an ideal society. It is in the public domain under CC0 and available for download online.

{ Why doxx myself? Following the LRS philosophy, I believe information should be free. Censorship -- even in the name of privacy -- goes against information freedom. We should live in a society in which people are moral and don't abuse others by any means, including via availability of their private information. And in order to achieve ideal society we have to actually live it, i.e. slowly start to behave as if it was already in place. Of course, I can't tell you literally everything (such as my passwords etc.), but the more I can tell you, the closer we are to the ideal society. ~drummyfish }

He likes many things such as animals, peace, freedom, programming, math and games (e.g. Xonotic and OpenArena, even though he despises competitive behavior in real life).


Dynamic Programming

Dynamic programming is a programming technique that can be used to make many algorithms more efficient (faster). It works on the principle of repeatedly breaking given problem down into smaller subproblems and then solving one by one from the simplest and remembering already calculated results that can be reused later.

It is usually contrasted to the divide and conquer (DAC) technique which at first sight looks similar but is in fact quite different. DAC also subdivides the main problem into subproblems, but then solves them recursively, i.e. it is a top-down method. DAC also doesn't remember already solved subproblems and may end up solving the same subproblem multiple times, wasting computational time. Dynamic programming on the other hand starts solving the subproblems from the simplest ones -- i.e. it is a bottom-up method -- and remembers solutions to already solved subproblems in some kind of table, which makes it possible to quickly reuse the results if such a subproblem is encountered again. The order of solving the subproblems should be chosen so as to maximize the efficiency of the algorithm.

It's not the case that dynamic programming is always better than DAC, it depends on the situation. Dynamic programming is effective when the subproblems overlap and so the same subproblems WILL be encountered multiple times. But if this is not the case, DAC can be used just as well and the memory for the lookup tables is saved.


Let's firstly take a look at the case when divide and conquer is preferable. This is for instance the case with many sorting algorithms such as quicksort. Quicksort recursively divides parts of the array into halves and sorts each of those parts: sorting each of these parts is a different subproblem as these parts (at least mostly) differ in size, elements and their order. The subproblems therefore don't overlap and applying dynamic programming makes little sense.

But if we tackle a problem such as computing Nth Fibonacci number, the situation changes. Considering the definition of Nth Fibonacci number as a "sum of N-1th and N-2th Fibonacci numbers", we might naively try to apply the divide and conquer method:

int fib(int n)
{
  return (n == 0 || n == 1) ?
    n : // start the sequence with 0, 1
    fib(n - 1) + fib(n - 2); // else add two previous
}

But we can see this is painfully slow as calling fib(n - 2) computes all values already computed by calling fib(n - 1) all over again, and this inefficiency additionally appears inside these functions recursively. Applying dynamic programming we get a better code:

int fib(int n)
{
  if (n < 2)
    return n;

  int current = 1, prev = 0;

  for (int i = 2; i <= n; ++i)
  {
    int tmp = current;
    current += prev;
    prev = tmp;
  }

  return current;
}

We can see the code is longer, but it is faster. In this case we only need to remember the previously computed Fibonacci number (in practice we may need much more memory for remembering the partial results).



Earth

Well, Earth is the planet we live on. It is the third planet from the Sun of our Solar system which itself is part of the Milky Way galaxy. So far it is the only known place to have life.

Now behold the grand rendering of the Earth map in ASCII (equirectangular projection):

X      v      v      v      v      v      v      v      v      v      v      v      v      v      v      v      X
                        .-,./"">===-_.----..----..      :     -==- 
                     -=""-,><__-;;;<""._         /      :                     -===-
    ___          .=---""""\/ \/ ><."-, "\      /"       :      .--._     ____   __.-""""------""""---.....-----..
> -=_  """""---""           _.-"   \_/   |  .-" /"\     :  _.''     "..""    """                                <
"" _.'ALASKA               {_   ,".__     ""    '"'   _ : (    _/|                                         _  _.. 
  "-._.--"""-._    CANADA    "--"    "\              / \:  ""./ /                                     _--"","/
   ""          \                     _/_            ",_/:_./\_.'                     ASIA            "--.  \/
>               }                   /_\/               \:EUROPE      __  __                           /\|       <
                \            ""=- __.-"              /"":_-. -._ _, /__\ \ (                       .-" ) >-
                 \__   USA      _/                   """:___"   "  ",     ""                   ,-. \ __//
                    |\      __ /                     /"":   ""._..../                          \  "" \_/
>                    \\_  ."  \|      ATLANTIC      /   :          \\   <'\                     |               <
                        \ \_/| -=-      OCEAN       )   :AFRICA     \\_.-" """\                .'
       PACIFIC           "--._\                     \___:            "/        \ .""\_  <^,..-" __
        OCEAN                 \"""-""-.._               :""\         /          "     | _)      \_\INDONESIA
                              |   SOUTH    \            :   /      |                 "-._\_  \__/  \  ""-_
                               \ AMERICA   /            :  (       }                     """""===-  """""_  
                                \_        |             :   \      \                          __.-""._,"",
>                                 \      /              :   /      / |\                     ," AUSTRALIA  \     <
                                  |     |               :   \     /  \/      INDIAN         ";   __        )
                                  |     /               :    \___/            OCEAN           """  ""-._  / 
                                 /     /                :                                               ""   |\
>                                |    /                 :                                               {)   // <
                                 |   |                  :                                                   ""
                                 \_  \                  :
                                   """                  :
>                                     .,                :                                                       <
                       __....___  _/""  \               :          _____   ___.......___......-------...__
--....-----""""----""""         ""      "-..__    __......--"""""""     """                              .;_..... 
                                              """"      : ANTARCTICA
X      ^      ^      ^      ^      ^      ^      ^      ^      ^      ^      ^      ^      ^      ^      ^      X


Easier Done Than Said

Easier done than said is the opposite of easier said than done.

Example: exhaling, as saying the word "exhaling" requires exhaling plus doing some extra work such as correctly shaping your mouth.


Easy To Learn, Hard To Master

"Easy to learn, hard to master" (ETLHTM) is a type of design of a game (and by extension a potential property of any art or skill) which makes it relatively easy to learn to play while mastering the play (playing in near optimal way) remains very difficult.

Examples of this are games such as tetris, minesweeper or Trackmania.

LRS sees the ETLHTM design as extremely useful and desirable as it allows for creation of suckless, simple games that offer many hours of fun. With this philosophy we get a great amount of value for relatively little effort.

This is related to the fun coming from self imposed goals, another very important and useful concept in games. Self imposed goals in games are goals the player sets for himself, for example completing the game without killing anyone (so called "pacifist" gameplay) or completing it very quickly (speedrunning). Here the game serves only as a platform, a playground on which different games can be played and invented -- inventing games is fun in itself. Again, a game supporting self imposed goals can be relatively simple and offer years of fun, which is extremely cool.

The simplicity of learning a game comes from simple rules while the difficulty of its mastering arises from the complex emergent behavior these simple rules create. Mastering of the game is many times encouraged by competition among different people but also competition against oneself (trying to beat one's own score). In many simple games such as minesweeper there exists a competitive scene (based either on direct matches or some measurement of skill such as speedrunning or achieving high score) that drives people to search for strategies and techniques that optimize the play, and to train skillful execution of such play.

The opposite is hard to learn, easy to master.

See Also



not to be confused with indoctrination




Elo

The Elo system (named after Arpad Elo, NOT an acronym) is a mathematical system for rating the relative strength of players of a certain game, most notably and widely used in chess but also elsewhere (video games, table tennis, ...). Based on the number of wins, losses and draws against other Elo rated opponents, the system computes a number (rating) for each player that highly correlates with that player's current strength/skill; as games are played, ratings of players are constantly being updated to reflect changes in their strength. The numeric rating can then be used to predict the probability of a win, loss or draw of any two players in the system, as well as e.g. for constructing ladders of current top players and matchmaking players of similar strength in online games. For example if player A has an Elo rating of 1700 and player B 1400, player A is expected to win in a game with player B with a probability of 85%. Besides Elo there exist alternative and improved systems, notably e.g. the Glicko system (which further adds e.g. confidence intervals).

The Elo system was created specifically for chess (even though it can be applied to other games as well, it doesn't rely on any chess specific rules) and described by Arpad Elo in his 1978 book called The Rating of Chessplayers, Past and Present, by which time it was already in use by FIDE. It replaced older rating systems, most notably the Harkness system. Despite more "advanced" systems being around nowadays, Elo remains the most widely used one.

Elo rates only RELATIVE performance, not absolute, i.e. the rating number of a player says nothing in itself, it is only the DIFFERENCE in rating points between two players that matters, so in an extreme case two players rated 300 and 1000 in one rating pool may in another one be rated 10300 and 11000 (the difference of 700 is the only thing that stays the same, mean value can change freely). This may be influenced by initial conditions and things such as rating inflation (or deflation) -- if for example a chess website assigns some start rating to new users which tends to overestimate an average newcomer's abilities, newcomers will come to the site, play a few games which they will lose, then they ragequit but they've already fed their points to the good players, causing the average rating of a good player to grow over time.

Keep in mind Elo is a big simplification of reality, as is any attempt at capturing skill with a single number -- even though it is a very good predictor of something akin to "skill" and of outcomes of games, trying to capture skill with a single number is similar to e.g. trying to capture such a multidimensional thing as intelligence with a single dimensional IQ number. For example due to many different areas of a game to be mastered and different playstyles, transitivity may be broken in reality: it may happen that player A mostly beats player B, player B mostly beats player C and player C mostly beats player A, which Elo won't capture.

How It Works

Initial rating of players is not specified by Elo, each rating organization applies its own method (e.g. assigning an arbitrary value of, let's say, 1000, or letting the player play a few unrated games to estimate his skill).

Suppose we have two players, player 1 with rating A and player 2 with rating B. In a game between them player 1 can either win, i.e. score 1 point, lose, i.e. score 0 points, or draw, i.e. score 0.5 points.

The expected score E of a game between the two players is computed using a sigmoid function (400 is just a magic constant that's usually used, it makes it so that a positive difference of 400 points makes a player 10 times more likely to win):

E = 1 / (1 + 10^((B - A)/400))

For example if we set the ratings A = 1700 and B = 1400, we get a result E ~= 0.85, i.e. in a series of many games player 1 will get an average of about 0.85 points per game, which can mean that out of 100 games he wins 85 times and loses 15 times (but it can also mean that out of 100 games he e.g. wins 70 times and draws 30). Computing the same formula from the player 2 perspective gives E ~= 0.15 which makes sense as the numbers of points the two players are expected to gain have to add up to 1 (the formula says in what ratio the two players split the 1 point of the game).

After playing a game the ratings of the two players are adjusted depending on the actual outcome of the game. The winning player takes some amount of rating points from the loser (i.e. the loser loses the same amount of points the winner gains, which means the total number of points in the system doesn't change as a result of games being played). The new rating of player 1, A2, is computed as:

A2 = A + K * (R - E)

where R is the outcome of the game (for player 1, i.e. 1 for a win, 0 for loss, 0.5 for a draw) and K is the change rate which affects how quickly the ratings will change (can be set to e.g. 30 but may be different e.g. for new or low rated players). So with e.g. K = 25 if for our two players the game ends up being a draw, player 2 takes 9 points from player 1 (A2 = 1691, B2 = 1409, note that drawing a weaker player is below the expected result).

Some Code

Here is a C code that simulates players of different skills playing games and being rated with Elo. Keep in mind the example is simple, it uses the potentially imperfect rand function etc., but it shows the principle quite well. At the beginning each player is assigned an Elo of 1000 and a random skill which is normally distributed, a game between two players consists of each player drawing a random number in range from 0 to his skill number, the player that draws a bigger number wins (i.e. a player with higher skill is more likely to win).

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define PLAYERS 101
#define GAMES 10000
#define K 25          // Elo K factor

typedef struct
{
  unsigned int skill;
  unsigned int elo;
} Player;

Player players[PLAYERS];

double eloExpectedScore(unsigned int elo1, unsigned int elo2)
{
  return 1.0 / (1.0 + pow(10.0,((((double) elo2) - ((double) elo1)) / 400.0)));
}

int eloPointGain(double expectedResult, double result)
{
  return K * (result - expectedResult);
}

int main(void)
{
  for (int i = 0; i < PLAYERS; ++i)
  {
    players[i].elo = 1000; // give everyone initial Elo of 1000

    // normally distributed skill in range 0-99:
    players[i].skill = 0;

    for (int j = 0; j < 8; ++j)
      players[i].skill += rand() % 100;

    players[i].skill /= 8;
  }

  for (int i = 0; i < GAMES; ++i) // play games
  {
    unsigned int player1 = rand() % PLAYERS,
                 player2 = rand() % PLAYERS;

    // let players draw numbers, bigger number wins
    unsigned int number1 = rand() % (players[player1].skill + 1),
                 number2 = rand() % (players[player2].skill + 1);

    double gameResult = 0.5;

    if (number1 > number2)
      gameResult = 1.0;
    else if (number2 > number1)
      gameResult = 0.0;

    int pointGain = eloPointGain(eloExpectedScore(
      players[player1].elo,players[player2].elo),gameResult);

    players[player1].elo += pointGain;
    players[player2].elo -= pointGain;
  }

  for (int i = PLAYERS - 2; i >= 0; --i) // bubble-sort by Elo
    for (int j = 0; j <= i; ++j)
      if (players[j].elo < players[j + 1].elo)
      {
        Player tmp = players[j];
        players[j] = players[j + 1];
        players[j + 1] = tmp;
      }

  for (int i = 0; i < PLAYERS; i += 5) // print
    printf("#%d: Elo: %d (skill: %d%%)\n",i,players[i].elo,players[i].skill);

  return 0;
}

The code may output e.g.:

#0: Elo: 1134 (skill: 62%)
#5: Elo: 1117 (skill: 63%)
#10: Elo: 1102 (skill: 59%)
#15: Elo: 1082 (skill: 54%)
#20: Elo: 1069 (skill: 58%)
#25: Elo: 1054 (skill: 54%)
#30: Elo: 1039 (skill: 52%)
#35: Elo: 1026 (skill: 52%)
#40: Elo: 1017 (skill: 56%)
#45: Elo: 1016 (skill: 50%)
#50: Elo: 1006 (skill: 40%)
#55: Elo: 983 (skill: 50%)
#60: Elo: 974 (skill: 42%)
#65: Elo: 970 (skill: 41%)
#70: Elo: 954 (skill: 44%)
#75: Elo: 947 (skill: 47%)
#80: Elo: 936 (skill: 40%)
#85: Elo: 927 (skill: 48%)
#90: Elo: 912 (skill: 52%)
#95: Elo: 896 (skill: 35%)
#100: Elo: 788 (skill: 22%)

We can see that Elo quite nicely correlates with the player's real skill.


Elon Mu$k

Elon Musk is an enormous capitalist dick.


Musk's company Neuralink killed 1500 animals in 4 years, was charged with animal cruelty (sauce).



Euler's Number

Euler's number (not to be confused with Euler number), or e, is an extremely important and one of the most fundamental numbers in mathematics, approximately equal to 2.72, and is almost as famous as pi. It appears very often in mathematics and nature, it is the base of the natural logarithm, its digits after the decimal point go on forever without showing a simple pattern (just as those of pi), and it has many more interesting properties.

It can be defined in several ways:

e to 100 decimal digits is:


e to 100 binary digits is:


Just as pi, e is a real transcendental number (it is not a root of any polynomial equation with rational coefficients) which also means it is an irrational number (it cannot be expressed as a fraction of integers). It is also not known whether e is a normal number, which would mean its digits contain all possible finite strings, but it is conjectured to be so.




English

"English Motherfucker, do you speak it?"

English is a natural human language spoken mainly in the USA, UK and Australia as well as in dozens of other countries and in all parts of the world. It is the default language of the world. It is a pretty simple and suckless language (even though not as suckless as Esperanto), even a braindead person can learn it { Knowing Czech and learning Spanish, which is considered one of the easier languages, I can say English is orders of magnitude simpler. ~drummyfish }. It is the lingua franca of the tech world and many other worldwide communities. Thanks to its simplicity (lack of declension, fixed word order etc.) it is pretty suitable for computer analysis and as a basis for programming languages.

If you haven't noticed, this wiki is written in English.



Entrepreneur

Entrepreneur is an individual practicing legal slavery and legal theft under capitalism; capitalists describe those actions by euphemisms such as "doing business". Successful entrepreneurs can also be seen as murderers as they consciously firstly hoard resources that poor people lack (including basic resources needed for living) and secondly cause and perpetuate situations such as the third world slavery where people die on a daily basis performing extremely difficult, dangerous and low paid work, so that the entrepreneur can buy his ass yet another private jet.


Esoteric Programming Language

So called esoteric programming languages (esolangs) are highly experimental and fun programming languages that employ bizarre ideas. Popular languages of this kind include Brainfuck, Chef or Omgrofl.

There is a wiki for esolangs, the Esolang Wiki. If you want to behold esolangs in all their beauty, see https://esolangs.org/wiki/Hello_world_program_in_esoteric_languages_(nonalphabetic_and_A-M). The Wiki is published under CC0!

Some notable ideas employed by esolangs are:

Esolangs are great because:


INTERCAL, made in 1972 by Donald Woods and James Lyon, is considered the first esolang in history: it was specifically designed to be different from traditional languages, and so for example a level of politeness was introduced -- if there weren't enough PLEASE labels in the source code, the compiler wouldn't compile the program.

In 2005 the Esolang Wiki was started.

Specific Languages

The following is a list of some notable esoteric languages.


Everyone Does It

"Everyone does it" is an argument quite often used by simps to justify their unjustifiable actions. It is often used alongside the "just doing my job" argument.

The argument has a valid use, however it is rarely used in the valid way. We humans, as well as other higher organisms, have evolved to mimic the behavior of others because such behavior is tried and tested: others have tested it for us (for example eating a certain plant that might potentially be poisonous) and have survived it, therefore it is likely safe for us to do as well. So we have to realize that "everyone does it" is an argument for safety, not for morality. But people nowadays mostly use the argument as an excuse for their immoral behavior, i.e. something that's supposed to make the bad things they do "not bad" because "if it was bad, others wouldn't be doing it". That's of course wrong: people do bad things and the argument "everyone does it" helps people do them, for example during the Nazi holocaust this excuse partially enabled some of the greatest atrocities in history. Nowadays during capitalism it is used to excuse taking part in unethical practices, e.g. those of corporations.

So if you tell someone "You shouldn't do this because it's bad" and he replies "Well, everyone does it", he's really (usually) saying "I know it's bad but it's safe for me to do".

The effect is of course abused by politicians: once you get a certain number of people moving in a certain shared direction, others will follow just by the need to mimic others. Note that just creating an illusion (using the tricks of marketing) of "everyone doing something" is enough -- that's why you see 150 year old grannies in ads using modern smartphones -- it's to force old people into thinking that other old people are using smartphones so they have to do it as well.

Another potentially valid use of the argument is in the meaning of "everyone does it so I am FORCED to do it as well". For example an employer could argue "I have to abuse my employees otherwise I'll lose the edge on the market and will be defeated by those who continue to abuse their employees". This is very true but it seems like many people don't see or intend this meaning.



Evil

Evil always wins in the end.



Exercises

Here let be listed exercises for the readers of the wiki. Allow yourself as many helpers and resources as still keep the problems challenging: with each problem you should either find out you know the solution or learn something new while solving it.

Problems in each category should follow from easiest to most difficult. The listed solutions may not be the only possible solutions, just one of them.

General Knowledge

  1. What is the difference between free software and open source?


  1. The free software and open source movements are technically very similar but extremely different in spirit, i.e. while most free software licenses are also open source and vice versa (with small exceptions such as CC0), free software is fundamentally about pursuing user freedom and ethics while open source is a later capitalist fork of free software that removes talk about ethics, aims to exploit free licenses for the benefit of business and is therefore unethical.


  1. Write a C program that prints out all prime numbers under 1000 as well as the total count of these prime numbers.



// Sieve of Eratosthenes algorithm, one possible way to generate prime numbers
#include <stdio.h>
#define N 1000

char primeMap[N];

int main(void)
{
  int primeCount = 0;

  for (int i = 0; i < N; ++i)
    primeMap[i] = 1;

  primeMap[0] = 0; // 0 and 1 aren't primes
  primeMap[1] = 0;

  for (int i = 2; i < N; ++i)
    if (primeMap[i])
    {
      primeCount++;
      printf("%d\n",i);

      for (int j = 2 * i; j < N; j += i) // mark all multiples of i non-primes
        primeMap[j] = 0; // can't be a prime
    }

  printf("prime count under %d: %d\n",N,primeCount);

  return 0;
}



Free To Play

Free to play (F2P) is a "business model" of predatory proprietary games that's based on the same idea as giving children free candy so that they get into your van so that you can rape them.



"Facebook has no users, it only has useds." --rms




Faggot is a synonym for gay.


Type A/B Fail

Type A and type B fails are two very common cases of failing to adhere to the LRS politics/philosophy by only a small margin. Most people don't come even close to LRS politically or by their life philosophy -- these are simply general failures. Then there are a few who ALMOST adhere to LRS politics and philosophy but fail in an important point, either by being/supporting pseudoleft (type A fail) or being/supporting right (type B fail). The typical cases are the following (specific cases may not fully fit these, of course):

Type A/B fails are the "great filter" of the rare kind of people who show a great potential for adhering to LRS. It may be due to the modern western culture forcing a right-pseudoleft false dichotomy that even those showing a high degree of non-conformance eventually slip into the trap of being caught by one of the two poles. These two fails seem to be a manifestation of an individual's true motive of self interest which is culturally fueled with great force -- those individuals then try to not conform and support non-mainstream concepts like free culture or sucklessness, but eventually only with the goal of self interest. It seems to be extremely difficult to abandon this goal, much more difficult than simply non-conforming. Maybe it's also the subconscious knowledge that adhering completely to LRS means an extreme loneliness; being a type A/B fail means being a part of a minority, but still having a supportive community, not being completely alone.

However these kinds of people may also pose a hope: if we could educate them and "fix their failure", the LRS community could grow rapidly. If realized, this step could even be seen as the main contribution of LRS -- uniting the misguided rightists and pseudoleftists by pointing out errors in their philosophies (errors that may largely be intentionally forced by the system anyway exactly to create the hostility between the non-conforming, as a means of protecting the system).

                .'  '.                  
               /      \                   drummyfish
            _.'        '._                    |
___....---''              ''---....___________v___
                               |           |
             normies           |    A/B    | LRS
               FAIL            |    fail   |


Fantasy Console

Fantasy console, also fantasy computer, is a software platform intended mainly for creating and playing simple games, which imitates parameters, simplicity and look and feel of classic retro consoles such as GameBoy. These consoles are called fantasy because they are not emulators of already existing hardware consoles but rather "dreamed up" platforms, virtual machines made purely in software with artificially added restrictions that a real hardware console might have. These restrictions limit for example the resolution and color depth of the display, number of buttons and sometimes also computational resources.

The motivation behind creating fantasy consoles is normally twofold: firstly the enjoyment of retro games and retro programming, and secondly the immense advantages of simplicity. It is much faster and easier to create a simple game than a full-fledged PC game, which attracts many programmers; simple programming is also more enjoyable (fewer bugs and headaches) and simple games have many nice properties such as small size (playability over web), easy embedding or enabling emulator-like features.

Fantasy consoles usually include some kind of simple IDE; a typical mainstream fantasy console both runs and is programmed in a web browser so as to be accessible to normies. They also use some kind of easy scripting language for game programming, e.g. Lua. Even though the games are simple, the code of such a mainstream console is normally bloat, i.e. we are talking about pseudominimalism. Nevertheless some consoles, such as SAF, are truly suckless, free and highly portable (it's not a coincidence that SAF is an official LRS project).

Notable Fantasy Consoles

The following are a few notable fantasy consoles.

name          license        game lang.     parameters    comment
CToy          zlib           C              128x128       suckless
LIKO-12       MIT            Lua            192x128
PICO-8        propr.         Lua            128x128 4b    likely most famous
PixelVision8  MS-PL (FOSS)   Lua            256x240       written in C#
Pyxel         MIT            Python         256x256 4b
SAF           CC0            C              64x64 8b      LRS, suckless
TIC-80        MIT            Lua, JS, ...   240x136 4b    paid "pro" version
Uxn           MIT            Tal                          very minimal

See Also


Frequently Asked Questions

Not to be confused with fuck or frequently questioned answers.

{ answers by ~drummyfish }

Is this a joke? Are you trolling?

No. Jokes are here.

What the fuck?

See WTF.

How does LRS differ from suckless, KISS, free software and similar types of software?

Sometimes these sets may greatly overlap and LRS is at times just a slightly different angle of looking at the same things, but in short LRS cherry-picks the best of other things and is much greater in scope (it focuses on the big picture of the whole society). I have invented LRS as my own take on suckless software and then expanded its scope to encompass not just technology but the whole society -- as I cannot speak on behalf of the whole suckless community (and sometimes disagree with them a lot), I have created my own "fork" and simply set my own definitions without worrying about misinterpreting, misquoting or contradicting someone else. LRS advocates very similar technology to that advocated by suckless, but it furthermore has its specific ideas and areas of focus. The main point is that LRS is derived from an unconditional love of all life rather than some shallow idea such as "productivity". In practice this leads to such things as a high stress put on public domain and legal safety, altruism, selflessness, anti-capitalism, accepting software such as games as a desirable type of software, NOT subscribing to the productivity cult, a different view on privacy, cryptocurrencies etc. While suckless is apolitical and its scope is mostly limited to software, LRS speaks not just about technology but about the whole society -- there are two main parts of LRS: less retarded software and less retarded society.

One way to see LRS is as a philosophy that takes only the good out of existing philosophies/movements/ideologies/etc. and adds them to a single unique idealist mix, without including cancer, bullshit, errors, propaganda and other negative phenomena plaguing basically all existing philosophies/movements/ideologies/etc.

Why this obsession with extreme simplicity? Is it because you're too stupid to understand complex stuff?

I used to be a mainstream, complexity-embracing programmer. I am in no way saying I'm a genius but I've put a lot of energy into studying computer science full time for many years so I believe I can say I have some understanding of the "complex" stuff. I speak from my own experience and also on behalf of others who shared their experience with me that the appreciation of simplicity and the realization of its necessity come after many years of dealing with the complex and deep insight into the field and into the complex connections of that field to society.

You may ask: well then, why is it just you and a few weirdos who see this, why don't most good programmers share your opinions? Because they need to make a living or because they simply WANT to make a lot of money and so they do what the system wants them to do. Education in technology (and generally just being exposed to corporate propaganda since birth) is kind of a trap: it teaches you to embrace complexity and when you realize it's not a good thing, it is too late, you already need to pay your student loan, your rent, your mortgage, and the only thing they want you to do is to keep this complexity cult rolling. So people just do what they need to do and many of them just psychologically make themselves believe something they subconsciously know isn't right because that makes their everyday life easier to live. "Everyone does it so it can't be bad, better not even bother thinking about it too much". It's difficult doing something every day that you think is wrong, so you make yourself believe it's right.

It's not that we can't understand the complex. It is that the simpler things we deal with, the more powerful things we can create out of them as the overhead of the accumulated complexity isn't burdening us so much.

Simplicity is crucial not only for the quality of technology, i.e. for example its safety and efficiency, but also for its freedom. The more complex technology becomes, the fewer people can control it. If technology is to serve all people, it has to be simple enough so that as many people as possible can understand it, maintain it, fix it, customize it, improve it. It's not just about being able to understand a complex program, it's also about how much time and energy it takes because time is a price not everyone can afford, even if they have the knowledge of programming. Even if you yourself cannot program, if you are using a simple program and it breaks, you can easily find someone with a basic knowledge of programming who can fix it, unlike with a very complex program whose fix will require a corporation.

Going for the simple technology doesn't necessarily have to mean we have to give up the "nice things" such as computer games or 3D graphics. Many things, such as responsiveness and customizability of programs, would improve. Even if the results won't be so shiny, we can recreate much of what we are used to in a much simpler way. You may now ask: why don't companies do things simply if they can? Because complexity benefits them in creating de facto monopolies, as mentioned above, by reducing the number of people who can tinker with their creations. And also because capitalism pushes towards making things quickly rather than well -- and yes, even non commercial "FOSS" programs are pushed towards this, they still compete and imitate the commercial programs. Already now you can see how technology and society are intertwined in complex ways that all need to be understood before one comes to realize the necessity of simplicity.

How would your ideal society work? Isn't it utopia?

See the article on less retarded society, it contains a detailed FAQ especially on that.

Why the name "less retarded"? If you say you're serious about this, why not a more serious name?

I don't know, this is not so easy to answer because I came up with the name back when the project was smaller in scope and I didn't think about a name too hard: this name was playful, catchy, politically incorrect (keeping SJWs away) and had a kind of reference to suckless, potentially attracting attention of suckless fans. It also has the nice property of being unique, with low probability of name collision with some other existing project, as not many people will want to have the word "retarded" in the name. Overall the name captures the spirit of the philosophy and is very general, allowing it to be applied to new areas without being limited to certain means etc.

Now that the project has evolved a bit the name actually seems to have been a great choice and I'm pretty happy about it, not just for the above mentioned reasons but also because it is NOT some generic boring name that politicians, PR people and other tryhard populists would come up with. In a way it's trying to stimulate thought and make you think (if only by making you ask WHY anyone would choose such a name). Yes, in a way it's a small protest and showing we stay away from the rotten mainstream, but it's definitely NOT an attempt at catching attention at any cost or trying to look like cool rebels -- such mentality goes against our basic principles. Perhaps the greatest reason for the name is to serve as a test -- truth should prevail no matter what name it is given and we try to test and prove this, or rather maybe prevent succeeding for the wrong reasons -- we are not interested in success (which is what mere politicians do); if our ideas are to become accepted, they have to be accepted for the right reasons. And if you refuse to accept truth because you don't like its name, you are retarded and by your own ignorance doom yourself to live in a shit society with shit technology.

Who writes this wiki? Can I contribute?

You can only contribute to this wiki if you're a straight white male. Just kidding, you can't contribute even if you're a straight white male.

At the moment it's just me, drummyfish. This started as a collaborative wiki named Based Wiki but after some disagreements I forked it (everything was practically written by me at that point) and made it my own wiki where I don't have to make any compromises or respect anyone else's opinions. I'm not opposed to the idea of collaboration but I bet we disagree on something in which case I probably don't want to let you edit this. I also resist allowing contributions because with multiple authors the chance of legal complications grows, even if the work is under a free license or waiver (refer to e.g. the situation where some Linux developers were threatening to withdraw the license to their code contributions). But you can totally fork this wiki, it's public domain.

If you want to contribute to the cause, just create your own website, spread the ideas you liked here -- you may or may not refer to LRS, everything's up to you. Start creating software with LRS philosophy if you can -- together we can help evolve and spread our ideas in a decentralized way, without me or anyone else being an authority, a potential censor. That's the best way forward I think.

Why is it called a wiki when it's written just by one guy? Is it to deceive people into thinking there's a whole movement rather than just one weirdo?


No, of course not you dumbo. There is no intention of deception, this project started as a collaborative wiki with multiple contributors, named Based Wiki, however I (drummyfish) forked my contributions (most of the original Wiki) into my own Wiki and renamed it to Less Retarded Wiki because I didn't like the direction of the original wiki. At that point I was still allowing and looking for more contributors, but somehow none of the original people came to contribute and meanwhile I've expanded my LRS Wiki to the point at which I realized it's simply a snapshot of my own views, and so I decided to keep it my own project and kept the name that I established, the LRS Wiki.

Even though at the moment it's missing the main feature of a wiki, i.e. collaboration of multiple people, it is still a project that most people would likely call a "wiki" naturally (even if only a personal one) due to having all the other features of wikis (separate articles linked via hypertext, non-linear structure etc.) and simply looking like a wiki -- nowadays there are many wikis that are mostly written by a single man (see e.g. small fandom wikis) and people still call them wikis because culturally the term has simply taken a wider meaning, people don't expect a wiki to absolutely necessarily be collaborative and so there is no deception. Additionally I am still open to the idea of possibly allowing contributions, so I'm simply keeping this a wiki, the wiki is in a sense waiting for a larger community to come. Finally the ideas I present here are not just mine but really do reflect existing movements/philosophies with significant numbers of supporters (suckless, free software, ...).

Since it is public domain, can I take this wiki and do anything with it? Even something you don't like, like sell it or rewrite it in a different way?

Yes, you can do anything... well, anything that's not otherwise illegal like falsely claiming authorship (copyright) of the original text. This is not because I care about being credited, I don't (you DON'T have to give me any credit), but because I care about this wiki not being owned by anyone. You can however claim copyright to anything you add to the wiki if you fork it, as that's your original creation.

Why not keep politics out of this Wiki and make it purely about technology?

Firstly for us technological progress is secondary to the primary type of progress in society: the social progress. The goal of our civilization is to provide good conditions for life -- this is social progress and mankind's main goal. Technological progress only serves to achieve this, so technological progress follows from the goals of social progress. So, to define technology we have to first know what it should help achieve in society. And for that we need to talk politics.

Secondly examining any existing subject in depth requires also understanding its context anyway. Politics and technology nowadays are very much intertwined and the politics of a society ultimately significantly affects what its technology looks like (capitalist SW, censorship, bloat, spyware, DRM, ...), what goals it serves (consumerism, productivity, control, war, peace, ...) and how it is developed (COCs, free software, ...), so studying technology ultimately requires understanding politics around it. I hate arguing about politics, sometimes it literally makes me suicidal, but it is inevitable, we have to specify real-life goals clearly if we're to create good technology. Political goals guide us in making important design decisions about features, tradeoffs and other attributes of technology.

Of course you can fork this wiki and try to remove politics from it, but I think it won't be possible to just keep the technology part alone so that it would still make sense, most things will be left without justification and explanation.

What is the political direction of LRS then?

In three words basically anarcho pacifist communism, however the word culture may be more appropriate than "politics" here as we aim for removing traditional systems of government based on power and enforcing complex laws, there shall be no politicians in today's sense in our society. For more details see the article about LRS itself.

Why do you blame everything on capitalism when most of the issues you talk about, like propaganda, surveillance, exploitation of the poor and general abuse of power, appeared also under practically any other system we've seen in history?

This is a good point, we talk about capitalism simply because it is the system of today's world and an immediate threat that needs to be addressed, however we always try to stress that the root issue lies deeper: it is competition that we see as causing all major evil. Competition between people is what always caused the main issues of a society, no matter whether the system at the time was called capitalism, feudalism or pseudosocialism. While historically competition and conflict between people was mostly forced by nature, nowadays we've conquered technology to a degree at which we could practically eliminate competition, however we choose to artificially preserve it via capitalism, the glorification of competition, and we see this as an extremely wrong direction, hence we put stress on opposing capitalism, i.e. the artificial prolonging of competition.

How is this different from Wikipedia?

In many ways. Our wiki is better e.g. by being more free (completely public domain, no fair use proprietary images etc.), less bloated, more accessible, not infected by pseudoleftist fascism and censorship (we only censor absolutely necessary things, e.g. copyrighted things or things that would immediately put us in jail, though we still say many things that may get us in jail), we have articles that are more readable etc.

WTF I am offended, is this a nazi site? Are you racist/Xphobic? Do you love Hitler?!?!

We're not fascists, we're in fact the exact opposite: our aim is to create technology that benefits everyone equally without any discrimination. I (drummyfish) am personally a pacifist anarchist, I love all living beings and believe in absolute social equality of all life forms. We invite and welcome everyone here, be it gays, communists, rightists, trannies, pedophiles or murderers, we love everyone equally, even you and Hitler.

Note that the fact that we love someone (e.g. Hitler) does NOT mean we embrace his ideas (e.g. Nazism) or even that we e.g. like the way he looks. You may hear us say someone is a stupid ugly fascist, but even such individuals are living beings we love.

What we do NOT engage in is political correctness, censorship, offended culture, identity politics and pseudoleftism. We do NOT support fascist groups such as feminists and LGBT and we will NOT practice bullying and codes of conducts. We do not pretend there aren't any differences between people and we will make jokes that make you feel offended.

Why do you use the nigger word so much?

To counter its censorship, we mustn't be afraid of words. The more they censor something, the more I am going to uncensor it. They have to learn that the only way to make me not say that word so often is to stop censoring it, so to their action of censorship I produce a reaction they dislike. That's basically how you train a dog. (Please don't ask who "they" are, it's pretty obvious).

It also has the nice side effect of making this less likely to be used by corporations and SJWs.

How can you say you love all living beings and use offensive language at the same time?

The culture of being offended is bullshit, it is a pseudoleftist (fascist) invention that serves as a weapon to justify censorship, canceling and bullying of people. Since I love all people, I don't support any weapons against anyone (not even against people I dislike or disagree with). People are offended by language because they're taught to be offended by it by the propaganda, I am helping them unlearn it.

But how can you so pretentiously preach "absolute love" and then say you hate capitalists, fascists, bloat etc.?

OK, firstly we do NOT love everything, we do NOT advocate against hate itself, only against hate of living beings (note we say we love everyone, not everything). Hating other things than living beings, such as some bad ideas or malicious objects, is totally acceptable, there's no problem with it. We in fact think hate of some concepts is necessary for finding better ways.

Now when it comes to "hating" people, there's an important distinction to be stressed: we never hate a living being as such, we may only hate their properties. So when we say we hate someone, it's merely a matter of language convenience -- saying we hate someone never means we hate a person as such, but only some thing about that person, for example his opinions, his work, actions, behavior or even appearance. I can hear you ask: what's the difference? The difference is we'll never try to eliminate a living being or cause it suffering because we love it, we may only try to change, in non-violent ways, their attributes we find wrong (which we hate): for example we may try to educate the person, point out errors in his arguments, give him advice, and if that doesn't work we may simply choose to avoid his presence. But we will never target hate against him.

And yeah, of course sometimes we make jokes and sarcastic comments, we rely on your ability to recognize those yourself. We see it as retarded and a great insult to intelligence to put disclaimers on jokes, that's really the worst thing you can do to a joke.

So you really "love" everyone, even dicks like Trump, school shooters, instagram influencers etc.?

Yes, but it may need an elaboration. There are many different kinds of love: love of a sexual partner, love of a parent, love of a pet, love of a hobby, love of nature etc. Obviously we can't love everyone with the same kind of love we have e.g. for our life partner, that's impossible if we've actually never even seen most people who live on this planet. The love we are talking about -- our universal love of everyone -- is an unconditional love of life itself. Being alive is a miracle, it's beautiful, and as living beings we feel a sense of connection with all other living beings in this universe who were for some reason chosen to experience this rare miracle as well -- we know what it feels like to live and we know other living beings experience this special, mysterious privilege too, though for a limited time. This is the most basic kind of love, an empathy, the happiness of seeing someone else live. It is sacred, there's nothing more pure in this universe than feeling this empathy, it works without language, without science, without explanation. While not all living beings are capable of this love (a virus probably won't feel any empathy), we believe all humans have this love in them, even if it's being suppressed by their environment that often forces them compete, hate, even kill. Our goal is to awaken this love in everyone as we believe it's the only way to achieve a truly happy coexistence of us, living beings.

I dislike this wiki, our teacher taught us that global variables are bad and that OOP is good.

This is not a question you dummy. Have you even read the title of this page? Anyway, your teacher is stupid, he is, very likely unknowingly, just spreading the capitalist propaganda. He probably believes what he's saying but he's wrong.

Lol you've got this fact wrong and you misunderstand this and this topic, you've got bugs in code, your writing sucks etc. How dare you write about things you have no clue about?

I want a public domain encyclopedia that includes topics of new technology, and also one which doesn't literally make me want to kill myself due to inserted propaganda of evil etc. Since this supposedly modern society failed to produce even a single such encyclopedia and since every idiot on this planet wants to keep his copyright on everything he writes, I am forced to write the encyclopedia myself, even at the price of making mistakes. No, US public domain doesn't count as world wide public domain. Even without copyright there are still so called moral rights etc. Blame this society for not allowing even a tiny bit of information to slip into public domain. Writing my own encyclopedia is literally the best I can do in the situation I am in. Nothing is perfect, I still believe this can be helpful to someone. You shouldn't take facts from a random website for granted. If you wanna help me correct errors, email me.

How can you use CC0 if you, as anarchists, reject laws and intellectual property?

We use it to remove law from our project, it's kind of like using a weapon to destroy itself. Using a license such as GFDL would mean we're keeping our copyright and are willing to enforce intellectual property laws, however using a CC0 waiver means we GIVE UP all lawful exclusive rights that have been forced on us. This has no negative effects: if law applies, then we use it to remove itself, and if it doesn't, then nothing happens. To those who acknowledge the fact that adapting proprietary information can lead to being bullied by the state we give a guarantee this won't happen, and others simply don't have to care.

A simple analogy is this: the law is so fucked up nowadays that it forces us to point a gun at everyone by default when we create something. It's as if they literally put a gun in our hand and force us to point it at someone. We decide to drop that weapon, not merely promise to not shoot.

What software does this wiki use?

Git, the articles are written in markdown and converted to HTML with a simple script.

I don't want my name associated with this, can you remove a reference to myself or my software from your wiki?


Are you the only one in the world who is not affected by propaganda?

It definitely seems so.

How does it feel to be the only one on this planet to see the undistorted truth of reality?

Pretty lonely and depressing.

Are you a crank?

Depending on exact definition the answer is either "no" or "yes and it's a good thing".

Are you retarded?

:( Maybe, but even stupid people can sometimes have smart ideas.



Fascist groups are subgroups of society that strongly pursue self interest to the detriment of others (those who are not part of said group). Fascism is a rightist, competitive tendency; fascists aim to make themselves as strong, as powerful and as rich as possible, i.e. to weaken and possibly eliminate competing groups, to have power over them, enslave them and to seize their resources. The means of their operation are almost exclusively evil, including violence, bullying, wars, propaganda, eye for an eye, slavery etc.

A few examples of fascist groups are corporations, nations, NSDAP (Nazis), LGBT, feminists, Antifa, KKK, Marxists and, of course, the infamous Italian fascist party of Benito Mussolini.

Fascism is always bad and we have to aim towards eliminating it (that is eliminating fascism, NOT fascists -- fascists are people and living beings to whom we wish no harm). However here comes a great warning: in eliminating fascism be extremely careful to not become a fascist yourself. We purposefully do NOT advise to fight fascism as fighting implies violence, the tool of fascism. Elimination of fascism has to be done in a non-violent way. Sadly, generation after generation keeps repeating the same mistake over and over: they keep opposing fascism by fascist means, eventually taking the oppressor's place and becoming the new oppressor, only to again be dethroned by the new generation. This has happened e.g. with feminism and other pseudoleftist movements. This is an endless cycle of stupidity but, more importantly, endless suffering of people. This cycle needs to be ended. We must choose not the easy way of violence, but the difficult way of non-violent rejection which includes loving the enemy as we love ourselves. Fascism is all about loving one's own group while hating the enemy groups -- if we can achieve loving all groups of people, even fascists themselves, fascism will have been by definition eliminated.

Fear is the fuel of fascism. When fear of an individual reaches a certain level -- which is different for everyone -- he turns to fascism. Even one who is normally anti-fascist has a breaking point, under extreme pressure of fear one starts to seek purely selfish goals. This is why e.g. capitalism fuels fear culture: it makes people fascists which is a prerequisite for becoming a capitalist. When "leaders" of nations need to wage war, they start spreading propaganda of fear so as to turn people into fascists that easily become soldiers. This is why education is important in eliminating fascism: it is important to e.g. show that we need not be afraid of people of other cultures, of sharing information and resources etc. The bullshit of fear propaganda has to be exposed.



See fascism.



This article is a part of series of articles on fascism.

Feminism is a fascist terrorist pseudoleftist movement aiming to establish the female as the superior gender, to take social revenge on men and to gain political power, e.g. that over language. Similarly to LGBT, feminism is hugely violent, toxic and harmful, based on brainwashing, bullying (e.g. the metoo campaign) and propaganda.

If anything's clear, it's that feminism doesn't care about gender equality. Firstly it is not called a gender equality movement but feminism, i.e. for-female, and as we know, a name plays a huge role. Indeed, women have historically been oppressed and needed support, but once that support reaches equality -- which basically already happened a long time ago -- the feminist movement will, if only by social inertia, keep pursuing more advantages for women (what else should a movement called feminism do?), i.e. at this point the new goal has already become female superiority. Another proof is that feminists care about things such as the wage gap but absolutely don't give a damn about inequality in the opposite direction, such as men dying on average much younger than women. And of course, when men establish "men's rights" movements to address this, suddenly feminists see those as "fascist", "toxic" and "violent" and try to destroy such movements.

Part of the success of feminism is also capitalism -- women with privileges, e.g. those of not having to work as much as men, are not accepted under capitalism; everyone has to be exploited as much as possible, everyone has to be a work slave. Therefore capitalist propaganda promotes ideas such as "women not having to work is oppression by men and something a woman should be ashamed of", which is of course laughable, but with enough brainwashing anything can be established, even the most ridiculous and obvious bullshit.

Apparently in Korea feminists already practice segregation: they separate parking spots for men and women so as to prevent women from bumping into men or meeting a man late at night, because allegedly men are more aggressive and dangerous. Now this is pretty ridiculous, this is exactly the same as if they separated e.g. parking lots for black and white people because black people are statistically more aggressive and involved in crime, you wouldn't want to meet them at night. So, do we still want to pretend feminists are not fascist?



See woman.


Fight Culture

Fight culture is the harmful mindset of seeing any endeavor as a fight against something. Even such causes as aiming for establishment of peace are seen as fighting the people who are against peace, which is funny but also sad. Fight culture keeps alive, just by the constant repetition of the word fight, a subconscious validation of violence as a justified and necessary means for achieving any goal. Fight culture is to a great degree the culture of capitalist society (of course not exclusively), the environment of extreme competition and hostility.

We, of course, see fight culture as inherently undesirable for a good society as that needs to be based on peace, love and collaboration, not competition. For this reason we never say we "fight" anything, we rather aim for goals, look for solutions, educate and sometimes reject, refuse and oppose bad concepts (e.g. fight culture itself).



Firmware is a type of very basic software that's usually preinstalled on a device from factory and serves to provide the most essential functionality of the device. On simple devices, like mp3 players or remote controls, firmware may be all that's ever needed for the device's functioning, while on more complex ones, such as personal computers, firmware (e.g. BIOS or UEFI) allows basic configuration and installation of more complex software (such as an operating system) and possibly provides functions that the installed software can use. Firmware is normally not meant to be rewritten by the user and is installed in some kind of memory that's not very easy to rewrite, it may even be hard-wired in which case it becomes something on the very boundary of software and hardware.


Fixed Point

Fixed point arithmetic is a simple and often good enough method of computer representation of fractional numbers (i.e. numbers with higher precision than integers, e.g. 4.03), as opposed to floating point which is a more complicated way of doing this which in most cases we consider a worse, bloated alternative. Probably in 99% of cases when you think you need floating point, fixed point will do just fine.

Fixed point has at least these advantages over floating point:

- it is simple: no special libraries or hardware support are needed, a plain integer type does the job,
- it therefore works fast everywhere, even on the weakest hardware lacking a floating point unit,
- it behaves predictably and deterministically, numbers are spaced uniformly.

How It Works

Fixed point uses a fixed (hence the name) number of digits (bits in binary) for the integer part and the rest for the fractional part (whereas floating point's fractional part varies in size). I.e. we split the binary representation of the number into two parts (integer and fractional) by IMAGINING a radix point at some place in the binary representation. That's basically it. Fixed point therefore spaces numbers uniformly, as opposed to floating point whose spacing of numbers is non-uniform.

So, we can just use an integer data type as a fixed point data type, there is no need for libraries or special hardware support. We can also perform operations such as addition the same way as with integers. For example if we have a binary integer number represented as 00001001, 9 in decimal, we may say we'll be considering a radix point after let's say the sixth place, i.e. we get 000010.01 which we interpret as 2.25 (2^2 + 2^(-2)). The binary value we store in a variable is the same (as the radix point is only imagined), we only INTERPRET it differently.

We may look at it this way: we still use integers but we use them to count smaller fractions than 1. For example in a 3D game where our basic spatial unit is 1 meter our variables may rather contain the number of centimeters (however in practice we should use powers of two, so rather 1/128ths of a meter). In the example in the previous paragraph we count 1/4ths (we say our scaling factor is 1/4), so actually the number represented as 00000100 is what in floating point we'd write as 1.0 (00000100 is 4 and 4 * 1/4 = 1), while 00000001 means 0.25.

This has just one consequence: we have to normalize results of multiplication and division (addition and subtraction work just as with integers, we can normally use the + and - operators). I.e. when multiplying, we have to divide the result by the inverse of the fractions we're counting, i.e. by 4 in our case (1/(1/4) = 4). Similarly when dividing, we need to MULTIPLY the result by this number. This is because we are using fractions as our units and when we multiply two numbers in those units, the units multiply as well, i.e. in our case multiplying two numbers that count 1/4ths gives a result that counts 1/16ths, and we need to divide this by 4 to get the number of 1/4ths back again (this works the same as units in physics: multiplying a number of meters by a number of meters gives square meters). For example the following integer multiplication:

00001000 * 00000010 = 00010000 (8 * 2 = 16)

in our system has to be normalized like this:

(000010.00 * 000000.10) / 4 = 000001.00 (2.0 * 0.5 = 1.0)

SIDE NOTE: in practice you may see division replaced by the shift operator (instead of /4 you'll see >> 2).

With this normalization we also have to think about how to bracket expressions to prevent rounding errors and overflows, for example instead of (x / y) * 4 we may want to write (x * 4) / y; imagine e.g. x being 00000010 (0.5) and y being 00000100 (1.0), the former would result in 0 (incorrect, rounding error) while the latter correctly results in 0.5. The bracketing depends on what values you expect to be in the variables so it can't really be done automatically by a compiler or library (well, it might probably be somehow handled at runtime, but of course, that will be slower). There are also ways to prevent overflows e.g. with clever bit hacks.

The normalization is basically the only thing you have to think about, apart from this everything works as with integers. Remember that this all also works with negative numbers in two's complement, so you can use a signed integer type without any extra trouble.

Remember to always use a power of two scaling factor -- this is crucial for performance. I.e. you want to count 1/2nds, 1/4ths, 1/8ths etc., but NOT 1/10ths, as might be tempting. Why are powers of two good here? Because computers work in binary and so the normalization operations with powers of two (division and multiplication by the scaling factor) can easily be optimized by the compiler to a mere bit shift, an operation much faster than multiplication or division.

Code Example

For start let's compare basic arithmetic operations in C written with floating point and the same code written with fixed point. Consider the floating point code first:

float
  a = 21,
  b = 3.0 / 4.0,
  c = -10.0 / 3.0;
a = a * b;   // multiplication
a += c;      // addition
a /= b;      // division
a -= 10;     // subtraction
a /= 3;      // division

Equivalent code with fixed point may look as follows:

#define UNIT 1024    // our "1.0" value

int
  a = 21 * UNIT,
  b = (3 * UNIT) / 4,   // note the brackets, (3 / 4) * UNIT would give 0
  c = (-10 * UNIT) / 3;

a = (a * b) / UNIT;     // multiplication, we have to normalize
a += c;                 // addition, no normalization needed
a = (a * UNIT) / b;     // division, normalization needed, note the brackets
a -= 10 * UNIT;         // subtraction
a /= 3;                 // division by a number NOT in UNITs, no normalization needed
printf("%d.%d%d%d\n",   // writing a nice printing function is left as an exercise :)
  a / UNIT,
  ((a * 10) / UNIT) % 10,
  ((a * 100) / UNIT) % 10,
  ((a * 1000) / UNIT) % 10);

These examples output 2.185185 and 2.184, respectively.

Now consider another example: a simple C program using fixed point with 10 fractional bits, computing square roots of numbers from 0 to 10.

#include <stdio.h>

typedef int Fixed;

#define UNIT_FRACTIONS 1024 // 10 fractional bits, 2^10 = 1024

#define INT_TO_FIXED(x) ((x) * UNIT_FRACTIONS)

Fixed fixedSqrt(Fixed x)
{
  // stupid brute force square root

  int previousError = -1;

  for (int test = 0; test <= x; ++test)
  {
    int error = x - (test * test) / UNIT_FRACTIONS;

    if (error == 0)
      return test;
    else if (error < 0)
      error *= -1;

    if (previousError > 0 && error > previousError)
      return test - 1;

    previousError = error;
  }

  return 0;
}

void fixedPrint(Fixed x)
{
  printf("%d.%03d",x / UNIT_FRACTIONS,
    ((x % UNIT_FRACTIONS) * 1000) / UNIT_FRACTIONS);
}

int main(void)
{
  for (int i = 0; i <= 10; ++i)
  {
    printf("%d: ",i);
    fixedPrint(fixedSqrt(INT_TO_FIXED(i)));
    putchar('\n');
  }

  return 0;
}

The output is:

0: 0.000
1: 1.000
2: 1.414
3: 1.732
4: 2.000
5: 2.236
6: 2.449
7: 2.645
8: 2.828
9: 3.000
10: 3.162




FizzBuzz

A C solution to the classic FizzBuzz problem (print numbers 1 to 100, but print Fizz for multiples of 3, Buzz for multiples of 5 and FizzBuzz for multiples of both) may look like this:

#include <stdio.h>

int main(void)
{
  for (int i = 1; i <= 100; ++i)
    switch ((i % 3 == 0) + (i % 5 == 0) * 2)
    {
      case 1: printf("Fizz\n"); break;
      case 2: printf("Buzz\n"); break;
      case 3: printf("FizzBuzz\n"); break;
      default: printf("%d\n",i); break;
    }

  return 0;
}


Floating Point

Floating point arithmetic (normally just float) is a method of computer representation of fractional numbers and approximating real numbers, i.e. numbers with higher than integer precision (such as 5.13), which is more complex than e.g. fixed point. The core idea of it is to use a radix ("decimal") point that's not fixed but can move around so as to allow representation of both very small and very big values. Nowadays floating point is the standard way of approximating real numbers in computers (floating point types are called real in some programming languages, even though they represent only rational numbers, floats can't e.g. represent pi exactly), basically all of the popular programming languages have a floating point data type that adheres to the IEEE 754 standard, all personal computers also have the floating point hardware unit (FPU) and so it is widely used in all modern programs. However most of the time a simpler representation of fractional numbers, such as the mentioned fixed point, suffices, and weaker computers (e.g. embedded) may lack the hardware support so floating point operations are emulated in software and therefore slow -- for these reasons we consider floating point bloat and recommend the preference of fixed point.

Floating point is tricky, it works most of the time but a danger lies in programmers relying on this kind of magic too much, some new generation programmers may not even be very aware of how float works. Even though the principle is not so hard, the emergent behavior of the math is really complex. One floating point expression may evaluate differently on different systems, e.g. due to different rounding settings. One possible pitfall is working with big and small numbers at the same time -- due to differing precision at different scales small values simply get lost when mixed with big numbers and sometimes this has to be worked around with tricks (see e.g. this devlog of The Witness where a float time variable sent into a shader is periodically reset so as to not grow too large and cause the mentioned issue). Another famous trickiness of float is that you shouldn't really compare floats for equality with a normal == operator as small rounding errors may make even mathematically equal expressions unequal (i.e. you should use some range comparison instead).

And there is more: floating point behavior really depends on the language you're using (and possibly even compiler, its setting etc.) and it may not be always completely defined, leading to possible nondeterministic behavior which can cause real trouble e.g. in physics engines.

{ Really as I'm now getting down the float rabbit hole I'm seeing what a huge mess it all is, I'm not nearly an expert on this so maybe I've written some BS here, which just confirms how messy floats are. Anyway, from the articles I'm reading even being an expert on this issue doesn't seem to guarantee a complete understanding of it :) Just avoid floats if you can. ~drummyfish }

Is floating point literally evil? Well, of course not, but it is extremely overused. You may need it for precise scientific simulations, e.g. numerical integration, but as our small3dlib shows, you can comfortably do even 3D rendering without it. So always consider whether you REALLY need float. You mostly do not.

How It Works

The very basic idea is following: we have digits in memory and in addition we have a position of the radix point among these digits, i.e. both digits and position of the radix point can change. The fact that the radix point can move is reflected in the name floating point. In the end any number stored in float can be written with a finite number of digits with a radix point, e.g. 12.34. Notice that any such number can also always be written as a simple fraction of two integers (e.g. 12.34 = 1 * 10 + 2 * 1 + 3 * 1/10 + 4 * 1/100 = 617/50), i.e. any such number is always a rational number. This is why we say that floats represent fractional numbers and not true real numbers (real numbers such as pi, e or square root of 2 can only be approximated).

More precisely floats represent numbers by representing two main parts: the actual encoded digits, called mantissa (or significand etc.), and the position of the radix point. The position of the radix point is called the exponent because mathematically the floating point works similarly to the scientific notation of extreme numbers that uses exponentiation. For example instead of writing 0.0000123 scientists write 123 * 10^-7 -- here 123 would be the mantissa and -7 the exponent.

Though various numeric bases can be used, in computers we normally use base 2, so let's consider it from now on. So our numbers will be of format:

mantissa * 2^exponent

Note that besides mantissa and exponent there may also be other parts, typically there is also a sign bit that says whether the number is positive or negative.

Let's now consider an extremely simple floating point format based on the above. Keep in mind this is an EXTREMELY NAIVE inefficient format that wastes values. We won't consider negative numbers. We will use 6 bits for our numbers:

- the leftmost (highest) 3 bits are the mantissa,
- the rightmost (lowest) 3 bits are the exponent in two's complement (i.e. it can range from -4 to 3).

So for example the binary representation 110011 stores mantissa 110 (6) and exponent 011 (3), so the number it represents is 6 * 2^3 = 48. Similarly 001101 represents 1 * 2^-3 = 1/8 = 0.125.

Note a few things: firstly our format is shit because some numbers have multiple representations, e.g. 0 can be represented as 000000, 000001, 000010, 000011 etc., in fact we have 8 zeros! That's unforgivable and formats used in practice address this (usually by prepending an implicit 1 to mantissa).

Secondly notice the non-uniform distribution of our numbers: we have a nice resolution close to 0 (we can represent 1/16, 2/16, 3/16, ...) but low resolution in higher numbers (the highest number we can represent is 56 but the second highest is 48, we can NOT represent e.g. 50 exactly). Realize that obviously with 6 bits we can still represent only 64 numbers at most! So float is NOT a magical way to get more numbers; with integers on 6 bits we can represent numbers from 0 to 63 spaced exactly by 1, while with our floating point we can represent numbers spaced as closely as 1/16th but only in the region near 0, and we pay the price of having big gaps in higher numbers.

Also notice that things like simple addition of numbers become more difficult and time consuming, you have to include conversions and rounding -- while with fixed point addition is a single machine instruction, same as integer addition, here with a software implementation we might end up with dozens of instructions (specialized hardware can perform addition fast but still, not all computers have that hardware).

Rounding errors will appear and accumulate during computations: imagine the operation 48 + 1/8. Both numbers can be represented in our system but not the result (48.125). We have to round the result and end up with 48 again. Imagine you perform 64 such additions in succession (e.g. in a loop): mathematically the result should be 48 + 64 * 1/8 = 56, which is a result we can represent in our system, but we will nevertheless get the wrong result (48) due to rounding errors in each addition. So the behavior of float can be non intuitive and dangerous, at least for those who don't know how it works.

Standard Float Format: IEEE 754

IEEE 754 is THE standard that basically all computers use for floating point nowadays -- it specifies the exact representation of floating point numbers as well as rounding rules, required operations applications should implement etc. However note that the standard is kind of shitty -- even if we want to use floating point numbers there exist better ways such as posits that outperform this standard. Nevertheless IEEE 754 has been established in the industry to the point that it's unlikely to go away anytime soon. So it's good to know how it works.

Numbers in this standard are signed, have positive and negative zero (oops), can represent plus and minus infinity and different NaNs (not a number). In fact there are thousands to billions of different NaNs which are basically wasted values. These inefficiencies are addressed by the mentioned posits.

Briefly the representation is following (hold on to your chair): the leftmost bit is the sign bit, then the exponent follows (the number of bits depends on the specific format), the rest of the bits is the mantissa. In the mantissa an implicit 1. is considered (except when the exponent is all 0s), i.e. we "imagine" 1. in front of the mantissa bits but this 1 is not physically stored. Exponent is in so called biased format, i.e. we have to subtract half (rounded down) of the maximum possible value to get the real value (e.g. if we have 8 bits for exponent and the directly stored value is 120, we have to subtract 255 / 2 = 127 to get the real exponent value, in this case we get -7). However two values of exponent have special meaning; all 0s signify a so called denormalized (also subnormal) number in which we consider the exponent to be that which is otherwise lowest possible (e.g. -126 in case of 8 bit exponent) but we do NOT consider the implicit 1 in front of mantissa (we instead consider 0.), i.e. this allows storing zero (positive and negative) and very small numbers. All 1s in exponent signify either infinity (positive and negative) in case the mantissa is all 0s, or a NaN otherwise -- considering here we have the whole mantissa plus sign bit unused, we actually have many different NaNs (WTF), but usually we only distinguish two kinds of NaNs: quiet (qNaN) and signaling (sNaN, throws an exception) that are distinguished by the leftmost bit in mantissa (1 for qNaN, 0 for sNaN).

The standard specifies many formats that are either binary or decimal and use various numbers of bits. The most relevant ones are the following:

name M bits E bits smallest and biggest number precision <= 1 up to
binary16 (half precision) 10 5 2^(-24), 65504 2048
binary32 (single precision, float) 23 8 2^(-149), 2^127 * (2 - 2^-23) ~= 3 * 10^38 16777216
binary64 (double precision, double) 52 11 2^(-1074), ~10^308 9007199254740992
binary128 (quadruple precision) 112 15 2^(-16494), ~10^4932 ~10^34

Example? Let's say we have float (binary32) value 11000000111100000000000000000000: first bit (sign) is 1 so the number is negative. Then we have 8 bits of exponent: 10000001 (129) which converted from the biased format (subtracting 127) gives exponent value of 2. Then mantissa bits follow: 11100000000000000000000. As we're dealing with a normal number (exponent bits are neither all 1s nor all 0s), we have to imagine the implicit 1. in front of mantissa, i.e. our actual mantissa is 1.11100000000000000000000 = 1.875. The final number is therefore -1 * 1.875 * 2^2 = -7.5.

See Also



FLOSS (free libre and open source) is basically FOSS.



Not to be confused with any American pseudosport.

Football is one of the most famous sport games in which two teams face each other and try to score goals by kicking an inflated ball. It is one of the best sports not only because it is genuinely fun to play and watch but also because of its essentially simple rules, accessibility (not for rich only, all that's really needed is something resembling a ball) and relatively low discrimination -- basically anyone can play it, unlike for example basketball in which height is key; in amateur football even fat people can take part (they are usually assigned the role of a goalkeeper). Idiots call football soccer.

We, LRS, highly value football, as it's a very KISS sport that can be played by anyone anywhere without needing expensive equipment. It is the sport of the people, very popular in poor parts of the world.

Football can be implemented as a video game or inspire a game mode -- this has been done e.g. in Xonotic (the Nexball mode) or SuperTuxKart.


As football is so widely played on all levels and all around the world, there are many versions and rule sets of different games in the football family, and it can sometimes be difficult to even say what classifies as football and what's a different sport. There are games like futsal and beach football that may or may not be seen as a different sport. The most official rules of what we'd call football are probably those known as Laws of the Game governed by International Football Association Board (IFAB) -- these rules are used e.g. by FIFA, various national competitions etc. Some organizations, e.g. some in the US, use different but usually similar rules. Needless to say, these high level rules are pretty complex -- Laws of the Game have over 200 pages and talk not just about the mechanics of the game but also things such as allowed advertising, political and religious symbolism, referee behavior etc.

Here is a simple ASCII rendering of the football pitch:

   |                    :                    |
   |                    :                    |
   |........            :            ........|
   |       :            :            :       |
   |....   :            :            :   ....|
 __|   :   :            :            :   :   |__
|G :   :   :            :            :   :   : G|
|1 :   :   :            O            :   :   : 2|
|__:   :   :            :            :   :   :__|
   |...:   :            :            :   :...|
   |       :            :            :       |
   |.......:            :            :.......|
   |                    :                    |
   |                    :                    |
  C3                                         C4

In amateur games simpler rules are used -- a sum up of such rules follows:



Fork is a branch that splits from the main branch of a project and continues to develop in a different direction as a separate version of that project, possibly becoming a completely new one. This may happen with any "intellectual work" or idea such as software, movement, theory, literary universe, religion or, for example, a database. Forks may later be merged back into the original project or continue and diverge far away; forks of different projects may also combine into a single project.

For example the Android operating system and Linux-libre kernel have both been forked from Linux. Linux distributions highly utilize forking, e.g. Devuan or Ubuntu and Mint are forked from Debian. Free software movement was forked into open source, free culture and suckless, and suckless was more or less forked into LRS. Wikipedia also has forks such as Metapedia. Memes evolve a lot on the basis of forking.

Forking takes advantage of the ability to freely duplicate information, i.e. if someone sees how to improve an intellectual work or use it in a novel way, he may simply copy it and start developing it in a new diverging direction while the original continues to exist and go its own way. That is unless copying and modification of information is artificially prevented, e.g. by intellectual property laws or purposeful obscurity standing in the way of remixing. For this reason forking is very popular in free culture and free software where it is allowed both legally and practically -- in fact it plays a very important role there.

In software development temporary forking is used for implementing individual features which, when completed, are merged back into the main branch. This is called branching and is supported by version control systems such as git.

There are two main kinds of forks:

- hard forks: the forked project splits off completely, gets a life of its own and is developed independently of the original,
- soft forks: the forked version stays close to the original, keeps compatibility with it and usually keeps merging its changes (e.g. a user's personal patched version of a program).

Is forking good? Yes, to create anything new it is basically necessary to build on top of someone else's work, stand on someone else's shoulders. Some people criticize too much forking; for example some cry about Linux distro fragmentation, they say there are too many distros and that people should rather focus their energy on creating a single or at least fewer good operating systems, i.e. that forking is kind of "wasting effort". LRS supports any kind of wild forking and experimentation, we believe the exploration of many directions to be necessary in order to find the right one, in a good society waste of work won't be happening -- that's an issue of a competitive society, not forking.

In fact we think that (at least soft) forking should be incorporated on a much more basic level, in the way that the suckless community popularized. In suckless everyone's copy of software is a personal fork, i.e. software is distributed in source form and is so extremely easy to compile and modify that every user is supposed to do this as part of the installation process (even if he isn't a programmer). Before compilation user applies his own selected patches, custom changes and specific configuration (which is done in the source code itself) that are unique to that user and which form source code that is the user's personal fork. Some of these personal forks may even become popular and copied by other users, leading to further development of these forks and possible natural rise of very different software. This should lead to natural selection, survival and development of the good and useful forks.


Formal Language

The field of formal languages tries to mathematically and rigorously view problems as languages; this includes probably most structures we can think of, from human languages and computer languages to visual patterns and other highly abstract structures. Formal languages are at the root of theoretical computer science and are important e.g. for the theory of computability/decidability, computational complexity, security and compilers, but they also find use in linguistics and other fields of science.

A formal language is defined as a (potentially infinite) set of strings (which are finite but unlimited in length) over some alphabet (which is finite). I.e. a language is a subset of E* where E is a finite alphabet (a set of letters). (* is a Kleene Star and signifies a set of all possible strings over E). The string belonging to a language may be referred to as a word or perhaps even sentence, but this word/sentence is actually a whole kind of text written in the language, if we think of it in terms of our natural languages. The C programming language can be seen as a formal language which is a set of all strings that are a valid C program that compiles without errors etc.

For example, given an alphabet [a,b,c], a possible formal language over it is [a,ab,bc,c]. Another, different possible language over this alphabet is an infinite language [b,ab,aab,aaab,aaaab,...] which we can also write with a regular expression as a*b. We can also see e.g. English as being a formal language equivalent to a set of all texts over the English alphabet (along with symbols like space, dot, comma etc.) that we would consider to be in English as we speak it.

What is this all good for? This mathematical formalization allows us to classify languages and understand their structure, which is necessary e.g. for creating efficient compilers, but also to understand computers as such, their power and limits, as computers can be viewed as machines for processing formal languages. With these tools researchers are able to come up with proofs of different properties of languages, which we can exploit. For example, within formal languages, it has been proven that certain languages are uncomputable, i.e. there are some problems which a computer cannot ever solve (typical example is the halting problem) and so we don't have to waste time on trying to create such algorithms as we will never find any. The knowledge of formal languages can also guide us in designing computer languages: e.g. we know that regular languages are extremely simple to implement and so, if we can, we should prefer our languages to be regular.


We usually classify formal languages according to the Chomsky hierarchy, by their computational "difficulty". Each level of the hierarchy has associated models of computation (grammars, automatons, ...) that are able to compute all languages of that level (remember that a level of the hierarchy is a superset of the levels below it and so also includes all the "simpler" languages). The hierarchy is more or less as follows:

- type 0: recursively enumerable languages, computed e.g. by Turing machines (or anything equivalent, e.g. lambda calculus),
- type 1: context sensitive languages, computed e.g. by linear bounded automata,
- type 2: context free languages, computed e.g. by pushdown automata,
- type 3: regular languages, computed e.g. by finite state automata (and described by regular expressions).

Note that here we are basically always examining infinite languages as finite languages are trivial. If a language is finite (i.e. the set of all strings of the language is finite), it can automatically be computed by any type 3 computational model. In real life computers are actually always equivalent to a finite state automaton, i.e. the weakest computational type (because a computer memory is always finite and so there is always a finite number of states a computer can be in). However this doesn't mean there is no point in studying infinite languages, of course, as we're still interested in the structure, computational methods and approximating the infinite models of computation.

NOTE: When trying to classify a programming language, we have to be careful about what we classify: one thing is what a program written in given language can compute, and another thing is the language's syntax. To the former all strict general-purpose programming languages such as C or JavaScript are type 0 (Turing complete). From the syntax point of view it's a bit more complicated and we need to further define what exactly a syntax is (where is the line between syntax and semantic errors): it may be (and often is) that syntactically the class will be lower. There is actually a famous meme about Perl syntax being undecidable.



Forth is a based minimalist stack-based untyped programming language with postfix (reverse Polish) notation.

{ It's kinda like usable brainfuck. ~drummyfish }

It is usually presented as an interpreted language but may as well be compiled, in fact it maps pretty nicely to assembly.

There are several Forth standards, most notably ANSI Forth from 1994.

A free interpreter is e.g. GNU Forth (gforth).


The language is case-insensitive.

The language operates on an evaluation stack: e.g. the operation + takes the two values at the top of the stack, adds them together and pushes the result back to the stack. Besides this there are also some "advanced" features like variables living outside the stack, if you want to use them.

The stack is composed of cells: the size and internal representation of a cell are implementation defined. There are no data types, or rather everything is just of type signed integer.

Basic abstraction of Forth is so called word: a word is simply a string without spaces like abc or 1mm#3. A word represents some operation on the stack (and possibly other effects such as printing to the console), for example the word 1 pushes the number 1 on top of the stack, the word + performs the addition on top of the stack etc. The programmer can define his own words which can be seen as "functions" or rather procedures or macros (words don't return anything or take any arguments, they all just invoke some operations on the stack). A word is defined like this:

: myword operation1 operation2 ... ;

For example a word that computes the average of the two values on top of the stack can be defined as:

: average + 2 / ;

Built-in words include:


+           add                 a b -> (a + b)
-           subtract            a b -> (a - b)
*           multiply            a b -> (a * b)
/           divide              a b -> (a / b)
=           equals              a b -> (-1 if a = b else 0)
<           less than           a b -> (-1 if a < b else 0)
>           greater than        a b -> (-1 if a > b else 0)
mod         modulo              a b -> (a % b)
dup         duplicate             a -> a a
drop        pop stack top         a ->
swap        swap items          a b -> b a
rot         rotate 3          a b c -> b c a
.           print top & pop
key         read input char & push it
.s          print stack
emit        print char & pop
cr          print newline
cells       times cell width      a -> (a * cell width in bytes)
depth       get stack size    a ... -> a ... (stack size)
bye         quit


variable X      creates var named X (X is a word that pushes its addr)
N X !           stores value N to variable X
N X +!          adds value N to variable X
X @             pushes value of variable X to stack
N constant C    creates constant C with value N
C               pushes the value of constant C


( )                   comment (inline)
\                     comment (until newline)
." S "                print string S
X if C then           if X, execute C // only in word def.
X if C1 else C2 then  if X, execute C1 else C2 // only in word def.
do C loop             loops from the stack top value (start) up to the
                      second from top value minus one (limit), the
                      special word "i" pushes the iteration value
begin C until         loop: run C, pop the top, repeat as long as it's 0
begin C1 while C2 repeat  loop: run C1, pop the top, if it's nonzero
                      run C2 and repeat, else end the loop
allot                 allocates memory, can be used for arrays

example programs:

100 1 2 + 7 * / . \ computes and prints 100 / ((1 + 2) * 7)
cr ." hey bitch " cr \ prints: hey bitch
: myloop 5 0 do i . loop ; myloop \ prints 0 1 2 3 4
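To illustrate the principle of postfix evaluation, the following is a tiny toy evaluator sketched in Python -- NOT a real Forth (it supports only a few arithmetic words, dup/swap/drop and colon definitions, uses Python's floor division instead of Forth's, and the name run is made up):

```python
# toy Forth-like postfix evaluator: integer stack plus user defined words
def run(src):
    stack, words = [], {}
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a // b}
    tokens = src.split()
    i = 0
    while i < len(tokens):
        t = tokens[i]
        if t == ":":                       # colon definition: : name body ;
            end = tokens.index(";", i)
            words[tokens[i + 1]] = tokens[i + 2:end]
            i = end
        elif t in words:                   # user word: expand its body in place
            tokens[i:i + 1] = words[t]
            continue
        elif t in ops:                     # arithmetic: pop two, push result
            b, a = stack.pop(), stack.pop()
            stack.append(ops[t](a, b))
        elif t == "dup":
            stack.append(stack[-1])
        elif t == "swap":
            stack[-1], stack[-2] = stack[-2], stack[-1]
        elif t == "drop":
            stack.pop()
        else:                              # anything else: a number
            stack.append(int(t))
        i += 1
    return stack

# run(": average + 2 / ; 3 7 average") returns [5]
# run("100 1 2 + 7 * /") returns [4], matching the first example above
```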



FOSS (Free and Open Source Software, sometimes also FLOSS, adding Libre) is a kind of neutral term for software that is both free as in freedom and open source. It's just another term for this kind of software, as if there weren't enough of them :) People normally use it to stay neutral, to appeal to both the free and open source camps, or simply when they need a short term that doesn't require much typing.


Frequently Questioned Answers

TODO: figure out what to write here



Informally speaking fractal is a shape that's geometrically "infinitely complex" while being described in an extremely simple way, e.g. with a very simple formula or algorithm. Shapes found in nature, such as trees, mountains or clouds, are often fractals. Fractals show self-similarity, i.e. when "zooming" into an ideal fractal we keep seeing that it is composed, down to an infinitely small scale, of shapes that are similar to the shape of the whole fractal; e.g. the branches of a tree look like smaller versions of the whole tree etc.

Fractals are the beauty of mathematics, they can impress even complete non-mathematicians and so are probably good as a motivational example in math education.

Fractal is formed by iteratively or recursively (repeatedly) applying its defining rule -- once we repeat the rule infinitely many times, we've got a perfect fractal. In the real world, of course, both in nature and in computing, the rule is just repeated as many times as we can, as we can't repeat literally infinitely. The following is an example of how iteration of a rule creates a simple tree fractal; the rule being: from each branch grow two smaller branches.

                                                    V   V V   V
                                \ /   \ /         V  \ /   \ /  V
               |     |      _|   |     |   |_   >_|   |     |   |_<
            '-.|     |.-'     '-.|     |.-'        '-.|     |.-'
   \   /        \   /             \   /                \   /
    \ /          \ /               \ /                  \ /
     |            |                 |                    |
     |            |                 |                    |
     |            |                 |                    |

iteration 0  iteration 1       iteration 2          iteration 3

Mathematically a fractal is a shape whose Hausdorff dimension (the "scaling factor of the shape's mass") is non-integer. For example the Sierpinski triangle can normally be seen as a 1D or 2D shape, but its Hausdorff dimension is approx. 1.585: if we scale it down twice, its "weight" decreases three times (it becomes one of the three parts it is composed of); the Hausdorff dimension is then calculated as log(3)/log(2) ~= 1.585.
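The calculation can be sketched in a few lines of Python (the Koch curve is added just as another well known example; the function name is made up):

```python
import math

# Hausdorff (similarity) dimension: log(N) / log(s), where the shape is
# composed of N copies of itself, each scaled down by factor s
def hausdorff_dim(copies, scale):
    return math.log(copies) / math.log(scale)

print(hausdorff_dim(3, 2))  # Sierpinski triangle: 3 copies at 1/2 scale, ~1.585
print(hausdorff_dim(4, 3))  # Koch curve: 4 copies at 1/3 scale, ~1.262
```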

L-systems are one possible way of creating fractals. They describe rules in form of a formal grammar which is used to generate a string of symbols that are subsequently interpreted as drawing commands (e.g. with turtle graphics) that render the fractal. The above shown tree can be described by an L-system. Among similar famous fractals are the Koch snowflake and Sierpinski Triangle.

            /\  /\
          /\      /\
         /\/\    /\/\
        /\  /\  /\  /\
     Sierpinski Triangle
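The string rewriting at the heart of an L-system is trivial to implement; the following Python sketch uses hypothetical rules loosely resembling the tree fractal above (X stands for a branch tip, F for drawing a branch, [ and ] for saving/restoring the turtle state):

```python
# L-system: iteratively rewrite each symbol by its rule (symbols without a
# rule are kept as they are); the resulting string would then be interpreted
# as drawing commands, e.g. with turtle graphics
def l_system(axiom, rules, iterations):
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(c, c) for c in s)
    return s

# toy tree rule: each branch tip grows a branch with two new tips
rules = {"X": "F[X][X]"}
print(l_system("X", rules, 2))  # prints F[F[X][X]][F[X][X]]
```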

Fractals don't have to be deterministic, sometimes there can be randomness in the rules which will make the shape not perfectly self-similar (e.g. in the above shown tree fractal we might modify the rule to from each branch grow 2 or 3 new branches).

Another way of describing fractals is by iterative mathematical formulas that work with points in space. One of the most famous fractals formed this way is the Mandelbrot set. It is the set of complex numbers c such that the series z_next = (z_previous)^2 + c, z0 = 0 does not diverge to infinity. The Mandelbrot set can nicely be rendered by assigning different colors to points by the number of iterations after which their series exceeded a given bound; this produces a nice colorful fractal. Julia sets are very similar and there is infinitely many of them (each Julia set is formed like the Mandelbrot set but c is fixed for the specific set and z0 is the tested point in the complex plane).
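Testing whether a number (approximately) belongs to the Mandelbrot set takes just a few lines; a sketch in Python (the iteration cap and the function name are arbitrary choices):

```python
# test membership in the Mandelbrot set: iterate z = z^2 + c and see whether
# |z| stays bounded; in practice we cap the number of iterations and use the
# well known fact that once |z| > 2 the series is guaranteed to diverge
def mandelbrot_iters(c, max_iters=100):
    z = 0
    for i in range(max_iters):
        if abs(z) > 2:
            return i        # diverged: the iteration count is usable for coloring
        z = z * z + c
    return max_iters        # didn't diverge: point is probably in the set

print(mandelbrot_iters(0))   # prints 100: 0 is in the set
print(mandelbrot_iters(-1))  # prints 100: -1 is in the set (cycle 0, -1, 0, ...)
print(mandelbrot_iters(2))   # small number: 2 diverges very quickly
```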

Fractals can of course also exist in 3 and more dimensions so we can also have e.g. animated 3D fractals etc.

Fractals In Tech

Computers are good for exploring and rendering fractals as they can repeat given rule millions of times in a very short time. Programming fractals is quite easy thanks to their simple rules, yet this can highly impress noobs.

However, as shown by Code Parade (https://yewtu.be/watch?v=Pv26QAOcb6Q), complex fractals could be rendered even before the computer era using just a projector and camera that feeds back the picture to the camera. This is pretty neat, though it seems no one actually did it back then.

3D fractals can be rendered with ray marching and so called distance estimation. This works similarly to classic ray tracing but the rays are traced iteratively: we step along the ray and at each step use an estimate of the distance from the current point to the surface of the fractal; once we are "close enough" (below some specified threshold), we declare a hit and proceed as in normal ray tracing (we can render shadows, apply materials etc.). The distance estimate is done by some clever math.
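The stepping loop itself can be sketched as follows in Python; note that the distance estimate used here is just the exact distance to a sphere for simplicity, a real fractal renderer would plug in the clever fractal estimate instead (all names here are made up):

```python
import math

def sphere_de(p, center=(0, 0, 5), radius=1):
    # distance estimate; for a sphere it is exact (distance to its surface)
    return math.dist(p, center) - radius

def march(origin, direction, de, threshold=0.001, max_steps=100):
    # step along the ray; each step is safe because nothing can be closer
    # to the current point than the estimated distance
    t = 0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = de(p)
        if d < threshold:
            return t       # hit: distance traveled along the ray
        t += d
    return None            # no hit

print(march((0, 0, 0), (0, 0, 1), sphere_de))  # ~4.0 (sphere surface at z = 4)
```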

Mandelbulber is a free, advanced software for exploring and rendering 3D fractals using the mentioned method.

Marble Racer is a FOSS game in which the player races a glass ball through levels that are animated 3D fractals. It also uses the distance estimation method implemented as a GPU shader and runs in real-time.

Fractals are also immensely useful in procedural generation, they can help generate complex art much faster than human artists, and such art can take only a very small amount of storage.

There also exist such things as fractal antennas and fractal transistors.


Frameless Rendering

Frameless rendering is a technique of rendering animation by continuously updating an image on the screen by updating individual "randomly" selected pixels rather than by showing a quick sequence of discrete frames. This is an alternative to the mainstream double buffered frame-based rendering traditionally used nowadays.

Typically this is done with image order rendering methods, i.e. methods that can immediately and independently compute the final color of any pixel on the screen -- for example with raytracing.

The main advantage of frameless rendering is of course saving a huge amount of memory usually needed for double buffering, and usually also increased performance (only a fraction of the screen's pixels has to be computed at any one time). The animation may also seem more smooth and responsive -- reaction to input is seen faster. Another advantage, and possibly a disadvantage as well, is a motion blur effect that arises as a side effect of updating by individual pixels spread over the screen: some pixels show the scene at a newer time than others, so the previous images kind of blend with the newer ones. This may add realism and also prevent temporal aliasing, but blur may sometimes be undesirable, and also the kind of blur we get is "pixelated" and noisy.

Selecting the pixels to update can be done in many ways, usually with some pseudorandom selection (jittered sampling, Halton sequence, Poisson Disk sampling, ...), but regular patterns may also be used. There have been papers that implemented adaptive frameless rendering that detected where it is best to update pixels to achieve low noise.
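The basic idea might be sketched like this in Python (compute_pixel is a placeholder for whatever image order renderer, e.g. a raytracer, we'd use; all names are made up):

```python
import random

W, H = 64, 48
screen = [[0] * W for _ in range(H)]  # the single screen buffer (no back buffer)

def compute_pixel(x, y, t):
    # placeholder for an image order renderer (e.g. a raytracer) that can
    # compute any pixel independently at time t
    return (x + y + t) % 256

def frameless_update(t, pixels_per_update=100):
    # update only a few randomly selected pixels, directly in the front
    # buffer; pixels end up showing the scene at slightly different times,
    # which is what creates the motion blur effect described above
    for _ in range(pixels_per_update):
        x, y = random.randrange(W), random.randrange(H)
        screen[y][x] = compute_pixel(x, y, t)

for t in range(10):  # "main loop": no frames, just continuous partial updates
    frameless_update(t)
```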

Historically similar (though different) techniques were used on computers that didn't have enough memory for a double buffer or redrawing the whole screen each frame was too intensive on the CPU; programmers had to identify which pixels had to be redrawn and only update those. This resulted in techniques like adaptive tile refresh used in scrolling games such as Commander Keen.



Software framework is a collection of tools such as environments, libraries, compilers and editors, that together allow fast and comfortable implementation of other software by plugging in relatively small pieces of code. While a simple library is something that's plugged as a helper into the programmer's code, a framework is a bigger system into which the programmer plugs his code. Frameworks are generally bloated and harmful, LRS doesn't recommend relying on them.


Free Culture

Free (as in freedom) culture is a movement aiming for the relaxation of intellectual property restrictions, mainly those of copyright, to allow free usage, reuse and sharing of artworks and other kinds of information. Free culture argues that our society has gone too far in forcefully restricting the natural freedom of information by very strict laws (e.g. by authors holding copyright even 100 years after their death) and that we're hurting art, creativity, education and progress by continuing to strengthen restrictions on using, modifying (remixing) and sharing things like books, music and scientific papers. The word "free" in free culture refers to freedom, not just price -- free cultural works have to be more than just available gratis, they must also give their users some specific legal rights. Nevertheless free culture itself isn't against commercialization of art, it just argues for rather doing so by other means than selling legal rights to it. The opposite of free culture is permission culture (culture requiring permission for reuse of intellectual works).

The promoters of free culture want to relax intellectual property laws (copyright, patents, trademarks etc.) but also promote an ethic of sharing and remixing being good (as opposed to the demonizing anti-"piracy" propaganda of today), they sometimes mark their works with words "some rights reserved" or even "no rights reserved", as opposed to the traditional "all rights reserved".

Free culture is kind of a younger sister movement to the free software movement, in fact it has been inspired by it (we could call it its fork). While the free software movement, established in 1983, was only concerned with freedoms relating to computer program source code, free culture later (around 2000) took its ideas and extended them to all information including e.g. artworks and scientific data. There are clearly defined criteria for a work to be considered a free (as in freedom) work, i.e. part of the body of free cultural works. The criteria are very similar to those of free software (the definition is at https://freedomdefined.org/Definition) and can be summed up as follows:

A free cultural work must allow anyone to (legally and practically):

  1. Use it in any way and for any purpose, even commercially.
  2. Study it.
  3. Share it, i.e. redistribute copies, even commercially.
  4. Modify it and redistribute the modified copies, even commercially.

Meeting some of these criteria may e.g. require a source code of the work to be made available (e.g. sheet music, to allow studying and modification). Some conditions may however still be imposed, as long as they don't violate the above -- e.g. if a work allows all the above but requires crediting the author, it is still considered free (as in freedom). Copyleft (also share-alike, a requirement of keeping the license for derivative works) is another condition that may be imposed. This means that many (probably most) free culture promoters actually rely on and even support the concept of e.g. copyright, they just want to make it much less strict.

It was in 2001 when Lawrence Lessig, an American lawyer who can be seen as the movement's founder, created the Creative Commons, a non-profit organization which stands among the foundations of the movement and is very much connected to it. By this time he was already educating people about the twisted intellectual property laws and had a few followers. Creative Commons would create and publish a set of licenses that anyone could use to release their works under much less restrictive conditions than those that lawfully arise by default. For example if someone creates a song and releases it under the CC-BY license, he allows anyone to freely use, modify and share the song as long as proper attribution is given to him. It has to be noted that NOT all Creative Commons licenses are free culture (those with NC and ND conditions break the above given rules)! It is also possible to use other, non Creative Commons licenses in free culture, as long as the above given criteria are respected.

In 2004 Lessig published his book called Free Culture that summarized the topic as well as proposed solutions -- the book itself is shared under a Creative Commons license and can be downloaded for free (however the license is among the non-free CC licenses so the book itself is not part of free culture lmao, big fail by Lessig).

{ I'd recommend reading the Free Culture book to anyone whose interests lie close to free culture/software, it's definitely one of the essential works. ~drummyfish }

In the book Lessig gives an overview of the history of copyright -- it has been around since about the time of invention of the printing press to give some publishers exclusive rights (an artificial monopoly) for printing and publishing certain books. The laws evolved but at first were not so restrictive, they only applied to very specific uses (printing) and for a limited time, plus the copyright had to be registered. Over time corporations pressured to make it more and more restrictive -- nowadays copyright applies to basically everything and lasts for 70 years AFTER the death of the author (!!!). This is combined with the fact that in the age of computers any use of information requires making a copy (to read something you need to download it), i.e. copyright basically applies to ANY use now. I.e. both scope and term of copyright have been extended to the extreme, and this was done even AGAINST the US constitution -- Lessig himself tried to fight against it in court but lost. This form of copyright now restricts culture and basically only serves corporations who want to e.g. kill the public domain (works that run out of copyright and are now "free for everyone") by repeatedly prolonging the copyright term so that people don't have any pool of free works that would compete (and often win simply by being gratis) with the corporate created "content". In the book Lessig also mentions many hard punishments for breaking copyright laws and a lot of other examples of corruption of the system. He then goes on to propose solutions, mainly his Creative Commons licenses.

Free culture has become a relative success, the free Creative Commons licenses are now widely used -- e.g. Wikipedia is part of free culture under the CC-BY-SA license and its sister project Wikimedia Commons hosts over 80 million free cultural works! There are famous promoters of free culture such as Nina Paley, webcomics, books, songs etc. In development of libre games free cultural licenses are used (alongside free software licenses) to liberate the game assets -- e.g. the Freedoom project creates free culture content replacement for the game Doom. There are whole communities such as opengameart or Blendswap for sharing free art, even sites with completely public domain stock photos, vector images, music and many other things. Many scientists release their data to public domain under CC0. And of course, LRS highly advocates free culture, specifically public domain under CC0.

BEWARE of fake free culture: there are many resources that look like or even call themselves "free culture" despite not adhering to its rules. This may be by intention or not, some people just don't know too much about the topic -- a common mistake is to think that all Creative Commons licenses are free culture -- again, this is NOT the case (the NC and ND ones are not). Some think that "free" just means "gratis" -- this is not the case (free means freedom, i.e. respecting the above mentioned criteria of free cultural works). Many people don't know the rules of copyright and think that they can e.g. create a remix of some non-free pop song and license it under CC-BY-SA -- they CANNOT, they are making a derivative work of a non-free work and so cannot license it. Some people use licenses without knowing what they mean, e.g. many use CC0 and then ask for their work to not be used commercially -- this can't be done, CC0 specifically allows any commercial use. Some try to make their own "licenses" by e.g. stating "do whatever you want with my work" instead of using a proper waiver like CC0 -- this is with high probability legally unsafe and invalid, it is unfortunately not so easy to waive one's copyright -- DO use the existing licenses. Educate yourself and if you're unsure, ask away in the community, people are glad to give advice.

See Also


Free/Freedom-Friendly Hardware

Free (as in freedom) hardware is a form of ethical hardware aligned with the philosophy of free (as in freedom) software, i.e. having a free licensed design that allows anyone to study, use, modify and share such designs for any purpose and so prevents abuse of users by technology. Let us note the word free refers to user freedom, not price! Sometimes the term may be more broadly (and not completely correctly) used even for hardware that's just highly compatible with purely free software systems -- let us rather call these freedom friendly hardware -- and sometimes people misunderstand the term free as meaning "gratis hardware"; to avoid misunderstandings GNU recommends using the term free design hardware or libre hardware for free hardware in the strict sense, i.e. hardware with a free licensed design. Sometimes -- nowadays maybe even more often -- the term "open source" hardware or open hardware with very similar meaning is encountered, but that is of course harmful terminology as open source is an inherently harmful capitalist movement ignoring the ethical question of freedom -- hence it is recommended to prefer the term free hardware. Sometimes the acronym FOSH (free and open source hardware) is used neutrally, similarly to FOSS.

GNU, just like us, highly advocates for free hardware, though, unlike with software, they don't completely reject using non-free hardware nowadays, not just for practical reasons (purely free hardware almost doesn't exist), but also because hardware is fundamentally different from software and it is possible to use some non-free hardware (usually the older one) relatively safely, without sacrificing freedom. The FSF issues so called Respects Your Freedom (RYF) certification for non-malicious hardware products, both free and non-free, that can be used with 100% free software (even though RYF has also been a target of some criticism of free software activists).

We, LRS, advocate for more strict criteria than just a free-licensed hardware design, for example we prefer complete public domain and advocate high simplicity which is a prerequisite of true freedom -- see less retarded hardware for more.

The topic of free hardware is a bit messy, the definition of free hardware is not as straightforward as that of free software because hardware, a physical thing, has some inherently different properties than software and it is also not as easy to design and create, so it evolves more slowly than software. For example there is the very question of what even counts as hardware: there is a grey area between hardware and software, sometimes we see firmware as hardware, sometimes as software, sometimes pure software can be hardwired into a circuit so it basically behaves like hardware etc. Hardware design also has different levels, a higher level design may be free-licensed but its physical implementation may require existing lower level components that are non-free -- does such hardware count as free or not? We have to keep these things in mind. While in the software world it is usually quite easy to label a piece of software as free or not, with hardware we rather tend to speak of different levels of freedom, at least for now.

Existing Free And Freedom-Friendly Hardware And Firmware

{ I'm not so much into hardware, this may be incomplete or have some huge errors, as always double check and please forgive :) Report any errors you find, thanks. ~drummyfish }


The following is a list of hardware whose design is at least to some degree free/open (i.e. for example free designs that however may be using a non-free CPU, this is an issue discussed above):

The following is a list of some "freedom friendly" hardware, i.e. hardware that though partly or fully proprietary is not or can be made non-malicious to the user (has documented behavior, allows fully free software, battery replacement, repairs etc.):

The following is a list of firmware, operating systems and software tools that can be used to liberate freedom-friendly proprietary devices:

See Also



In our community, as well as in the wider tech and some non-tech communities, the word free is normally used in the sense of free as in freedom, i.e. implying freedom, not price. The word for "free of cost" is gratis (also free as in beer). To prevent this confusion the word libre is sometimes used in place of free, or we say free as in freedom, free as in speech etc.


Free Software

Not to be confused with open $ource.

Free (as in freedom) software is a type of ethical software that's respecting its users' freedom and preventing their abuse, generally by availability of its source code AND by a license that allows anyone to use, study, modify and share the software. Free software is NOT equal to software whose source code is available or software that is offered for zero price, the basic rights to the software are the key attribute that has to be present. Free software stands opposed to proprietary software -- the kind of abusive, closed software that capitalism produces by default. Free software is not to be confused with freeware ("gratis", software available for free); although free software is always available for free thanks to its definition, zero price is not its goal. The goal is freedom.

Free software is also known as free as in freedom, free as in speech software or libre software. It is sometimes equated with open source, even though open source is fundamentally different (evil), or neutrally labelled FOSS or FLOSS (free/libre and open-source software). Software that is gratis (freeware) is sometimes called free as in beer.

Examples of free software include the GNU operating system (also known as "Linux"), GIMP (image editor), the Stockfish chess engine, or games such as Xonotic and Anarch. Free software is actually what runs the world, it is a standard among experts and it is possible to do computing with exclusively free software, even though most normal people don't even know the term free software exists because they only ever come in contact with abusive proprietary consumer software such as Windows and capitalist games. There also exists a lot of big and successful software, such as Firefox, Linux (the kernel) or Blender, that's often spoken of as free software which may however be only technically true or true only to a big (but not full) degree: for example even though Linux is 99% free, in its vanilla version it comes with proprietary binary blobs, which breaks the rules of free software. Blender is technically free but it is also capitalist software which doesn't really care about freedom and may de-facto limit some freedoms required by free software, even if they are granted legally by Blender's license. Such software is better called "open source" or FOSS because it doesn't meet the high standards of free software.

Though unknown to common people, the invention and adoption of free software has been one of the most important events in the history of computers -- mere technology consumers nowadays don't even realize (and aren't told) that what they're using consists of and has been enabled possibly mostly by software written non-commercially, by volunteers for free, basically on communist principles. Even if consumer technology is unethical because the underlying free technology has been modified by corporations to abuse the users, the situation would have been incomparably worse had Richard Stallman not achieved the small miracle of establishing free software. Without it there would probably be practically no alternative to abusive technology nowadays, everything would be much more closed, there would probably be no "open source", no "open hardware" such as Arduino and no things such as Wikipedia. If the danger of intellectual property in software hadn't been foreseen and countered by Richard Stallman, the corporations' push of legislation would probably have continued and copyright laws might have been many times worse today, to the point of it not even being legally possible to write free software nowadays. We have to be very grateful that this happened and continue to support free software.

Richard Stallman, the inventor of the concept and the term "free software", says free software is about ensuring the freedom of computer users, i.e. people truly owning their tools -- he points out that unless people have complete control over their tools, they don't truly own them and will instead become controlled and abused by the makers (true owners) of those tools, which in capitalism are corporations. Richard Stallman stressed that there is no such thing as partially free software -- it takes only a single line of code to take away the user's freedom and therefore if software is to be free, it has to be free as a whole. This is in direct contrast with open source (a term discouraged by Stallman himself) which happily tolerates for example Windows only programs and accepts them as "open source", even though such a program cannot be run without the underlying proprietary code of the platform. It is therefore important to support free software rather than the business spoiled open source.

Free software is not about privacy! That is a retarded simplification spread by cryptofascists. Free software, as its name suggests, is about freedom in a wide sense, which of course does include the freedom to stay anonymous, but there are many more freedoms which free software stands for, e.g. the freedom of customization of one's tools or the general freedom of art -- being able to utilize or remix someone else's creation for creating something new or better. Software focused on privacy is called simply privacy respecting software.

Is free software communism? This is a question often debated by Americans who have a panic phobia of anything resembling ideas of sharing and giving away for free. The answer is: yes and no. No as in it's not Marxism, the kind of evil pseudocommunism that plagued the world not so long ago -- that was a hugely complex, twisted, violent ideology encompassing the whole society which furthermore betrayed many basic ideas of equality and so on. Compared to this free software is just a simple idea of not applying intellectual property to software, and this idea may well function under some form of early capitalism. But on the other hand yes, free software is communism in its general form that simply states that sharing is good, it is communism as much as e.g. teaching a kid to share toys with its siblings.


Free software was originally defined by Richard Stallman for his GNU project. The definition was subsequently adopted and adjusted by other groups such as Debian and so nowadays there isn't just one definition, even though the GNU definition is usually implicitly supposed. However, all of these definitions are very similar and are basically variations and subsets of the original one. The GNU definition of free software is paraphrased as follows:

Software is considered free if all its users have the legal and de facto rights to:

  1. Use the software for any purpose (even commercial or that somehow deemed unethical by someone).
  2. Study the software. For this source code of the program has to be available.
  3. Share the software with anyone.
  4. Modify the software. For this source code of the program has to be available. This modified version can also be shared with anyone.

Note that as free software cares about real freedom, the word "right" here is seen as meaning a de facto right, i.e. NOT just a legal right -- legal rights (a free license) are required but if there appears a non-legal obstacle to those freedoms, free software communities will address them. Again, open source differs here by just focusing on legality.

To make it clear, freedom 0 (use for any purpose) covers ANY use, even commercial use or use deemed unethical by the society of the software creator. Some people try to restrict this freedom, e.g. by prohibiting use for military purposes or prohibiting use by "fascists", which makes the software NOT free anymore. NEVER DO THIS. The reasoning behind freedom 0 is the same as that behind free speech: allowing any use doesn't imply endorsing or supporting any use, it simply means that we refuse to engage in certain kinds of oppression out of principle. Trying to mess with freedom 0 would be similar to e.g. prohibiting science on the grounds that scientific results can be used in unethical ways -- we simply don't do this. We try to prevent unethical behavior in other ways than prohibiting basic rights.

Source code here means the preferred form in which software is modified, i.e. things such as obfuscated source code don't count as true source code.

The developers of the Debian operating system have created their own guidelines (Debian Free Software Guidelines) which respect these points but are worded in more complex terms and further require e.g. non-functional data to be available under free terms as well (source), which GNU doesn't (source). The definition of open source is yet more complex, even though in practice legally free software eventually ends up being open source as well and vice versa.


Free software was invented by Richard Stallman in the 1980s. His free software movement inspired later movements such as the free culture movement and the evil open-source movement.

See Also


Free Speech

Freedom of speech means there are no punishments, imposed by government or anyone else, for, and no obstacles (such as censorship) to, merely talking about anything, making any public statement or publishing any information. Free speech has to be by definition absolute and have no limits, otherwise it's not free speech but controlled speech -- trying to add exceptions to free speech is like trying to limit to whom a free software license is granted; doing so immediately makes such software non-free. Freedom of speech is an essential attribute of a mature society; sadly it hasn't been widely implemented yet and with the SJW cancer the latest trend in society is towards eliminating free speech rather than supporting it (see e.g. political correctness). Speech is being widely censored by extremist groups (e.g. LGBT and corporations, see also cancel culture) and by states -- depending on the country there exist laws against so called "hate speech", against questioning official versions of history (see e.g. Holocaust denial laws present in many EU states), against criticizing powerful people (for example it is illegal to criticize or insult that huge inbred dick Thai king), against sharing of useful information such as books (copyright censorship) etc. Free speech nowadays is being eliminated by the strategy of creating an exception to free speech, usually called "hate speech", then classifying any undesired speech under such a label and silencing it.

The basic principle of free speech says that if you don't support freedom for speech you dislike, you don't support free speech. I.e. speech that you hate does not equal hate speech.

Some idiots (like that xkcd #1357) say that free speech is only about legality, i.e. about what's merely allowed to be said by the law or what speech the law "protects". Of course, this is completely wrong and just reflects this society's obsession with law; true free speech mustn't be limited by anything -- if you're not allowed to say something, it doesn't matter much what it is that's preventing you, your speech is not free. If for example it is theoretically legal to be politically incorrect and criticize the LGBT gospel but you de facto can't do it because the LGBT fascist SJWs would cancel you and maybe even physically lynch you, your speech is not free. It is important to realize we mustn't tie free speech to its legal definition, i.e. it isn't enough to make speech free only in the legal sense -- keep in mind that a good society aims at eliminating law itself. Our goal is to make speech free culturally, i.e. to teach people that we should let others speak freely, even those -- and especially those -- with whom we disagree.

Despite what the propaganda says there is no free speech in our society; the only kind of speech that is allowed is that which either has no effect or which the system desires for its benefit. The illusion of free speech is sustained by letting people speak until they actually start making a change -- once someone's speech leads to e.g. revealing state secrets or historical truths (e.g. about Holocaust, human races or government crimes -- see wikileaks) or to destabilizing the economy or the state, such speech is labeled "harmful" in some way (hate speech, intellectual property violation, revealing of confidential information, instigating crime, defamation etc.), censored and punished. Even though nowadays pure censorship laws are being passed on a daily basis, even in times when there are seemingly no specific censorship laws and it seems that "we have free speech", there always exist generic laws that can be fit to any speech, such as those against "inciting violence", "terrorism", "undermining state interests", "hate speech" or any other fancy issue, which can be used to censor absolutely any speech the government pleases, even if such speech has nothing to do with said causes -- it is enough that some state lawyer can find a possible indirect link to such a cause, however unlikely: this could of course be well seen e.g. in the cases of the Covid flu or the Russia-Ukraine war. Even though there were e.g. no specific laws in European countries against supporting Russia immediately after the war started, governments immediately started censoring and locking up people who supported Russia on the Internet, based on the above mentioned generic laws.
These laws work on the same principle as backdoor in software: they are advocated as a "safety" "feature" and allow complete takeover of the system, but are mostly unused until the right time comes, to give the users a sense of being safe ("I've been using this backdoored CPU for years and nothing happened, so it's safe"); unlike with software backdoor though the law backdoor isn't usually removed after it has been exploited, people are just too stupid to notice this and governments can get away with keeping the laws in place, so they do.


Free Universe

Free universe (also "open" universe) is a free culture ("free as in freedom") fictional universe that serves as a basis/platform for creating art works such as stories in the form of books, movies or video games. Such a universe provides a consistent description of a fictional world which may include its history and lore, geography, characters, laws of physics, languages, themes and art directions, and possibly also assets such as concept art, maps, music, even ready-to-use 3D video game models etc. A free universe is essentially the same kind of framework which is provided by proprietary universes such as those of Star Wars or Pokemon, with the exception that a free universe is free/"open", i.e. it comes with a free license and so allows anyone to use it in any way without needing explicit permission; i.e. anyone can set their own stories in the universe, expand on it, fork it, use its characters etc. (possibly under conditions that don't break the rules of free culture). The best kind of free universe is a completely public domain one which imposes absolutely no conditions on its use. The act of creating fictional universes is called world building.

But if anyone is allowed to do anything with the universe and so possibly incompatible works may be created, then what is canon?! Well, anything you want -- it's the same as with proprietary universes, regardless of official canon there may be different groups of fans that disagree about what is canon and there may be works that contradict someone's canon, there is no issue here.

Existing free universes: the existence of a serious project aiming purely at the creation of a free universe is unknown to us, however free universes may be spawned as a byproduct of other free works -- for example old public domain books of fiction, such as Flatland, or libre games such as FLARE, Anarch or FreeDink each create a free universe. If you want to start a free universe project, go for it, it would be highly valued!


Free Will

Sorry, there is no magic unicorn in your head.

Free will is a logically erroneous egocentric belief that humans (and possibly other living beings) are special in the universe by possessing some kind of soul which may disobey laws of physics and somehow make spontaneous, unpredictable decisions according to its "independent" desires. Actually that's the definition of absolute free will; weaker definitions, e.g. for the purposes of law, are possible and acceptable. But here we'll focus on the philosophical definition as that's what most autism revolves around. The Internet (and even academic) debates of free will are notoriously retarded to unbelievable levels, similarly to e.g. debates of consciousness.

{ Sabine nicely explains it here https://yewtu.be/watch?v=zpU_e3jh_FY. ~drummyfish }

Free will is usually discussed in relation to determinism, an idea of everything (including human thought and behavior) being completely predetermined from the start of the universe. Determinism is the most natural and most likely explanation for the working of our universe; it states that laws of nature dictate precisely which state will follow from the current state and therefore everything that will ever happen is only determined by the initial conditions (the start of the universe). As the human brain is just matter like any other, it is no exception to the laws of nature. Determinism doesn't imply we'll be able to make precise predictions (see e.g. chaos or undecidability), just that everything is basically already set in stone as a kind of unavoidable fate. Basically the only other possible option is that there would be some kind of true randomness, i.e. that laws of nature don't specify an exact state to follow from the current state but rather multiple states out of which one is "taken" at random -- this is proposed by some quantum physicists as quantum physics seems to be showing the existence of inherent randomness. Nevertheless quantum physics may still be deterministic, see the theory of hidden variables and superdeterminism (no, the Bell test didn't disprove determinism). But EVEN IF the universe is non-deterministic, free will still CANNOT exist. Therefore this whole debate is meaningless.

Why is there no free will? Because it isn't logically possible, just like e.g. the famous omnipotent God (could he make a rock so heavy he wouldn't be able to lift it?). Either the universe is deterministic and your decisions are already predetermined, or there exists an inherent randomness and your decisions are determined by a mere dice roll (which no one can call free will any more than making every decision in life based on a coin toss). In either case your decisions are made for you by something "external". Even if you follow a basic definition of free will as "acting according to one's desires", you find that your decisions are DETERMINED by your desires, i.e. something you did not choose (your desires) makes decisions for you. There is no way out of this unless you reject logic itself.

For some reason retards (basically everyone) don't want to accept this, as if accepting it changed anything; stupid capitalists think that it would somehow belittle their "achievements" or what? Basically just like the people who refused to let go of geocentrism. This is ridiculous, they hold on to the idea of their "PRECIOOOOUUUSS FREE WILL" to the death, then they go and consume whatever a TV tells them to consume. Indeed one of the most retarded things in the universe.



FSF stands for Free Software Foundation, a non-profit organization established by Richard Stallman with the goal of promoting and supporting free as in freedom software, software that respects its users' freedom.



In September 2019 Richard Stallman, the founder and president of the FSF, was cyberbullied and cancelled by SJW fascists for simply stating a rational but unpopular opinion on child sexuality and was forced to resign as president. This might have been the last nail in the coffin for the FSF. The new president came to be Geoffrey Knauth, an idiot who spent his life writing proprietary software in such shit as C# and helped build military software for killing people (just read his CV online). What's next, a porn actor becoming the next Pope? Would be less surprising.

After this the FSF definitely died.



Function is a very basic term in mathematics and programming with slightly different meanings in each: a mathematical function maps numbers to other numbers, while a function in programming is a subprogram into which we divide a bigger program. Well, that's pretty simplified but those are the basic ideas. A more detailed explanation will follow.

Mathematical Functions

In mathematics functions can be defined and viewed from different angles, but it is essentially anything that assigns each member of some set A (so called domain) exactly one member of a potentially different set B (so called codomain). A typical example of a function is an equation that from one "input number" computes another number, for example:

f(x) = x / 2

Here we call the function f and say it takes one parameter (the "input number") called x. The "output number" is defined by the right side of the equation, x / 2, i.e. the number output by the function will be half of the parameter (x). The domain of this function (the set of all possible numbers that can be taken as input) is the set of real numbers and the codomain is also the set of real numbers. This equation assigns each real number x another real number x / 2, therefore it is a function.

{ I always imagined functions as kind of little boxes into which we throw a number and another number falls out. ~drummyfish }

Now consider a function f2(x) = 1 - 1 / x. Note that in this case the domain is the set of real numbers minus zero; the function can't take zero as an input because we can't divide by zero. The set of possible results is the set of real numbers minus one because we can't ever get one as a result (strictly speaking this set of achievable results is called the range or image of the function; the codomain may be any superset of the range, e.g. all real numbers).

Another common example of a function is the sine function that we write as sin(x). It can be defined in several ways, commonly e.g. as follows: considering a right triangle with one of its angles equal to x radians, sin(x) is equal to the ratio of the side opposing this angle to the triangle hypotenuse. For example sin(pi / 4) = sin(45 degrees) = 1 / sqrt(2) ~= 0.71. The domain of the sine function is again the set of real numbers but its codomain is only the set of real numbers between -1 and 1: the ratio of said triangle sides can never exceed 1, and extending the definition to angles beyond the right triangle adds at most the negative values, i.e. the sine function will never yield a number outside the interval <-1,1>.

Note that these functions have to satisfy a few conditions to really be functions. Firstly each number from the domain must be assigned exactly one number (although this can be "cheated" by e.g. using a set of couples as a codomain), even though multiple input numbers can give the same result number. Also importantly the function result must only depend on the function's parameter, i.e. the function mustn't have any memory or inside state and it mustn't depend on any external factors (such as current time) or use any randomness (such as a dice roll) in its calculation. For a certain argument (input number) a function must give the same result every time. For this reason not everything that transforms numbers to other numbers can be considered a function.

Functions can have multiple parameters, for example:

g(x,y) = (x + y) / 2

The function g computes the average of its two parameters, x and y. Formally we can see this as a function that maps elements from a set of couples of real numbers to the set of real numbers.

Of course functions may also work with just whole numbers, or with complex numbers, quaternions and theoretically just about anything crazy, like e.g. the set of animals :) However in these "weird" cases we generally no longer use the word function but rather something like a map. In mathematical terminology we may hear things such as a real function of a complex parameter, which means a function that takes a complex number as an input and gives a real number as a result.

To get better overview of a certain function we may try to represent it graphically, most commonly we make function plots also called graphs. For a function of a single parameter we draw graphs onto a grid where the horizontal axis represents number line of the parameter (input) and the vertical axis represents the result. For example plotting a function f(x) = ((x - 1) / 4)^2 + 0.8 may look like this:

[ASCII plot of f: an upward opening parabola with its minimum of about 0.8 near x = 1, drawn above an x axis running from about -2 to 2]

This is of course done by plotting various points [x,f(x)] and connecting them by a line.

Plotting functions of multiple parameters is more difficult because we need more axes and get to higher dimensions. For functions of 2 parameters we can draw e.g. a heightmap or create a 3D model of the surface which the function defines. 3D functions may in theory be displayed like 2D functions with added time dimension (animated) or as 3D density clouds. For higher dimensions we usually resort to some kind of cross-section or projection to lower dimensions.

Functions can have certain properties such as:

In context of functions we may encounter the term composition which simply means chaining the functions. E.g. the composition of functions f(x) and g(x) is written as (f o g)(x) which is the same as f(g(x)).

Calculus is an important mathematical field that studies changes of continuous functions. It can tell us how quickly functions grow, where they have maximum and minimum values, what's the area under the line in their plot and many other things.

Notable Mathematical Functions

Functions commonly used in mathematics range from the trivial ones (such as the constant functions, f(x) = constant) to things like trigonometric functions (sine, cosine, tangent, ...), factorial, logarithm, logistic sigmoid function, Gaussian function etc. Furthermore some more complex and/or interesting functions are (the term function may be applied liberally here):

Programming Functions

In programming the definition of a function is less strict, even though some languages, namely functional ones, are built around purely mathematical functions -- for distinction we call these strictly mathematical functions pure. In traditional languages functions may or may not be pure, a function here normally means a subprogram which can take parameters and return a value, just as a mathematical function, but it can further break some of the rules of mathematical functions -- for example it may have so called side effects, i.e. performing additional actions besides just returning a number (such as modifying data in memory which can be read by others, printing something to the screen etc.), or use randomness and internal states, i.e. potentially returning different numbers when invoked (called) multiple times with exactly the same arguments. These functions are called impure; in programming a function without an adjective is implicitly expected to be impure. Thanks to allowing side effects these functions don't have to actually return any value, their purpose may be to just invoke some behavior such as writing something to the screen, initializing some hardware etc. The following piece of code demonstrates this in C:

int max(int a, int b, int c) // pure function
{
  return (a > b) ? (a > c ? a : c) : (b > c ? b : c);
}

unsigned int lastPseudorandomValue = 0;

unsigned int pseudoRandom(unsigned int maxValue) // impure function
{
  lastPseudorandomValue = // side effect: working with a global variable
    lastPseudorandomValue * 7907 + 7;
  return (lastPseudorandomValue >> 2) % (maxValue + 1);
}

In older languages functions were also called procedures or routines. Sometimes there was some distinction between them, e.g. in Pascal functions returned a value while procedures didn't.



See also lmao.

Fun is a rewarding lighthearted satisfying feeling you get as a result of doing or witnessing something playful.

Things That Are Fun




Furriness is a weird mental disorder (dolphi will forgive :D) and fetish that makes people dig and/or identify as human-like furry animals, e.g. cats, foxes or completely made up species. To a big degree it's a sexual identity but these people just pretend they're animals everywhere, they have furry conventions, you see their weird furry talk in issue trackers on programming websites etc. You cannot NOT meet a furry on the Internet.

In the past we might have been wondering whether by 2020 we'd already have cured cancer, whether we'd have cities on Mars and flying cars. Well no, but you can sexually identify as a fox now.

Furries seem to have a very harmful obsession with copyrighting their art, many create their own "fursonas" or "species" and then prohibit others from using them.

See Also


Future-Proof Technology

Future-proof technology is technology that is very likely to stay functional for a very long time with minimal to no maintenance. This feature is generally pretty hard to achieve and today's consoomerist society makes the situation much worse by focusing on immediate profit without long-term planning and by implementing things such as bloat and planned obsolescence.

A truly good technology is trying to be future-proof because this saves us the great cost of maintenance and reinventing wheels.

Despite the extremely bad situation not all hope is lost. At least in the world of software future-proofing can be achieved by:

See Also


Game Engine

Game engine is a software, usually a framework or a library, that serves as a base code for games. Such an engine may be seen as a platform allowing portability and offering preprogrammed functionality often needed in games (3D rendering, physics engine, I/O, networking, AI, audio, scripting, ...) as well as tools used in game development (level editor, shader editor, 3D editor, ...).

A game engine differs from a general multimedia engine/library, such as SDL, by its specific focus on games. It is also different from generic rendering engines such as 3D engines like OpenSceneGraph because games require more than just rendering (audio, AI, physics, ...). While one may use some general purpose technology such as C or SDL for creating a game, using a game engine should make the process easier. However, beware of the bloat that plagues most mainstream game engines. LRS advises against the use of any frameworks, so try to at worst use a game library. Many game programmers such as Jonathan Blow advocate and practice writing their own engines for their games.

Existing Engines

The following are some notable game engines.



In computer context game (also gayme, video game or vidya) is software whose main purpose is to be played and entertain the user. Of course, we can additionally talk about real life games such as marble racing. Game is also a mathematical term in game theory. Sadly most computer games are proprietary and toxic.

Among suckless software proponents there is a disagreement about whether games are legit software or just a meme and a harmful kind of entertainment. The proponents of the latter argue something along the lines that technology is only for getting work done, that games are for losers, that they hurt productivity, are an unhealthy addiction, wasted time and effort etc. Those who like games see them as a legitimate form of relaxation, a form of art and a way of advancing technology along the way. The truth is that developing games leads to improvement of other kinds of software, e.g. for rendering, physics simulation or virtual reality. We, LRS, fully accept games as legitimate software; of course as long as their purpose is to help all people, i.e. while we don't reject games as such, we reject most games the industry produces nowadays.

Despite arguments about the usefulness of games, most people agree on one thing: that the mainstream AAA games produced by big corporations are harmful, bloated, toxic, badly made and designed to be highly malicious, consumerist products. They are one of the worst cases of capitalist software. Such games are never going to be considered good from our perspective (and even the mainstream is turning towards classifying modern games as shit).

PC games are mostly made for and played on MS Windows which is still the "gaming OS", even though in recent years we've seen a boom of "Linux gaming", possibly thanks to Windows getting shittier and shittier every year. However, most games, even when played on GNU/Linux, are still proprietary, capitalist and bloated as hell.

We might call this the great tragedy of games: the industry has become similar to the industry of drug abuse. Games feel great and can become very addictive, especially to people not aware of the dangers (children). Today not playing latest games makes you left out socially, out of the loop, a weirdo. Therefore contrary to the original purpose of a game -- that of making life better and bringing joy -- an individual "on games" from the capitalist industry will crave to constantly consume more and more "experiences" that get progressively more expensive to satisfy. This situation is purposefully engineered by the big game producers who exploit psychological and sociological phenomena to enslave gamers and make them addicted. Games become more and more predatory and abusive and of course, there are no moral limits for corporations of how far they can go: games with microthefts and lootboxes, for example, are similar to gambling, and are often targeted at very young children. The game industry cooperates with the hardware and software industry to together produce a consumerist hell in which one is required to constantly update his hardware and software and to keep spending money just to stay in. The gaming addiction is so strong that even the FOSS people somehow create a mental exception for games and somehow do not mind e.g. proprietary games even though they otherwise reject proprietary software. Even most of the developers of free software games can't mentally separate themselves from the concepts set in place by capitalist games, they try to subconsciously mimic the toxic attributes of such games (bloat, unreasonably realistic graphics and hardware demands, content consumerism, cheating "protection", language filters, ...).

Therefore it is crucial to stress that games are technology like any other, they can be exploiting and abusive, and so indeed all the high standards we hold for other technology we must also hold for games. Too many people judge games solely by their gameplay. For us at LRS gameplay is but one attribute, and not even the one standing at the top; factors such as software freedom, cultural freedom, sucklessness, good internal design and being future proof are even more important.

A small number of games nowadays come with a free engine, which is either official (often retroactively freed by its developer in case of older games) or developed by volunteers. Examples of the former are the engines of ID games (Doom, Quake); examples of the latter can be OpenMW (a free engine for TES: Morrowind) or Mangos (a free server for World of Warcraft). Console emulators (such as those of Playstation or Gameboy) can also be considered a free engine for playing proprietary games.

Yet a smaller number of games are completely free (in the sense of Debian's free software definition), including both the engine and game assets. These games are called free games or libre games and many of them are clones of famous proprietary games. Examples of these probably (one can rarely ever be sure about legal status) include SuperTuxKart, Minetest, Xonotic, FLARE or Anarch. There exists a wiki for libre games at https://libregamewiki.org and a developer forum at https://forum.freegamedev.net/. Libre games can also be found in Debian software repositories. However WATCH OUT, all mentioned repositories may be unreliable!

{ NOTE: Do not blindly trust libregamewiki and the freegamedev forum, non-free games occasionally DO appear there by accident, negligence or even by intention. I've actually found that most of the big games like SuperTuxKart have some licensing issues (they removed one proprietary mascot from STK after my report). Ryzom has been removed after I brought up the fact that the whole server content is proprietary and secret. So if you're a purist, focus on the simpler games and confirm their freeness yourself. Anyway, LGW is a good place to start looking for libre games. It is much easier to be sure about freedom of suckless/LRS games, e.g. Anarch is legally safe practically with 100% certainty. ~drummyfish }

Some games are pretty based as they don't even require GUI and are only played in the text shell (either using TUI or purely textual I/O) -- these are called TTY games or command line games. This kind of games may be particularly interesting to minimalists, hobbyists and developers with low (zero) budget, little spare time and/or no artistic skills. Roguelike games are especially popular here; there sometimes even exist GUI frontends which is pretty neat -- this demonstrates how the Unix philosophy can be applied to games.

Another kind of cool games are computer implementations of pre-computer games, for example chess, backgammon, go or various card games. Such games are very often well tested and fine-tuned gameplay-wise, popular with active communities and therefore fun, yet simple to program with many existing free implementations and good AIs (e.g. GNU chess, GNU go or Stockfish).

Games As LRS

Games can be suckless and just as any other software should try to adhere to the Unix philosophy. A LRS game should follow all the principles that apply to any other kind of such software, for example being completely public domain or aiming for high portability. This is important to mention because, sadly, many people see games as some kind of exception among software and think that different technological or moral rules apply -- this is wrong.

If you want to make a simple LRS game, there is an official LRS C library for it: SAF.

Compared to mainstream games, a LRS game shouldn't be a consumerist product, it should be a tool to help people entertain themselves and relieve their stress. From the user perspective, the game should be focused on the fun and relaxation aspect rather than impressive visuals (i.e. photorealism etc.), i.e. it will likely utilize simple graphics and audio. Another aspect of an LRS game is that the technological part is just as important as how the game behaves on the outside (unlike mainstream games that have ugly, badly designed internals and mostly focus on rapid development and impressing the consumer with visuals).

The paradigm of LRS gamedev differs from the mainstream gamedev just as the Unix philosophy differs from the Window philosophy. While a mainstream game is a monolithic piece of software, designed to allow at best some simple, controlled and limited user modifications, a LRS game is designed with forking, wild hacking, unpredictable abuse and code reuse in mind.

Let's take an example. A LRS game of a real-time 3D RPG genre may for example consist of several independent modules: the RPG library, the game code, the content and the frontend. Yes, a mainstream game will consist of similar modules, however those modules will probably only exist for the internal organization of work and better testing, they won't be intended for real reuse or wild hacking. With the LRS RPG game it is implicitly assumed that someone else may take the 3D game and make it into a purely non-real-time command line game just by replacing the frontend, in which case the rest of the code shouldn't be burdened by anything 3D-rendering related. The paradigm here should be similar to that existing in the world of computer chess where there exist separate engines, graphical frontends, communication protocols, formats, software for running engine tournaments, analyzing games etc. Roguelikes and the world of quake engines show some of this modularity, though not in such a degree we would like to see -- LRS game modules may be completely separate projects and different processes communicating via text interfaces through pipes, just as basic Unix tools do. We have to think about someone possibly taking our singleplayer RPG and make it into an MMORPG. Someone may even take the game and use it as a research tool for machine learning or as a VFX tool for making movies, and the game should be designed so as to make this as easy as possible -- the user interface should be very simple to be replaced by an API for computers. The game should allow easy creation of tool assisted speedruns, to record demos, to allow scripting, modifying ingame variables, even creating cheats etc. And, importantly, the game content is a module as well, i.e. the whole RPG world, its lore and storyline is something that can be modified, forked, remixed, and the game creator should bear this in mind.

Of course, LRS games must NOT contain such shit as "anti-cheating technology". For our stance on cheating, see the article about it.

Types Of Games

Besides dividing games as we divide any other software (free vs proprietary, suckless vs bloat, ...) we can further divide them by the following:

Legal Matters

Thankfully gameplay mechanisms cannot (yet) be copyrighted (however some can sadly be patented) so we can mostly happily clone proprietary games and so free them. However this must be done carefully as there is a possibility of stepping on other mines, for example violating a trade dress (looking too similar visually) or a trademark (for example you cannot use the word tetris as it's owned by some shitty company) and also said patents (for example the concept of minigames on loading screens was patented in the past).

Trademarks have been known to cause problems in the realm of libre games, for example in the case of Nexuiz which had to rename to Xonotic after its original creator trademarked the name and started to make trouble.

Some Nice Gaymes

Anarch and microTD are examples of games trying to strictly follow the less retarded principles. SAF is a less retarded game library/fantasy console which comes with some less retarded games such as microTD.

{ I recommend checking out Xonotic, it's completely libre and one of the best games I've ever played. ~drummyfish }

See Also



Homosexuality is a sexual orientation and disorder which makes individuals sexually attracted primarily to the same sex. A homosexual individual is called gay, homo or even faggot (females are called lesbians). About 4% of people suffer from homosexuality.

For an unenlightened reader coming from the brainwashland: this article is not "offensive", it is just telling uncensored truth. Keep calm as we, LRS, are not advocating any discrimination, on the contrary we advocate absolute social equality and love of all living beings. Your indoctrination has made you equate political incorrectness with oppression, to see the truth, you have to unlearn this -- see for example our FAQ.

Unlike e.g. pedophilia and probably also bisexuality, pure homosexuality is NOT normal, it is a disorder -- of course the meaning of the word disorder is highly debatable, but pure homosexuality is firstly pretty rare (being gay is as rare as e.g. having IQ < 75), and secondly from the nature's point of view gay people wouldn't naturally reproduce, their condition is therefore equivalent to any other kind of sterility, which we most definitely would call a defect -- not necessarily a defect harmful to society (there are enough people already), but nonetheless a defect from biological point of view.

Gay behavior is also usually pretty weird, male homos are very feminine and talk in high pitched voice, lesbians are masculine, have short pink hair, often also aggressive nature and identity crisis manifested by tattoos etc. Most normal people naturally find this disgusting but are afraid to say it because of political correctness and fear of being lynched. You can usually safely tell someone's gay just from his body language and/or appearance. Gay people are also more inclined towards art and other sex's activities, for example gay guys are often hair dressers or even ballet dancers.

There is a terrorist fascist organization called LGBT aiming to make gay people superior to others, but more importantly to gain political power -- e.g. the power over language.

Even though homosexuality is largely genetically determined, it is also to a great extent a choice, sometimes a choice that's not of the individual in question. Most people are actually bisexual to a considerable degree, with a preference of certain sex. When horny, you'd fuck pretty much anything. Still there is a certain probability in each individual of choosing one or the other sex for a sexual/life partner. However culture and social pressure can push these probabilities in either way. If a child grows up in a major influence of YouTubers and other celebrities that openly are gay, or promote gayness as something extremely cool and fashionable, if all your role models are gay and your culture constantly paints being homosexual as being more interesting and somehow "brave" and if the competition of sexes fueled e.g. by the feminist propaganda paints the opposite sex as literal Hitler, the child has a greater probability of (maybe involuntarily) choosing the gay side of his sexual personality. This has certainly been happening in times when homosexuality was illegal, many gay people were forced to behave as heterosexuals and though many suffered, many have also lived quite OK and even happy lives -- nowadays the trend is opposite, being straight means being discriminated and society is forcing straight people to gayness.

Of course, we have nothing against gay people as we don't have anything against people with any other disorder -- we love all people equally. But we do have an issue with any kind of terrorist organization, so while we are okay with homosexuals, we are not okay with LGBT.

Are you gay? How can you tell? In doing so you should actually NOT be guided by your sexual desires -- as has been said, most people are bisexual and in sex it many times holds that what disgusts you normally turns you on when you're horny, i.e. if you're a guy and would enjoy sucking a dick, you're not necessarily gay, it may be pure curiosity or just the desire of "forbidden fruit"; this is quite normal. Whether you're gay is probably determined by what kind of LIFE partner you'd choose, i.e. what sex you can fall in a ROMANTIC relationship with. If you're a guy and fall in love with another guy -- i.e. you're passionate just about being with that guy (even in case you couldn't have sex with him) -- you're probably gay. (Of course this holds the other way around too: if you're a guy and enjoy playing with tits, you may not necessarily be straight.)






Geek is a wannabe nerd, it's someone who wants to identify with being smart rather than actually being smart. Geeks are basically what used to be called a smartass in the old days -- overly confident conformists occupying mount stupid who think soyence is actual science, they watch shows like Rick and Morty and Big Bang Theory, they browse Rational Wiki and reddit -- especially r/atheism, and they make appearances on r/iamverysmart -- they wear T-shirts with cheap references to 101 programming concepts and uncontrollably laugh at any reference to number 42, they think they're computer experts because they know the word Linux, managed to install Ubuntu or drag and drop programmed a "game" in Godot. Geeks don't really have their own opinions, they just adopt opinions presented on 9gag, they are extremely weak and don't have extreme views. They usually live the normal conformist life, they have friends, normal day job, wife and kids, but they like to say they "never fit in" -- a true nerd is living in a basement and doesn't meet any people, he lives on the edge of suicide and doesn't complain nearly as much as the "geek".



Gemini is a shitty pseudominimalist network protocol for publishing, browsing and downloading files, a simpler alternative to the World Wide Web and a more complex alternative to gopher (by which it was inspired). It is a part of so called Smol Internet. Gemini aims to be a "modern take on gopher", adding some new "features" and a bit of bloat. The project states it wants to be something in the middle between Web and gopher but doesn't want to replace either.

On one hand Gemini is kind of cool but on the other hand it's pretty shit, especially by REQUIRING the use of TLS encryption for "muh security" because the project was made by privacy freaks that advocate the ENCRYPT ABSOLUTELY EVERYTHIIIIIING philosophy. This is firstly mostly unnecessary (it's not like you do Internet banking over Gemini) and secondly adds a shitton of bloat and prevents simple implementations of clients and servers. Some members of the community called for creating a non-encrypted Gemini version, but that would basically be just gopher. Not even the Web goes as far as REQUIRING encryption, so it may be better and easier to just create a simple web 1.0 website rather than a Gemini capsule. And if you want ultra simplicity, we highly advocate to instead prefer using gopher which doesn't suffer from the mentioned issue.


Gender Studies

what the actual fuck



Gigachad is like chad, only more so. He has an ideal physique and makes women orgasm merely by looking at them.



See femoid.



Githopping is a disease similar to distrohopping but applied to git hosting websites. The disease became an epidemic after Micro$oft's takeover of GitHub when people started protest-migrating to GitLab, however GitLab became shit as well so people started hopping to other services like Codeberg etc. and now they are addicted to just copying their code from one site to another instead of doing actual programming.

Cure: free yourself of any git hosting, don't centralize your repos on one hosting, use multiple git hostings as mirrors for your code, i.e. add multiple push remotes to your local git and with every push update your repos all over the internet. Just spray the internet with your code and let it sink in, let it be captured in caches and archive sites and let it be preserved. DO NOT tie yourself to any specific git hosting by using any non-git features such as issue trackers or specialized CLI tools such as github cli. DO NOT use git hosting sites as a social network, just stop attention whoring for stars and likes, leave this kind of shit to tiktokers.
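The multiple push remotes setup mentioned above may be done e.g. as follows (the repository URLs are of course just placeholders, substitute your own hostings):

```shell
# one repo, one remote name, several push URLs: a single "git push"
# then updates all the mirrors at once
git init myproject
cd myproject
git remote add origin https://codeberg.org/you/myproject.git

# register every mirror as an additional push URL of origin
git remote set-url --add --push origin https://codeberg.org/you/myproject.git
git remote set-url --add --push origin https://gitlab.com/you/myproject.git

git remote -v   # shows one fetch URL and two push URLs
```

Note that the first set-url --add --push has to repeat the original URL, because as soon as an explicit push URL exists, git pushes only to the explicit list.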


Global Discussion

This is a place for general discussion about anything related to our thing. To comment just edit-add your comment. I suggest we use a tree-like structure as this example shows:

If the tree gets too big we can create a new tree under a new heading.

General Discussion



GNU ("GNU is Not Unix", a recursive acronym) is a large project started by Richard Stallman, the inventor of free (as in freedom) software, running since 1983 with the goal of creating a completely free (as in freedom) operating system, along with other free software that computer users might need. The project doesn't tolerate any proprietary software. The project achieved its goal of creating a complete operating system when a kernel named Linux became part of it in the 90s as the last piece of the puzzle -- the system is now known as GNU/Linux. However, the GNU project didn't end and continues to further develop the operating system as well as a myriad of other software projects it hosts. GNU gave rise to the Free Software Foundation and is one of the most important software projects in history of computing.

The mascot of GNU is literally a gnu (wildebeest).

The GNU/Linux operating system has several variants in a form of a few GNU approved "Linux" distributions such as Guix, Trisquel or Parabola. Most other "Linux" distros don't meet the strict standards of GNU such as not including any proprietary software. In fact the approved distros can't even use the standard version of Linux because that contains proprietary blobs, a modified variant called Linux-libre has to be used.

GNU greatly prefers GPL licenses, i.e. it strives for copyleft, even though it accepts even projects under permissive licenses. GNU also helps with enforcing these licenses legally and advises developers to transfer their copyright to GNU so that they can "defend" the software for them.

Although GNU is great and has been one of the best things to happen in software ever, it has its flaws. For example their programs are known to be kind of a bloat, at least from the strictly suckless perspective. It also doesn't mind proprietary non-functional data (e.g. assets in video games) and their obsession with copyleft also isn't completely aligned with LRS.



GNU Projects

GNU has developed an almost unbelievable amount of software, it has software for all basic and some advanced needs. As of writing this there are 373 software packages in the official GNU repository (at https://directory.fsf.org/wiki/Main_Page). Below are just a few notable projects under the GNU umbrella.

See Also



Go is a compiled programming language advertised as the "modern" C and co-authored by Ken Thompson, the co-creator of Unix and author of C's predecessor B. Nevertheless Go is actually shit compared to C. Some reasons for this are:

Anyway, it at least tries to stay somewhat simple in some areas and as such is probably better than other modern languages like Rust. It purposefully omits features such as implicit type conversions, which is good (generics were long omitted too, though these were sadly added in Go 1.18 in 2022).


Goodbye World

Goodbye world is a program that is in some sense an opposite of the traditional hello world program. What exactly this means is not strictly given, but some possibilities are:


Good Enough

A good enough solution to a problem is a solution that solves the problem satisfyingly (not necessarily precisely or completely) while achieving minimal cost (effort, implementation time etc.). This is in contrast to looking for a better solution for a higher cost. For example a tent is a good enough accommodation solution while a luxury house is a better solution (more comfortable, safe, ...) for a higher cost.

To give an example from the world of programming, bubble sort is in many cases better than quick sort for its simplicity, even though it's much slower.

In technology we are oftentimes looking for a good enough solution to achieve minimalism and save valuable resources (computational resources, programmer time etc.). It rarely makes sense to look for solutions that are more expensive than they need to be, however in the context of capitalist software we see this happen many times as price is artificially and intentionally driven up for economic reasons (e.g. increasing the cost of maintenance of a software eliminates any competition that can't afford such cost). This is only natural in capitalism, we see the tendency to waste resources everywhere. This needs to be stopped.



Google is one of the very top big tech corporations, as well as one of the worst corporations in history (if not THE worst), comparable only to Micro$oft and Facebook. Google is gigantically evil and largely controls the Internet, pushes mass surveillance, personal data collection and abuse, ads, bloat, fascism and censorship.

Google's motto used to be "Don't be evil", but in 2018 they ditched it lol xD

Google rose to the top thanks to its search engine launched in the 90s. It soon gained a monopoly on Internet search and started pushing ads. Nowadays Google's search engine basically just promotes "content" on Google's own content platforms such as YouTube and of course censors sites deemed politically incorrect.

Besides heavily biasing web search results towards Google's own and friendly platforms, Google also heavily censors the search results and won't show links to prohibited sites unless you literally very specifically show that you want to find a prohibited site you already know of, for example you won't find results leading to Metapedia or Encyclopedia Dramatica unless you literally search for the URL of those sites or long verbatim phrases they contain -- this is a trick played on those who "test" Google, meant to make it look as if Google actually isn't censored, however it is of course censored because the only people who will ever find the prohibited sites and their content are people who already know about them and are specifically searching for them just to test Google's censorship. { EDIT: tho Google also seems to refuse to give some URLs no matter what, e.g. https://infogalactic.com. Just tested it. ~drummyfish } If you intend to truly search the Internet, don't rely on Google's results but search with multiple engines (that have their own index) such as Mojeek, Yandex, Right Dao, wiby, YaCy, Qwant etc. (and of course search the darknet).

Google has created a malicious capitalist mobile "operating system" called Android, which they based on Linux and with which they managed to bypass its copyleft by making Android de-facto dependent on their proprietary Play Store and other programs. I.e. they managed to take a free project and make de-facto proprietary malware out of it -- a system that typically doesn't allow users to modify its internals and turn off its malicious features. With Android they invaded a huge number of devices from cell phones to TVs and have the ability to spy on the users of these devices.

Google also tries to steal the public domain: they scan and digitize old books whose copyright has expired and put them on the Internet Archive, however to these scans they attach a condition that they must not be used for commercial purposes, i.e. they try to keep an exclusive commercial right to public domain works, something they have no right to do at all.



Gopher is a network protocol for publishing, browsing and downloading files and is known as a much simpler alternative to the World Wide Web (i.e. to HTTP and HTML). In fact it competed with the Web in its early days and even though the Web won in the mainstream, gopher still remains used by a small community. Gopher is like the Web but well designed, it is the suckless/KISS way of doing what the Web does, it contains practically no bloat and so we highly advocate its use. Gopher inspired the creation of Gemini, a similar but slightly more complex and "modern" protocol, and the two together have recently become the main part of so called Smol Internet.

As of 2022 the Veronica search engine reported 343 gopher servers in the world with 5+ million indexed selectors.

Gopher doesn't use any encryption. This is good, encryption is bloat. Gopher also only uses ASCII, i.e. there's no Unicode. That's also good, Unicode is bloat (and mostly serves trannies to insert emojis of pregnant men into readmes, we don't need that). Gopher simple design is intentional, the authors deemed simplicity a good feature. Gopher is so simple that you may very well write your own client and server and comfortably use them (it is also practically possible to browse gopher without a specialized client, just with standard Unix CLI tools).

From the user's perspective the most important distinction from the Web is that gopher is based on menus instead of "webpages"; a menu is simply a column of items of different predefined types, most importantly e.g. a text file (which clients can directly display), directory (link to another menu), text label (just shows some text), binary file etc. A menu can't be formatted or visually changed, there are no colors, images, scripts or hypertext -- a menu is not a presentation tool, it is simply a navigation node towards files users are searching for (but the mentioned ASCII art and label items allow for somewhat mimicking "websites" anyway). Addressing works with URLs just as on the Web, the URLs just differ by the protocol part (gopher:// instead of http://), e.g.: gopher://gopher.floodgap.com:70/1/gstats. What on the Web is called a "website" is on gopher called a gopherhole (i.e. a collection of resources usually under a single domain) and the whole gopher network is called a gopherspace. Blogs are common on gopher and are called phlogs (collectively a phlogosphere). As menus can refer to one another, gopher creates something akin to a global file system, so browsing gopher is like browsing folders and can comfortably be handled with just 4 arrow keys. Note that as menus can link to any other menu freely, the structure of the "file system" is not a tree but rather a general graph. Another difference from the Web is gopher's great emphasis on plaintext and ASCII art as it cannot embed images and other media in the menus (even though of course the menus can link to them). There is also support for sending text to a server so it is possible to implement search engines, guest books etc.

Gopher is just an application layer protocol (officially running on port 70 assigned by IANA), i.e. it sits above lower layer protocols like TCP and takes the same role as HTTP on the Web, so it only defines how clients and servers talk to each other -- the gopher protocol doesn't say how menus are written or stored on servers. Nevertheless for the creation of menus so called gophermaps have been established, a simple format for writing menus and the gopher equivalent of the Web's HTML files (just much simpler, basically just menu items on separate lines, the exact syntax being ultimately defined by the server implementation). A server doesn't have to use gophermaps, it may e.g. be configured to create menus automatically from directories and files stored on the server, however gophermaps allow users to write custom menus manually. Typically in someone's gopherhole you'll be served a welcoming intro menu similar to a personal webpage that's been written as a gophermap, which may then link to directories storing personal files or other hand written menus. Some gopher servers also allow creating dynamic content with scripts called moles.
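For illustration, a tiny hand written gophermap might look as follows (the exact syntax depends on the server; here the common Gophernicus-like convention is used: lines without tabs are informational text, link lines consist of the item type character glued to the display string, then TAB-separated selector, host and port; example.org is of course a placeholder):

```
Welcome to my gopherhole!
Have a look around:

1my phlog	/phlog	example.org	70
0about me	/about.txt	example.org	70
9a binary file	/files/demo.bin	example.org	70
```

Item type 1 is a directory (submenu), 0 a text file and 9 a binary file.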

Gopher software: sadly "modern" browsers are so modern they have millions of lines of code but can't be bothered to support such a trivial protocol like gopher, however there are Web proxies you can use to explore gopherspace. Better browsers such as lynx (terminal) or forg (GUI) can be used for browsing gopherspace natively. As a server you may use e.g. Gophernicus (used by SDF) or search for another one, there are dozens. For the creation of gophermaps you simply use a plaintext editor. Where to host gopher? Pubnixes such as SDF, tilde.town and Circumlunar community offer gopher hosting but many people simply self-host servers e.g. on Raspberry Pis, it's pretty simple.




Computer Graphics

Computer graphics (CG or just graphics) is a field of computer science that deals with visual information. The field doesn't have strict boundaries and can blend and overlap with other possibly separate topics such as physics simulations, multimedia and machine learning. It usually deals with creating or analyzing 2D and 3D images and as such CG is used in data visualization, game development, virtual reality, optical character recognition and even astrophysics or medicine.

We can divide computer graphics in different ways, traditionally e.g.:

Since the 90s computers have used dedicated hardware to accelerate graphics: so called graphics processing units (GPUs). These have allowed rendering of high quality images at high FPS, and due to the entertainment and media industry (especially gaming), GPUs have been pushed towards greater performance each year. Nowadays they are among the most consumerist hardware, also due to the emergence of general purpose computations being moved to GPUs (GPGPU), lately especially mining of cryptocurrencies and training of AI. Most lazy programs dealing with graphics nowadays simply expect and require a GPU, which creates a bad dependency and bloat. At LRS we try to prefer the suckless software rendering, i.e. rendering on the CPU, without a GPU, or at least offer this as an option in case a GPU isn't available. This many times leads us towards the adventure of using old and forgotten algorithms from the times before GPUs.

3D Graphics

This is a general overview of 3D graphics, for more technical overview of 3D rendering see its own article.

3D graphics is a big part of CG but is a lot more complicated than 2D. It tries to achieve realism through the use of perspective, i.e. looking at least a bit like what we see in the real world. 3D graphics can very often be seen as simulating the behavior of light; there exists so called rendering equation that describes how light behaves ideally, and 3D computer graphics tries to approximate the solutions of this equation, i.e. the idea is to use math and physics to describe real-life behavior of light and then simulate this model to literally create "virtual photos". The theory of realistic rendering is centered around the rendering equation and achieving global illumination (accurately computing the interaction of light not just in small parts of space but in the scene as a whole) -- studying this requires basic knowledge of radiometry and photometry (fields that define various measures and units related to light such as radiance, radiant intensity etc.).
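The rendering equation itself has the following standard form (x is a point on a surface, n its normal, omega_o the outgoing and omega_i the incoming light direction):

```latex
L_o(x, \omega_o) = L_e(x, \omega_o) +
  \int_\Omega f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,
  (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

Here L_o is the radiance leaving point x in direction omega_o, L_e the radiance emitted by the surface itself, f_r the BRDF describing the material, and the integral sums the contribution of light incoming over the whole hemisphere Omega above the point.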

In the 2010s mainstream 3D graphics started to employ so called physically based rendering (PBR) which tries to use yet more physically correct models of materials (e.g. physically measured BRDFs of various materials) to achieve higher photorealism. This is in contrast to the simpler (both mathematically and computationally), more empirical models (such as a single texture + phong lighting) used in earlier 3D graphics.

Because 3D is not very easy (for example rotations are pretty complicated), there exist many 3D engines and libraries that you'll probably want to use. These engines/libraries work on different levels of abstraction: the lowest level ones, such as OpenGL and Vulkan, offer a portable API for communicating with the GPU that lets you quickly draw triangles and write small programs that run in parallel on the GPU -- so called shaders. The higher level ones, such as OpenSceneGraph, work with abstractions such as that of a virtual camera and a virtual scene into which we place specific 3D objects such as models and lights (the scene is many times represented as a hierarchical graph of objects that can be "attached" to other objects, a so called scene graph).

There is a tiny suckless/LRS library for real-time 3D: small3dlib. It uses software rendering (no GPU) and can be used for simple 3D programs that can run even on low-spec embedded devices. TinyGL is a similar software-rendering library that implements a subset of OpenGL.

Real-time 3D typically uses object-order rendering, i.e. iterating over objects in the scene and drawing them onto the screen (i.e. we draw object by object). This is a fast approach but has disadvantages such as (usually) needing a memory inefficient z-buffer so as to not overwrite closer objects with more distant ones. It is also pretty difficult to implement effects such as shadows or reflections in object-order rendering. The 3D models used in real-time 3D are practically always made of triangles (or other polygons) because the established GPU pipelines work on the principle of drawing polygons.

Offline rendering (non-real-time, e.g. 3D movies) on the other hand mostly uses image-order algorithms which go pixel by pixel and for each one determine what color the pixel should have. This is basically done by casting a ray from the camera's position through the "pixel" position and calculating which objects in the scene get hit by the ray; this then determines the color of the pixel. This more accurately models how rays of light behave in real life (even though in real life the rays go the opposite way: from lights to the camera, but this is extremely inefficient to simulate). The advantage of this process is a much higher realism and the implementation simplicity of many effects like shadows, reflections and refractions, and also the possibility of having other than polygonal 3D models (in fact smooth, mathematically described shapes are normally much easier to check ray intersections with). Algorithms in this category include ray tracing or path tracing. In recent years we've seen these methods brought, in a limited way, to real-time graphics on the high end GPUs.



"For every car you consume we plant a tree." --corporations



Graphical User Interface

Graphical user interface (GUI) is a visual user interface that uses graphics such as images and geometrical shapes. This stands in contrast with text user interface (TUI) which is also visual but only uses text for communication.

Expert computer users normally frown upon GUIs because they are the "noobish", inefficient, limiting, cumbersome, hard to automate way of interacting with a computer. GUIs bring complexity and bloat, they are slow, inefficient and distracting. We try not to use them and prefer the command line.

"Modern" GUIs mostly use callback-based programming, which again is more complicated than standard polling non-interactive I/O. If you need to do GUI, just use a normal infinite loop FFS.

When And How To Do GUI

GUI is not forbidden, it has its place, but today it's way too overused -- it should be used only if completely necessary (e.g. in a painting program) or as a completely optional thing built upon a more suckless text interface or API. So remember: first create a program and/or a library working without GUI and only then consider creating an optional GUI frontend. GUI must never be tied to whatever functionality can be implemented without it.

Still, when making a GUI, you can make it suckless and lightweight. Do your buttons need to have reflections, soft shadows and rounded anti-aliased borders? No. Do your windows need to be transparent with light-refraction simulation? No. Do you need to introduce many MB of dependencies and pain such as QT? No.

The ergonomics and aesthetic design of GUIs has its own field and can't be covered here, but just keep in mind some basic things:

The million dollar question is: which GUI framework to use? Ideally none. GUI is just pixels, buttons are just rectangles; make your GUI simple enough so that you don't need any shitty abstraction such as widget hierarchies etc. If you absolutely need some framework, look for a suckless one; e.g. nuklear is worth checking out. The suckless community sometimes uses pure X11, however that's not ideal, X11 itself is kind of bloated and it's also getting obsoleted by Wayland. The ideal solution is to make your GUI backend agnostic, i.e. create your own very thin abstraction layer above the backend (e.g. X11) so that any other backend can be plugged in if needed just by rewriting a few simple functions of your abstraction layer (see how e.g. Anarch does rendering).


Hacker Culture

See hacking.



Not to be confused with cracking.

Hacking (also hackerdom) in the widest sense means exploiting usually (but not necessarily) a computer system in a clever way. In the context of computers the word hacker was originally -- that is in the 1960s -- used for very good programmers and people who were simply good with computers, and the word hacking had a completely positive meaning; hacker could almost be synonymous with computer genius (at the time people handling computers were usually physicists, engineers or mathematicians), someone who enjoyed handling and programming computers and could playfully look for very clever ways of making them do what he wanted. Over time hackers evolved a whole hacker culture with its own slang, set of values, behavioral and ethical norms, in-jokes and rich lore. As time marched on, computer security started to become an important topic and some media started to use the word hacker for someone breaking into a computer system, and so the word gained a negative connotation in the mainstream -- though many refused to accept this new meaning and rather used the word cracker for a "malicious hacker", there appeared new variants such as white hat and black hat hacker, referring to ethical and malicious hackers. With the onset of online games the word hacking even became a synonym for cheating. The original positive meaning has recently seen some comeback with the popularity of sites such as Hacker News or Hackaday, the word life hack has even found its way into the non-computer mainstream dictionary, however a "modern hacker" is a bit different from the oldschool hacker, usually for the worse (for example a modern self proclaimed "hacker" has no issue with wearing a suit, something that would be despised by an oldschool hacker). We, LRS, advocate for using the original, oldschool meaning of the word hacker.

Original Hacker Culture

The original hacker culture is a culture of the earliest computer programmers, usually smart but socially rather isolated nerds -- at the time mostly physicists, mathematicians and engineers -- who shared deep love for programming and pure joy of coming up with clever computer tricks, exploration of computers and freely sharing their knowledge and computer programs with each other. The culture started to develop rapidly at MIT in about the second half of 1960s, though other hacker communities existed earlier and in other places as well (still mostly at universities).

The word hack itself seems to have come from a model train club at MIT in whose slang the word referred to something like a project of passion without a specific goal; before this the word was used around MIT for a specific kind of clever but harmless prank. Members of the model train club came into contact with early computers at MIT and brought their slang along. These early punch-card computers were expensive and sacred, hackers treated them as almost supernatural entities; in the book Hackers it is mentioned that those who were allowed to operate the machines were called Priests -- Priests would often carry out a little prayer to please the machine so that it would bless them with computation. During the 60s and 70s so called phreaking -- hacking the phone network -- was popular among hackers.

Many ideas -- such as the beauty of minimalism -- that became part of hacker culture later came from the development of Unix and establishment of its programming philosophy. Many hackers came from the communities revolving around PDP 10 and ARPANET, and later around networks such as Usenet. At the time when computers started to be abused by corporations, Richard Stallman's definition of free software and his GNU project embodied the strong hacker belief in information freedom and their opposition of intellectual property.

The culture has a deep lore and its own literature consisting of books that hackers usually like (e.g. The Hitchhiker's Guide to the Galaxy) and books by hackers themselves. Bits of the lore circulate as folkloric short stories, a very popular form being so called Koans. Perhaps the most iconic hacker story is the Story of Mel which tells a true story of a master hacker keeping to his personal ethical beliefs under the pressure of his corporate employers -- a conflict between manager employers ("suits") and hacker employees is a common theme in the stories. Other famous stories include the TV typewriter and Magic Switch. One of the most famous hacker books is the Jargon File, a collectively written dictionary documenting hacker culture in detail. The 1987 book The Tao of Programming captures the hacker wisdom with Taoist-like texts that show how spiritual hacking can get -- this reflects the above mentioned sacred nature of the early computers. The textfiles website features many text files on hacking at https://textfiles.vistech.net/hacking/. See also Ten Commandments for C Programmers etc. A lot about hackers can be learned from books about them, e.g. the free book Free as in Freedom about Richard Stallman (available e.g. here). A prominent hacker writer is Eric S. Raymond who produced a very famous essay The Cathedral and the Bazaar, edited the Jargon File and has written a large guide called How To Become A Hacker -- these are all good resources on hackerdom, even though Raymond himself is kind of shitty, he for example prefers the "open source" movement to free software.

As a symbol of hackerdom the glider symbol from game of life is sometimes used, it looks like this:

  .#.
  ..#
  ###

Let us now attempt to briefly summarize what it means to be a hacker:

Let's mention a few people who were at their time regarded by at least some as true hackers, however note that many of them betrayed some of the hacker ways either later in life or even in their young years -- people aren't perfect and no single individual is a perfect example of a whole culture. With that said, those regarded as hackers included Melvin Kaye aka Mel, Richard Stallman, Linus Torvalds, Eric S. Raymond, Ken Thompson, Dennis Ritchie, Richard Greenblatt, Bill Gosper, Steve Wozniak or Larry Wall.

"Modern" "Hackers"

Many modern zoomer soydevs call themselves "hackers" but there are basically none that would stay true to the original ethics and culture and be worthy of being called a true hacker, they just abuse the word as a cool term or a brand (see e.g. "hacker" news). It's pretty sad the word has become a laughable parody of its original meaning by being associated with groups such as Anonymous who are just a bunch of 14 year old children trying to look like "movie hackers". The hacker culture has been spoiled basically in the same ways as the rest of society, and the difference between classic hacker culture and the "modern" one is similar to the difference between free software and open source, though perhaps more amplified -- the original culture of strong ethics has become twisted by capitalist trends such as self-interest, commercialization, fashion, mainstreamization, even shitty movie adaptations etc. The modern "hackers" are idiots who have never seen assembly, can't do math, they're turds in suits who make startups and work as influencers, they are tech consumers who use and even create bloat, and possibly even proprietary software. For the love of god, do NOT mimic such caricatures or give them attention -- not only are they not real hackers, they are simply retarded attention whores.

Security "Hackers"

Hacker nowadays very often refers to someone involved in computer security, either the one who "protects" (mostly by looking for vulnerabilities and reporting them), so called white hat, or the one who attacks, so called black hat. Those are not hackers in the original sense, they are hackers in the mainstream adopted meaning of someone breaking into a system. This kind of "hacker" betrays the original culture by supporting secrecy and censorship, i.e. "protection" of "sensitive information" mostly justified by so called "privacy" -- this violates the original hacker's pursuit of absolute information freedom (note that e.g. Richard Stallman boycotted even the use of passwords at MIT, and Raymond discourages using anonymous handles, rather recommending going by your real name). These people are obsessed with anonymity, encryption, cryptocurrencies, cryptofascism and are also more often than not egoist people with shitty personalities. In addition they don't generally adhere to the original hacker culture in any way either, they are simply people breaking into systems for some kind of self benefit (yes, even the white hats), nothing more than that. Again, do NOT try to mimic these abominations.

Examples Of Hacks

{ As a redditfag I used to follow the r/devtricks subreddit, it contained some nice examples of hacks. ~drummyfish }

A great many commonly used tricks in programming could be regarded as hacks even though many are not called so because they are already well known and no longer innovative, a true hack is something new that impresses fellow hackers. And of course hacks may appear outside the area of technology as well. The following is a list of things that were once considered new hacks or that are good examples demonstrating the concept:

See Also


Hard To Learn, Easy To Master

"Hard to learn, easy to master" is the opposite of "easy to learn, hard to master".

Example: drinking coffee while flying a plane.



The article is here!


Harry Potter

Harry Potter is a franchise and universe by an English female writer J. K. Rowling about wizards and magic { like ACTUAL wizards and magic. ~drummyfish } that started in 1997 as an immensely successful series of seven children's and young adult books, was followed by movies and later on by many other spinoff media such as video games. It made J. K. Rowling a billionaire and has become the most famous and successful book series of the modern age. At first the books sparked controversies and opposition in religious communities for "promoting witchcraft"; in recent years the universe and stories have become a subject of wider political analysis and fights, as with most other things.

{ The books are actually good -- not the best in the world, I've read many better ones that would better deserve this kind of attention, but still the work is admirable. There is of course tons of money in the franchise so it's getting raped and milked like any other IP capital -- this is of course spoiling and killing the work, so be careful. ~drummyfish }

Plot summary: sorry, we're not writing a plot summary here, thank copyright laws -- yes, fair use allows us to do it but it would make us non free :) Let's just say the story revolves around a boy named Harry Potter who goes to a wizard school with two friends and they're together saving the world from Lord Voldemort, the wizard equivalent of Hitler. Overall the books start on a very light note and get progressively darker and more adult, turning into a story about "World War II but with wizards'n'magic". It's pretty readable, with great, unique atmosphere, pleasant coziness and elements of many literary genres, there's nice humor and good ideas. Also the lore is very deep.



Hash

Hash is a number that's computed from some data in a chaotic way and which is used for many different purposes, e.g. for quick comparisons (instead of comparing big data structures we just compare their hashes) or mapping data structures to table indices.

Hash is computed by a hash function, a function that takes some data and turns it into a number (the hash) that's in terms of bit width much smaller than the data itself, has a fixed size (number of bits) and which has additional properties such as being completely different from hash values computed from very similar (but slightly different) data. Thanks to these properties hashes have a very wide use in computer science -- they are often used to quickly compare whether two pieces of non-small data, such as documents, are the same, they are used in indexing structures such as hash tables which allow for quick search of data, and they find a great use in cryptocurrencies and security, e.g. for digital signatures or storing passwords (for security reasons in databases of users we store just hashes of their passwords, never the passwords themselves). Hashing is extremely important and as a programmer you won't be able to avoid encountering hashes somewhere in the wild.

{ Talking about wilderness, hyenas have their specific smells that are determined by bacteria in them and are unique to each individual depending on the exact mix of the bacteria. They use these smells to quickly identify each other. The smell is kind of like the animal's hash. But of course the analogy isn't perfect, for example similar mixes of bacteria may produce similar smells, which is not how hashes should behave. ~drummyfish }

It is good to know that we distinguish between "normal" hashes used for things such as indexing data and cryptographic hashes that are used in computer security and have to satisfy some stricter mathematical criteria. For the sake of simplicity we will sometimes ignore this distinction here. Just know it exists.

It is generally given that a hash (or hash function) should satisfy the following criteria:

- determinism: the same input data always produce the same hash.
- fixed size: the hash has a set number of bits no matter how big the input data is.
- chaotic behavior: a small change in input data (even in a single bit) produces a completely different hash.
- uniformity: hash values are spread evenly over the space of possible inputs so as to minimize collisions.
- speed: the hash can be computed quickly.

Hashes are similar to checksums but are different: checksums are simpler because their only purpose is for checking data integrity, they don't have to have a chaotic behavior, uniform mapping and they are often easy to reverse. Hashes are also different from database IDs: IDs are just sequentially assigned numbers that aren't derived from the data itself, they don't satisfy the hash properties and they have to be absolutely unique. The term pseudohash may also be encountered, it seems to be used for values similar to true hashes which however don't quite satisfy the definition.

{ I wasn't able to find an exact definition of pseudohash, but I've used the term myself e.g. when I needed a function to make a string into a corresponding fixed length string ID: I took the first N characters of the string and appended M characters representing some characteristic of the original string such as its length or checksum -- this is what I called the string's pseudohash. ~drummyfish }

Some common uses of hashes are:


Let's say we want a hash function for strings, one that for any ASCII string outputs a 32 bit hash. How to do this? We need to make sure that every character of the string affects the resulting hash.

The first thought that may come to mind could be for example to multiply the ASCII values of all the characters in the string. However there are at least two mistakes in this: firstly short strings will result in small values as we'll get a product of fewer numbers (so similar strings such as "A" and "B" will give similar hashes, which we don't want). Secondly reordering the characters in a string (i.e. its permutations) will not change the hash at all (as with multiplication order is insignificant)! These violate the properties we want in a hash function. If we used this function to implement a hash table and then tried to store strings such as "abc", "bca" and "cab", all would map to the same hash and cause collisions that would negate the benefits of a hash table.

A better hash function for strings is shown in the section below.

Nice Hashes

{ Reminder: I make sure everything on this Wiki is pretty copy-paste safe, from the code I find on the Internet I only copy extremely short (probably uncopyrightable) snippets of public domain (or at least free) code and additionally also reformat and change them a bit, so don't be afraid of the snippets. ~drummyfish }

Here is a simple and pretty nice 8bit hash, it outputs all possible values and all its bits look quite random: { Made by me. ~drummyfish }

uint8_t hash(uint8_t n)
{
  n *= 23;
  n = ((n >> 4) | (n << 4)) * 11;
  n = ((n >> 1) | (n << 7)) * 9;

  return n;
}

The hash prospector project (unlicense) created a way for automatic generation of integer hash functions with nice statistical properties which work by XORing the input value with a bit-shift of itself, then multiplying it by a constant and repeating this a few times. The functions are of the format:

uint32_t hash(uint32_t n)
{
  n = A * (n ^ (n >> S1));
  n = B * (n ^ (n >> S2));
  return n ^ (n >> S3);
}

Where A, B, S1, S2 and S3 are constants specific to each function. Some nice constants found by the project are:

A           B           S1  S2  S3
303484085   985455785   15  15  15
88290731    342730379   16  15  16
2626628917  1561544373  16  15  17
3699747495  1717085643  16  15  15

The project also explores 16 bit hashes, here is a nice hash that doesn't even use multiplication!

uint16_t hash(uint16_t n)
{
  n = n + (n << 7); 
  n = n ^ (n >> 8);
  n = n + (n << 3); 
  n = n ^ (n >> 2);
  n = n + (n << 4);
  return n ^ (n >> 8);
}

Here is a nice string hash, works even for short strings, all bits look pretty random: { Made by me. ~drummyfish }

uint32_t strHash(const char *s)
{
  uint32_t r = 21;

  while (*s)
  {
    r = (r * 31) + *s;
    s++;
  }

  r = r * 4451;
  r = ((r << 19) | (r >> 13)) * 5059;

  return r;
}

TODO: more


Hero Culture

Hero culture is a harmful culture of creating and worshiping heroes which leads to e.g. creation of cults of personality, strengthening fight culture and establishing hierarchical, anti-anarchist society of "winners" and "losers". The concept of a hero is one that arose in the context of wars and other, many times violent, conflicts; a hero is different from a mere authority in some area, it is someone who creates fear of disagreement and whose image is distorted to a much more positive, sometimes godlike state, by which he distorts truth and is given a certain power over others. Therefore we highly warn about falling into the trap of hero culture, though avoiding it is very difficult in the current highly hierarchical society. To us, the word hero has a pejorative meaning. Our advice is always this:

Do NOT create heroes. Follow ideas, not people. And similarly: hate ideas, not people.

Smart people know this and those being named heroes themselves many times protest it, e.g. Marie Curie has famously stated: "be less curious about people and more curious about ideas." Anarchists purposefully don't name theories after their inventors but rather by their principles, knowing that people are imperfect, they carry distorting associations and their images are twisted by history and politics. Abusive regimes are the ones who use heroes and their names for propaganda -- Stalinism, Leninism, corporations such as Ford, named after their founder etc. Heroes become brands whose stamp of approval is used to push bad ideas... especially popular are heroes who are already dead and can't protest their image being abused -- see for example how Einstein's image has been raped by capitalists for their own propaganda, e.g. by Apple's marketing, while in fact Einstein was a pacifist socialist. This is not to say an idea's name cannot be abused, the word communism has for example become something akin to a swear word after being abused by regimes that had little to do with real communism. Nevertheless it is still much better to focus on ideas as ideas always carry their own principle embedded within them, visible to anyone willing to look. Focusing on ideas allows us to discuss them critically, it allows us to reject a bad concept without "attacking" the human who came up with it.



HEROES ARE HARMFUL. See hero culture.




Some hexadecimal values that are also English words at the same time and which you may include in your programs for fun include: ace, add, babe, bad, be, bee, beef, cab, cafe, dad, dead, deaf, decade, facade, face, fee, feed.




{ There are probably errors, you can send me an email if you find some. ~drummyfish }

This is a brief summary of history of technology and computers.

The earliest known appearance of technology related to humans is the use of stone tools by hominids in Africa some two and a half million years ago. Learning to start and control fire was one of the most important advances of the earliest humans; this probably happened hundreds of thousands to millions of years ago, even before modern humans. Around 8000 BC the Agricultural Revolution happened: this was a disaster -- as humans domesticated animals and plants, they had to abandon the comfortable life of hunters and gatherers and started to suffer greatly from the extremely hard work on their fields (this can be seen e.g. from their bones). This led to the establishment of first cities. Primitive writing can be traced back to about 7000 BC in China. The wheel was another extremely useful technology humans invented, it is not known exactly when or where it appeared, but it might have been some time after 5000 BC -- in Ancient Egypt the Great Pyramid was built around 2570 BC still without the knowledge of the wheel. Around 4000 BC history starts with the first written records. Humans learned to smelt and use metals approximately 3300 BC (Bronze Age) and 1200 BC (Iron Age). Abacus, one of the simplest devices aiding with computation, was invented roughly around 2500 BC. However people used primitive computation helping tools, such as bone ribs, probably almost from the time they started trading. Babylonians in around 2000 BC were already able to solve some forms of quadratic equations.

After 600 BC the Ancient Greek philosophy starts to develop which would lead to strengthening of rational, scientific thinking and advancement of logic and mathematics. Around 300 BC Euclid wrote his famous Elements, a mathematical work that proves theorems from basic axioms. Around 400 BC camera obscura was already described in a written text from China, where gears also seem to have been invented soon after. Ancient Greeks could communicate over great distances using Phryctoria, chains of fire towers placed on mountains that forwarded messages to one another using light. In 234 BC Archimedes described the famous Archimedes screw and created an algorithm for computing the number pi. In the 2nd century BC the Antikythera mechanism, the first known analog computer, is made to predict movement of heavenly bodies. Romans are known to have been great builders, they built many roads and such structures as the Pantheon (126 AD) and aqueducts with the use of their own type of concrete and advanced understanding of physics.

Around 50 AD Heron of Alexandria, an Egyptian mathematician, created a number of highly sophisticated inventions such as a vending machine that accepted coins and gave out holy water, and a cart that could be "programmed" with strings to drive on its own.

In the 3rd century Chinese mathematician Liu Hui describes operations with negative numbers, even though negative numbers have already appeared before. In 600s AD an Indian astronomer Brahmagupta first used the number zero in a systematic way, even though hints on the number zero without deeper understanding of it appeared much earlier. In 9th century the Mayan empire is collapsing, though it would somewhat recover and reshape.

Around the year of our Lord 1450 a major technological leap known as the Printing Revolution occurred. Johannes Gutenberg, a German goldsmith, perfected the process of producing books in large quantities with the movable type press. This made books cheap to publish and buy and contributed to fast spread of information and better education. Around this time the Great Wall of China is being built.

The year 1492 marks the discovery of America by Christopher Columbus who sailed over the Atlantic Ocean, though he probably wasn't the first in history to do so, and it wasn't realized before his death that he had sailed to America.

During the 1700s a major shift in civilization occurred, called the Industrial Revolution -- this was another disaster that would lead to the transformation of common people into factory slaves and the loss of their self sufficiency. The revolution spanned roughly from 1750 to 1850. It was a process of rapid change in the whole society due to new technological inventions that also led to big changes in how people lived their everyday lives. It started in Great Britain but quickly spread over the whole world. One of the main changes was the transition from manual manufacturing to factory manufacturing using machines and sources of energy such as coal. The steam engine played a key role. Work became a form of a highly organized slavery system, society became industrialized. This revolution became highly criticized as it unfortunately opened the door for capitalism and made people dependent on the system as everyone had to become a specialized cog in the society machine; at this time people started to measure time in minutes and lead very planned lives with less joy. But there was no way back.

In 1712 Thomas Newcomen invented the first widely used steam engine, used mostly for pumping water, even though steam powered machines had already been invented a long time before. The engine was significantly improved by James Watt in 1776. Around 1770 Nicolas-Joseph Cugnot created the first somewhat working steam-powered car. In 1784 William Murdoch built a small prototype of a steam locomotive which would be perfected over the following decades, leading to a transportation revolution; people would be able to travel far away for work, the world would become smaller, which would be the start of globalization. The railway system would make common people measure time with minute precision.

In 1792 Claude Chappe invented the optical telegraph, also called semaphore. The system consisted of towers spaced up to 32 km apart which forwarded textual messages by arranging big arms on top of the towers to signal specific letters. With this, messages between Paris and Strasbourg, i.e. over almost 500 km, could be transferred in under half an hour. The system was reserved for the government, however in 1834 it was hacked by two bankers who bribed the tower operators to transmit information about the stock market along with the main message (by setting specific positions of arms that otherwise didn't carry any meaning), so that they could get an advantage on the market.

By 1800 Alessandro Volta invented an electric battery. In 1827 André-Marie Ampère publishes a further work shedding light on electromagnetism. After this electric telegraph would be worked on and improved by several people and eventually made to work in practice. In 1821 Michael Faraday invented the electromotor. Georg Ohm and especially James Maxwell would subsequently push the knowledge of electricity even further.

In 1822 Charles Babbage, a great English mathematician, completed the first version of a manually powered digital mechanical computer called the Difference Engine to help with the computation of values of polynomial functions for creating mathematical tables used e.g. in navigation. It was met with success and further development was funded by the government, however difficulties of the construction led to never finishing the whole project. In 1837 Babbage designed a new machine, the Analytical Engine, this time a Turing complete general purpose computer, i.e. allowing for programming with branches and loops, a true marvel of technology. It also ended up not being built completely, but it showed a lot about what computers would be, e.g. it had an assembly-like programming language, memory etc. For this computer Ada Lovelace would famously write the Bernoulli number algorithm.

In 1826 or 1827 French inventor Nicéphore Niépce captured the first photograph that survived until today -- a view from his estate named Le Gras. About an 8 hour exposure was used (some say it may have taken several days). He used a camera obscura and an asphalt plate that hardened where the light was shining. Earlier cases of photography existed maybe as early as 1717, but they were only short lived.

Sound recording with the phonautograph was invented in 1857 in Paris, however it could not be played back at the time -- the first record of human voice made with this technology can nowadays be reconstructed and played back. It wouldn't be until 1878 that people could both record and play back sounds, with Edison's phonograph. A year later, in 1879, Edison also patented the light bulb, even though he didn't invent it -- there were at least 20 people who created a light bulb before him.

Around 1888 the so called war of the currents was taking place; it was a heated battle between companies and inventors over whether alternating or direct current would become the standard for distribution of electric energy. The main actors were Thomas Edison, a famous inventor and a huge capitalist dick rooting for DC, and George Westinghouse, the promoter of AC. Edison and his friends used false claims and even killing of animals to show that AC was wrong and dangerous, however AC was objectively better, e.g. in its efficiency thanks to using high voltage, and so it ended up winning the war. AC was also supported by the famous genius inventor Nikola Tesla who during these times contributed hugely to electric engineering, he e.g. invented an AC motor and the Tesla coil and created a system for wireless transmission of electric power.

Also in 1888 probably the first video that survived until today was recorded by Louis Le Prince in Northern England, with a single lens camera. It is a nearly 2 second silent black and white shot of people walking in a garden.

1895 can roughly be seen as the year of invention of radio, specifically wireless telegraph, by Italian engineer and inventor Guglielmo Marconi. He built on top of work of others such as Hertz and Tesla and created a device with which he was able to wirelessly ring a bell at a distance over 2 km.

On December 17 1903 the Wright brothers famously performed the first controlled flight of a motor airplane which they built, in North Carolina. In repeated attempts they flew as far as 61 meters over just a few seconds.

Around 1915 Albert Einstein, a German physicist, completed his General Theory of Relativity, a groundbreaking physics theory that describes the fundamental nature of space and time and gives so far the best description of the Universe since Newton. This would shake the world of science as well as popular culture and would enable advanced technology including nuclear energy, space satellites, high speed computers and many others.

In 1907 Lee De Forest invented a practically usable vacuum tube, an extremely important component usable in electric devices for example as an amplifier or a switch -- this would enable construction of radios, telephones and later even primitive computers. The invention would lead to the electronic revolution.

In 1924 about 50% of US households own a car.

October 22 1925 saw the invention of the transistor by Julius Lilienfeld (Austria-Hungary), a component that would replace vacuum tubes thanks to its better properties, and which would become probably the most essential part of computers. At the time the invention didn't see much attention, it would only become relevant decades later.

In 1931 Kurt Gödel, a genius mathematician and logician from Austria-Hungary (nowadays Czech Republic), published revolutionary papers with his incompleteness theorems which proved that, simply put, mathematics has fundamental limits and "can't prove everything". This led to Alan Turing's publications in 1936 that nowadays stand as the foundations of computer science -- he introduced a theoretical computer called the Turing machine and with it he proved that computers, no matter how powerful, will never be able to "compute everything". Turing also predicted the importance of computers in the future and has created several algorithms for future computers (such as a chess playing program).

In 1938 Konrad Zuse, a German engineer, constructed Z1, the first working electric mechanical digital partially programmable computer, in his parents' house. It weighed about a ton and wasn't very reliable, but brought huge innovation nevertheless. It was programmed with punched film tapes, however programming was limited, it was NOT Turing complete and there were only 8 instructions. Z1 ran on a frequency of 1 to 4 Hz and most operations took several clock cycles. It had a 16 word memory and worked with floating point numbers. The original computer was destroyed during the war but it was rebuilt and nowadays can be seen in a Berlin museum.

In hacker culture the period between 1943 (start of building of the ENIAC computer) to about 1955-1960 is known as the Stone Age of computers -- as the Jargon File puts it, the age when electromechanical dinosaurs ruled the Earth.

In 1945 the construction of the first electronic digital fully programmable computer was completed at the University of Pennsylvania as a US Army project. It was named ENIAC (Electronic Numerical Integrator and Computer). It used 18000 vacuum tubes and 15000 relays, weighed 27 tons and ran at a frequency of 5 kHz. Punch cards were used to program the computer in its machine language; it was Turing complete, i.e. allowed using branches and loops. ENIAC worked with signed ten digit decimal numbers.

Among hackers the period between 1961 and 1971 is known as the Iron Age of computers. The period spans the time from the first minicomputer (PDP 1) to the first microprocessor (Intel 4004). This would be followed by the so called elder days.

On July 20 1969 first men landed on the Moon (Neil Armstrong and Edwin Aldrin) during the USA Apollo 11 mission. This tremendous achievement is very much attributed to the cold war in which USA and Soviet Union raced in space exploration. The landing was achieved with the help of a relatively simple on-board computer: Apollo Guidance Computer clocked at 2 MHz, had 4 KiB of RAM and about 70 KB ROM. The assembly source code of its software is nowadays available online.

Shortly after, on 29 October 1969, another historical event would happen that could be seen as the start of perhaps the greatest technological revolution yet, the start of the Internet. The first letter, "L", was sent over a long distance via ARPANET, a new experimental computer packet switching network without a central node developed by US defense department (they intended to send "LOGIN" but the system crashed). The network would start to grow and gain new nodes, at first mostly universities. The network would become the Internet.

1st January 1970 is nowadays set as the start of the Unix epoch. It is the date from which Unix time is counted. During this time the Unix operating system, one of the most influential operating systems was being developed at Bell Labs, mainly by Ken Thompson and Dennis Ritchie. Along the way they developed the famous Unix philosophy and also the C programming language, perhaps the most influential programming language in history. Unix and C would shape the technology far into the future, a whole family of operating systems called Unix-like would be developed and regarded as the best operating systems thanks to their minimalist design.

By 1977 ARPANET had about 60 nodes.

August 12 1981 would see the release of the IBM PC, a personal computer based on an open, modular architecture that would immediately be very successful and would become the de facto standard of personal computers. The IBM PC was the first of the kind of desktop computers we have today. It had a 4.77 MHz Intel 8088 CPU, 16 kB of RAM and used 5.25" floppy disks.

In 1983 Richard Stallman announced his GNU project and invented free (as in freedom) software, a kind of software that is freely shared and developed by the people so as to respect the users' freedom. This kind of ethical software stands opposed to the proprietary corporate software, it would lead to creation of some of the most important software and to a whole revolution in software development and its licensing, it would spark the creation of other movements striving for keeping ethics in the information age.

1985: on November 20 the first version of the Windows operating system was sadly released by Microsoft. These systems would become the mainstream desktop operating systems despite their horrible design and they would unfortunately establish so called Windows philosophy that would irreversibly corrupt other mainstream technology. Also in 1985 one of the deadliest software bugs appeared: that in Therac-25, a medical radiotherapy device which fatally overdosed several patients with radiation.

On April 26 1986 the Chernobyl nuclear disaster happened (the worst power plant accident in history) -- in north Ukraine (at the time under USSR) a nuclear power plant exploded, contaminated a huge area with radioactivity and released a toxic radioactive cloud that would spread over Europe -- many would die either directly or indirectly (many years later due to radioactivity poisoning, estimated at many thousands). The Chernobyl area would be sealed in the 30 km radius. It is estimated the area won't be habitable again for several thousands of years.

Around this time Internet is not yet mainstream but it is, along with similar local networks, working and has active communities -- there is no world wide web yet but people are using Usenet and BBSes for "online" discussions with complete strangers and developing early "online cultures".

At the beginning of 1991 Tim Berners-Lee created the World Wide Web, a network of interlinked pages on the Internet. This marks another huge step in the Internet revolution, the Web would become the primary Internet service and the greatest software platform for publishing any kind of information faster and cheaper than ever before. It is what would popularize the Internet and bring it to the masses.

Shortly before the Soviet Union dissolved, on 25 August 1991, Linus Torvalds announced Linux, his project for a completely free as in freedom Unix-like operating system kernel. Linux would become part of GNU and later one of the biggest and most successful software projects in history. It would end up powering Internet servers and supercomputers as well as desktop computers of a great number of users. Linux proved that free software works and can surpass proprietary systems.

After this very recent history follows and it's hard to judge which recent events will be of historical significance much later. The 1990s saw a huge growth of computer power; video games such as Doom led to the development of GPUs and high quality computer graphics along with a wide adoption of computers by common people, which in turn helped the further growth of the Internet. During the 90s we've also seen the rise of the open source movement. Shortly after 2000 Lawrence Lessig founded Creative Commons, an organization that came hand in hand with the free culture movement inspired by the free software movement. At this point over 50% of US households had a computer. Cell phones became a commonly owned item and after about 2005 so called "smart phones" and other "smart" devices replaced them as a universal communication device capable of connecting to the Internet.

Before 2020 we've seen a huge advancement in neural network Artificial Intelligence which will likely be the topic of the future. Quantum computers are being highly researched, with primitive prototypes already existing; this will also likely be very important in the following years. Besides AI there has appeared a great interest in and development of virtual reality, drones, electromobiles, robotic Mars exploration and others. However society and technology have generally seen a decadence after 2010: capitalism has pushed technology to become hostile and highly abusive to users, and extreme bloat makes technology highly inefficient, extremely expensive and unreliable. In addition society is dealing with a lot of serious issues such as global warming and many people are foreseeing a collapse of society.

Recent History Of Technology

TODO: more detailed history since the start of Unix time


Holy War

Holy war is a perpetual passionate argument over usually two possible choices. This separates people into almost religious teams that sometimes argue to death about details such as what name something should be given, very much resembling traditional disagreements between religions and their churches. In holy wars people tend to defend whichever side they stand on to the death and can get emotional when discussing the topic. Some examples of holy wars follow (in brackets we indicate the side taken by LRS):

Things like cats vs dogs or sci-fi vs fantasy may or may not be holy wars; there is some doubt because one can easily like both and/or not be such a diehard fan of one or the other. A subject of a holy war probably has to be something that doesn't allow too much of this.


How To


{ Don't hesitate to contact me. ~drummyfish }

Are you tired of bloat and can't stand shitty software like Windows anymore? Do you want to kill yourself? Do you hate capitalism? Do you also hate the fascist alternatives you're being offered? Do you just want to create a genuinely good bullshitless technology that would help all people? Do you just want to share knowledge freely without censorship? You have come to the right place.

Firstly let us welcome you, no matter who you are, no matter your political opinions, your past and your skills, color or shape of your genitalia, we are glad to have you here. Remember, you don't have to be a programmer to help and enjoy LRS. LRS is a lifestyle, a philosophy. Whether you are a programmer, artist, educator or just someone passing by, you are welcome, you may enjoy our culture and its fruit and if you want, you can help enrich it.


Here are some extremely basic steps to take regarding technology and the technological aspect of LRS:

Would you like to create LRS but don't have enough spare time/money to make this possible? You can check out making living with LRS.

How to Live, Dos and Don'ts

This is a summary of some main guidelines on how an LRS supporter should behave in general so as to stay consistent with LRS philosophy, however it is important that this shouldn't be taken as a set of rules to be blindly followed -- the last thing we want is a religion of brainwashed NPCs who blindly follow orders. One has to understand why these principles are in place and may even potentially modify them.

How To Live




Hardware (HW), as opposed to software, comprises the physical parts of a computer, i.e. the circuits, the mouse, keyboard, the printer etc. Anything you can smash when the machine pisses you off.



WARNING: brain exploding article


{ This article contains unoriginal research with errors and TODOs, read at own risk. ~drummyfish }

Hyperoperations are mathematical operations that are generalizations/continuations of the basic arithmetic operations of addition, multiplication, exponentiation etc. Basically they're like the basic operations like plus but on steroids. When we realize that multiplication is just repeated addition and exponentiation is just repeated multiplication, it is possible to continue in the same spirit and keep inventing new operations by simply saying that a new operation means repeating the previously defined operation, so we define repeated exponentiation, which we call tetration, then we define repeated tetration, which we call pentation, etc.

There are infinitely many hyperoperations as we can go on and on in defining new operations, however we start with what seems to be the simplest operation we can think of: the successor operation (we may call it succ, +1, ++, next, increment, zeration or similarly). In the context of hyperoperations we call this operation hyper0. Successor is a unary operator, i.e. it takes just one number and returns the number immediately after it (suppose we're working with natural numbers). In this the successor is a bit special because all the higher operations we are going to define will be binary (taking two numbers). After successor we define the next operation, addition (hyper1), or a + b, as repeatedly applying the successor operation b times on number a. After this we define multiplication (hyper2), or a * b, as a chain of b copies of the number a which we add together. Similarly we then define exponentiation (hyper3, or raising a to the power of b). Next we define tetration (hyper4, building so called power towers), pentation (hyper5), hexation (hyper6) and so on (heptation, octation, ...).

Indeed the numbers obtained by high order hyperoperations grow quickly as fuck.

An important note is this: there are multiple ways to define the hyperoperations, the most common one seems to be by supposing the right associative evaluation, which is what we're going to implicitly consider from now on. This means that once associativity starts to matter, we will be evaluating the expression chains FROM RIGHT, which may give different results than evaluating them from left (consider e.g. 2^(2^3) != (2^2)^3). The names tetration, pentation etc. are reserved for right associativity operations.

The following is a sum-up of the basic hyperoperations as they are commonly defined (note that many different symbols are used for these operations throughout literature, often e.g. up arrows are used to denote them):

operation               symbol    meaning                                     commutative  associative
successor (hyper0)      succ(a)   next after a
addition (hyper1)       a + b     succ(succ(succ(...a...))), b succs          yes          yes
multiplication (hyper2) a * b     0 + (a + a + a + ...), b as in brackets     yes          yes
exponentiation (hyper3) a ^ b     1 * (a * a * a * ...), b as in brackets     no           no
tetration (hyper4)      a ^^ b    1 * (a ^ (a ^ (a ^ (...), b as in brackets  no           no
pentation (hyper5)      a ^^^ b   1 * (a^^ (a^^ (a^^ (...), b as in brackets  no           no
hexation (hyper6)       a ^^^^ b  1 * (a^^^(a^^^(a^^^(...), b as in brackets  no           no
...                                                                           no more      no more

The following ASCII masterpiece shows the number 2 in the territory of these hyperoperations:

 2    +1    +1    +1    +1    +1    +1    +1  ...     successor
 |        __/   ________/           /       9
 |       /     /     ______________/
 |      /     /     /
 2  +  2  +  2  +  2  +  2  +  2  +  2  +  2  ...     addition
 |     |4       __/                       / 16
 |     |       /     ____________________/
 |     |      /     /
 2  *  2  *  2  *  2  *  2  *  2  *  2  *  2  ...     multiplication
 |     |4     8 __/ 16    32    64    128   256           
 |     |       /     
 |     |      /     
 2  ^ (2  ^ (2  ^ (2  ^ (2  ^ (2  ^ (2  ^ (2  ...     exponentiation
 |     |4     16__/ 65536 ~10^19000
 |     |       /             not sure about arrows here, numbers get too big, TODO
 |     |      /
 2  ^^(2  ^^(2  ^^(2  ^^(2  ^^(2  ^^(2  ^^(2  ...     tetration
 |     |4    |65536 
 |     |     |         not sure about arrows here either
 |     |     |
 2 ^^^(2 ^^^(2 ^^^(2 ^^^(2 ^^^(2 ^^^(2 ^^^(2  ...     pentation
 ...    4     65536                         a lot

Some things generally hold about hyperoperations, for example for any operation f = hyperN where N >= 3 and any number x it is true that f(1,x) = 1 (just as raising 1 to anything gives 1).

Hyperroot is the generalization of square root, i.e. for example for tetration the nth hyperroot of number a is such number x that tetration(x,n) = a.

Left associativity hyperoperations: Alternatively left association can be considered for defining hyperoperations which gives different operations. Here is the same picture as above, but for left associativity -- we see the numbers don't grow THAT quickly (but still pretty quickly).

 2    +1    +1    +1    +1    +1    +1    +1  ...     successor
 |        __/   ________/           /       9
 |       /     /     ______________/
 |      /     /     /
 2  +  2  +  2  +  2  +  2  +  2  +  2  +  2  ...     addition
 |     |4       __/                       / 16
 |     |       /     ____________________/
 |     |      /     /
 2  *  2  *  2  *  2  *  2  *  2  *  2  *  2  ...     multiplication
 |     |4       __/ 16    32    64    128 / 256           
 |     |       /     ____________________/
 |     |      /     /
(2  ^  2) ^  2) ^  2) ^  2) ^  2) ^  2) ^  2  ...     left exponentiation
 |     |4     16__/ 256   65536             ~3*10^38
 |     |       /     ____________________________
 |     |      /     /
(2  ^^ 2) ^^ 2) ^^ 2) ^^ 2) ^^ 2) ^^ 2) ^^ 2  ...     left tetration
 |     |4     256   2^1048576   
 |     |                        TODO: arrows?
 |     |
(2 ^^^ 2)^^^ 2)^^^ 2)^^^ 2)^^^ 2)^^^ 2)^^^ 2  ...     left pentation
 ...    4     ~3*10^38

In fact we may choose to randomly combine left and right associativity to get all kinds of weird hyperoperations. For example we may define tetration with right associativity but then use left associativity for the next operation above it (we could call it e.g. "right-left pentation"), so in fact we get a binary tree of hyperoperations here (as shown by M. Muller in his paper on this topic).


Here's a C implementation of some hyperoperations including a general hyperN operation and an option to set left or right associativity (note that even with 64 bit ints the numbers overflow very quickly here; also, since the operations are implemented literally by their definitions, i.e. as loops, compile with optimizations on, e.g. -O2, otherwise computing the table below will take very long):

#include <stdio.h>
#include <inttypes.h>
#include <stdint.h>

#define ASSOC_R 1 // right associativity?

// hyper0
uint64_t succ(uint64_t a)
{
  return a + 1;
}

// hyper1
uint64_t add(uint64_t a, uint64_t b)
{
  for (uint64_t i = 0; i < b; ++i)
    a = succ(a);

  return a;
  // return a + b
}

// hyper2
uint64_t multiply(uint64_t a, uint64_t b)
{
  uint64_t result = 0;

  for (uint64_t i = 0; i < b; ++i)
    result += a;

  return result;
  // return a * b
}

// hyper(n + 1) for n > 2
uint64_t nextOperation(uint64_t a, uint64_t b, uint64_t (*operation)(uint64_t,uint64_t))
{
  if (b == 0)
    return 1;

  uint64_t result = a;

  for (uint64_t i = 0; i < b - 1; ++i) // fold the chain of b "a"s
    result = ASSOC_R ?
      operation(a,result) : operation(result,a);

  return result;
}

// hyper3
uint64_t exponentiate(uint64_t a, uint64_t b)
{
  return nextOperation(a,b,multiply);
}

// hyper4
uint64_t tetrate(uint64_t a, uint64_t b)
{
  return nextOperation(a,b,exponentiate);
}

// hyper5
uint64_t pentate(uint64_t a, uint64_t b)
{
  return nextOperation(a,b,tetrate);
}

// hyper6
uint64_t hexate(uint64_t a, uint64_t b)
{
  return nextOperation(a,b,pentate);
}

// hyper(n)
uint64_t hyperN(uint64_t a, uint64_t b, uint8_t n)
{
  switch (n)
  {
    case 0: return succ(a); break;
    case 1: return add(a,b); break;
    case 2: return multiply(a,b); break;
    case 3: return exponentiate(a,b); break;
    default: break;
  }

  if (b == 0)
    return 1;

  uint64_t result = a;

  for (uint64_t i = 0; i < b - 1; ++i)
    result = ASSOC_R ?
      hyperN(a,result,n - 1) : hyperN(result,a,n - 1);

  return result;
}

int main(void)
{
  for (uint64_t a = 0; a < 4; ++a) // header row
    printf("\t%" PRIu64,a);

  putchar('\n');

  for (uint64_t b = 0; b < 4; ++b)
  {
    printf("%" PRIu64 "\t",b);

    for (uint64_t a = 0; a < 4; ++a)
      printf("%" PRIu64 "\t",tetrate(a,b));

    putchar('\n');
  }

  return 0;
}
In this form the code prints a table for right associative tetration (rows are b, columns are a):

        0       1       2       3
0       1       1       1       1
1       0       1       2       3
2       1       1       4       27
3       0       1       16      7625597484987



Information wants to be free.

Information is knowledge that can be used for making decisions. Information is interpreted data, i.e. while data itself may not give us any information, e.g. if it's encrypted and we don't know the key or if we simply don't know what the data signifies or implies, information emerges once we make sense of the data. Information is contained e.g. in books, on the Internet or in nature, and we access it through our senses. Computers can be seen as machines for processing information and since the computer revolution information has become the focus of our society; we often encounter terms such as information technology, informatics, information war, information age etc. Information theory is a scientific field studying information.

Information wants to be free, i.e. it is free naturally unless we decide to limit its spread with shit like intellectual property laws. What does "free" mean? It is the miraculous property of information that allows us to duplicate it basically without any cost. Once we have certain information, we may share it with others without having to give up our own knowledge of the information. A file on a computer can be copied to another computer without deleting the file on the original computer. This is unlike with physical products which if we give to someone, we lose them ourselves. Imagine if you could make a piece of bread and then duplicate it infinitely for the whole world -- information works like this! We see it as a crime to want to restrict such a miracle. We may also very nicely store information in our heads. For all this information is beautiful. It is sometimes discussed whether information is created or discovered -- if a mathematician comes up with an equation, is it his creation or simply his discovery of something that belongs to the nature and that has always been there? This question isn't so important because whatever terms we use, we at LRS decide to create, spread and freely share information without limiting it in any way, i.e. neither discovery nor invention should give rise to any kind of property.

In computer science the basic unit of information amount is 1 bit (for binary digit), also known as shannon. It represents a choice of two possible options, for example an answer to a yes/no question (with each answer being equally likely), or one of two binary digits: 0 or 1. From this we derive higher units such as bytes (8 bits), kilobytes (1000 bytes) etc. Other units of information include nat or hart. With enough bits we can encode any information including text, sounds and images. For this we invent various formats and encodings with different properties: some encodings may for example contain redundancy to ensure the encoded information is preserved even if the data is partially lost. Some encodings may try to hide the contained information (see encryption, obfuscation, steganography). For processing information we create algorithms which we usually execute with computers. We may store information (contained in data) in physical media such as books, computer memory or computer storage media such as CDs, or even with traditional potentially analog media such as photographs.

Keep in mind that the amount of physically present bits doesn't have to equal the amount of information because, as mentioned above, data that takes N bits may e.g. utilize redundancy and so store less information than would theoretically be possible with N bits. It may happen that the stored bits are correlated for some reason or that different binary values convey the same information (e.g. in some number encodings there are two values for number zero: positive and negative). All this means that the amount of information we receive in N bit data may be lower (but never higher) than N bits.

Information is related to information entropy (also Shannon entropy, similar to but distinct from the concept of thermodynamic entropy in physics); they're both measured in same units (usually bits) but entropy measures a kind of "uncertainty" or average information received from a certain event when we know its probability distribution -- in a sense information and entropy can be seen as opposites: before we receive information we lack the information but there exists entropy, once we receive the information there is information but no entropy.

In signal theory information is also often used as a synonym for signal, however a distinction can be made: signal is the function that carries information. Here we also encounter the term noise which means an unwanted signal mixed in with the desired signal which may make it harder to extract the information carried by the signal, or even obscure some or all of the information so that it can't be retrieved.

According to the theory of relativity information can never travel faster than light -- even if some things may move faster than light, such as a shadow, so called "spooky action at a distance" (usually associated with quantum entanglement) or even matter due to the expansion of space, by our best knowledge we can never use this to transfer information faster than light. For this it seems our communication technology will always be burdened by lag, no matter how sophisticated.


"Intellectual Property"

"Intellectual property" (IP, not to be confused with IP address) is a toxic capitalist idea that says that people should be able to own information (such as ideas, presentation style, songs or text) and that it should be treated in ways very similar to physical property. For example patents are one type of intellectual property which allow an inventor of some idea to own that idea and be able to limit its use and charge money to people using that idea, or prevent people from using that idea altogether. Copyright is probably the most harmful of IP today, and along with patents the most relevant one in the area of technology. However, IP encompasses many other subtypes of this kind of "property" such as trademarks, trade dress, plant varieties etc. IP is an arbitrarily invented grant of monopoly on information, i.e. something that is otherwise naturally free.

Most people with brain oppose this idea, see e.g. http://harmful.cat-v.org/economics/intellectual_property/.

IP exists to benefit corporations, it artificially limits the natural freedom of information (see artificial scarcity) and tries to eliminate freedom and competition, it fuels consumerism (for example a company can force deletion of old version of its program in order to force users to buy the new version), it helps keep malicious features in programs (by forbidding any study and modifications) and forces reinventing wheels which is extremely energy and resource wasting. Without IP, everyone would be able to study, share, improve and remix and combine existing technology and art.

Many people protest against the idea of IP -- either wanting to abandon the idea completely, as we do, or at least arguing for a great relaxation of the insanely strict and aggressive forms that destroy our society. Movements such as free software and free culture have come into existence in protest of IP laws. Of course, capitalists don't give a shit. It can be expected the IP cancer will reach even more extreme forms very soon, for example it will become perpetual and encompass such things as mere thought (thoughts will be monitored and people will be charged for thinking about ideas owned by corporations).

It must be noted that as of 2020 it is not possible to fully avoid the IP shenanigans. Even though we can eliminate most of the harmful stuff (for now) with licenses and waivers, there are many things that may be impossible to address or that pose considerable dangers, e.g. trademarks, personal rights or patent troll attacks. In some countries (US) it is illegal to make free programs that try to circumvent DRM. Some countries make it explicitly impossible to e.g. waive copyright. It is impossible to safely check whether your creation violates someone else's IP. There exists shit such as moral rights that may apply even where copyright doesn't.



This is a great answer to anything, if someone tells you something you don't understand or something you think is shit and you don't know what to say, you just say "interesting".

All natural numbers are interesting: there is a fun proof by contradiction of this. Suppose there exists a set of uninteresting numbers which is a subset of natural numbers; then the smallest of these numbers is interesting by being the smallest uninteresting number -- we've arrived at contradiction, therefore a set of uninteresting numbers cannot exist.

TODO: just list some interesting shit here



Internet is the grand, decentralized global network of interconnected computer networks that allows advanced, cheap, practically instantaneous intercommunication of people and computers and sharing of large amounts of data and information. Over just a few decades since its birth in 1970s it changed the society tremendously, shifted it to the information age and stands as possibly the greatest technological invention of our society. It is a platform for many services and applications such as the web, e-mail, internet of things, torrents, phone calls, video streaming, multiplayer games etc. Of course, once Internet became accessible to normal people and has become the largest public forum on the planet, it has also become the biggest dump of retards in history.

Internet is built on top of protocols (such as IP, HTTP or SMTP), standards, organizations (such as ICANN, IANA or W3C) and infrastructure (undersea cables, satellites, routers, ...) that all together work to create a great network based on packet switching, i.e. a method of transferring digital data by breaking them down into small packets which independently travel to their destination (contrast this to circuit switching). The key feature of the Internet is its decentralization, i.e. the attribute of having no central node or authority so that it cannot easily be destroyed or taken control over -- this is by design, the Internet evolved from ARPANET, a project of the US defense department. Nevertheless there are parties constantly trying to seize at least partial control of the Internet such as governments (e.g. China and its Great Firewall, EU with its "anti-pedophile" chat monitoring laws etc.) and corporations (by creating centralized services such as social networks). Some are warning of possible de-globalization of the Internet that some parties are trying to carry out, which would turn the Internet into so called splinternet.

Access to the Internet is offered by ISPs (internet service providers) but it's pretty easy to connect to the Internet even for free, e.g. via free wifis in public places, or in libraries. By 2020 more than half of world's population had access to the Internet -- most people in the first world have practically constant, unlimited access to it via their smartphones, and even in poor countries capitalism makes these devices along with Internet access cheap as people constantly carrying around devices that display ads and spy on them is what allows their easy exploitation.

The following are some stats about the Internet as of 2022: there are over 5 billion users world-wide (more than half of them from Asia and mostly young people) and over 50 billion individual devices connected, about 2 billion websites (over 60% in English) on the web, hundreds of billions of emails are sent every day, average connection speed is 24 Mbps, there are over 370 million registered domain names (most popular TLD is .com), and Google performs about 7 billion web searches daily (handling over 90% of all web searches).


see also history


See Also


Interplanetary Internet

Interplanetary Internet is at this time still a hypothetical extension of the Internet to multiple planets. As mankind is getting closer to starting to live on other planets and bodies such as Mars and the Moon, we have to start thinking about the challenges of creating a communication network between all of them. The greatest challenge is posed by the vast distances that increase the communication delay (which arises due to the limited speed of light) and make errors such as packet loss much more painful. Two-way communication (i.e. request-response) with the Moon and Mars can take over 2 seconds and up to 40 minutes respectively. Also things like planet motions, eclipses etc. pose problems to solve.

We can see that e.g. real time Earth-Mars communication (e.g. chat or videocalls) is physically impossible, so not only do we have to create new network protocols that minimize the there-and-back communication (things such as handshakes are out of the question) and implement great redundancy for reliable recovery from loss of data traveling through space, we also need to design new user interfaces and communication paradigms, i.e. we probably need to create new messaging software for "interplanetary chat" that will for example show the earliest time at which the sender can expect an answer etc. Interesting shit to think about.

{ TFW no Xonotic deathmatches with our Moon friends :( ~drummyfish }

For things like the Web, each planet would likely want to have its own "subweb" (distinguished e.g. by TLDs) and caches of other planets' webs for quick access. This way a man on Mars wouldn't have to wait 40 minutes for downloading a webpage from the Earth web but could immediately access that webpage's slightly delayed version, which is of course much better.

Research into this has already been ongoing for some time. InterPlaNet is a protocol developed by NASA and others to be the basis for interplanetary Internet.

See Also



Interpolation (inter = between, polire = to polish) means computing (usually a gradual) transition between some specified values, i.e. creating additional intermediate points between some already existing points. For example if we want to change a screen pixel from one color to another in a gradual manner, we use some interpolation method to compute a number of intermediate colors which we then display in rapid succession; we say we interpolate between the two colors. Interpolation is a very basic mathematical tool that's commonly encountered almost everywhere, not just in programming: some uses include drawing a graph between measured data points, estimating function values in unknown regions, creating smooth animations, drawing vector curves, digital to analog conversion, enlarging pictures, blending transitions in videos and so on. Interpolation can be used to generalize, e.g. if we have a mathematical function that's only defined for whole numbers (such as factorial or Fibonacci sequence), we may use interpolation to extend that function to all real numbers. Interpolation can also be used as a method of approximation (consider e.g. a game that runs at 60 FPS to look smooth but internally only computes its physics at 30 FPS and interpolates every other frame so as to increase performance). All in all interpolation is one of the most important things to learn.

The opposite of interpolation is extrapolation, an operation that extends, i.e. creates points OUTSIDE a given interval (while interpolation creates points INSIDE the interval). Both interpolation and extrapolation are similar to regression which tries to find a function of specified form that best fits given data (unlike interpolation it usually isn't required to hit the data points exactly but rather e.g. minimize some kind of distance to these points).

There are many methods of interpolation which differ in aspects such as complexity, number of dimensions, type and properties of the mathematical curve/surface (polynomial degree, continuity/smoothness of derivatives, ...) or number of points required for the computation (some methods require knowledge of more than two points).

      .----B           _B          _.B        _-'''B-.
      |              .'          .'         .'
      |           _-'           /          :
      |         .'            .'          /
 A----'       A'           A-'        _.A'

  nearest       linear       cosine         cubic

A few common 1D interpolation methods.

The base case of interpolation takes place in one dimension (imagine e.g. interpolating sound volume, a single number parameter). Here interpolation can be seen as a function that takes as its parameters the two values to interpolate between, A and B, and an interpolation parameter t, which takes values from 0 to 1 -- this parameter says the percentage position between the two values, i.e. for t = 0 the function returns A, for t = 1 it returns B and for other values of t it returns some intermediate value (note that this value may in certain cases be outside the A-B interval, e.g. with cubic interpolation). The function can optionally take additional parameters, e.g. cubic interpolation requires to also specify slopes at the points A and B. So the function signature in C may look e.g. like

float interpolate(float a, float b, float t);

Many times we apply our interpolation not just to two points but to many points, by segments, i.e. we apply the interpolation between each two neighboring points (a segment) in a series of many points to create a longer curve through all the points. Here we are usually interested in how the segments transition into each other, i.e. what the whole curve looks like at the locations of the points.

Nearest neighbor is probably the simplest interpolation (so simple that it's sometimes not even called an interpolation, even though it technically is). This method simply returns the closest value, i.e. either A (for t < 0.5) or B (otherwise). This creates kind of sharp steps between the points, the function is not continuous, i.e. the transition between the points is not gradual but simply jumps from one value to the other at one point.

Linear interpolation (so called lerp) is probably the second simplest interpolation which steps from the first point towards the second in a constant step, creating a straight line between them. This is simple and good enough for many things, the function is continuous but not smooth, i.e. there are no "jumps" but there may be "sharp turns" at the points, the curve may look like a "saw".

Cosine interpolation uses part of the cosine function to create a continuous and smooth line between the points. Its advantage over linear interpolation is the smoothness, i.e. there aren't "sharp turns" at the points, just as with the more advanced cubic interpolation. Against cubic interpolation it has the advantage of still requiring only the two interval points (A and B), however at the price of always having the same horizontal slope at each point, which may look weird in some situations (e.g. multiple points lying on the same sloped line will result in a curve that looks like smooth steps).

Cubic interpolation can be considered a bit more advanced, it uses a polynomial of degree 3 and creates a nice smooth curve through multiple points but requires knowledge of one additional point on each side of the interpolated interval (which may create slight issues with the first and last point of the sequence of values). The extra points are needed to determine the slope at each endpoint so that the curve continues in the direction of the point behind it.

The above mentioned methods can be generalized to more dimensions (the number of dimensions is equal to the number of interpolation parameters) -- we encounter this a lot e.g. in computer graphics when upscaling textures (sometimes called texture filtering). 2D nearest neighbor interpolation creates "blocky" images in which pixels simply "get bigger" but stay sharp squares if we upscale the texture. Linear interpolation in 2D is called bilinear interpolation and looks visually much better than nearest neighbor; bicubic interpolation is a generalization of cubic interpolation to 2D and is yet smoother than bilinear interpolation.

See Also


International Obfuscated C Code Contest

The International Obfuscated C Code Contest (IOCCC for short) is an annual online contest in making the most creatively obfuscated programs in C. It's kind of a "just for fun" thing but similarly to esoteric languages there's an element of art and clever hacking here that carries great value. While the productivity freaks will argue this is just a waste of time, the true programmer appreciates the depth of knowledge and creative thinking needed to develop a beautifully obfuscated program. The contest has been running since 1984 and was started by Landon Curt Noll and Larry Bassel.

Unfortunately some shit is flying around IOCCC too, for example confusing licensing -- having a CC-BY-SA license in the website footer while the text explicitly prohibits commercial use, WTF? Also the team started to use Microshit's GitHub. They also allow the latest capitalist C standards, but hey, this is a contest focused on ugly C, so perhaps it makes sense.

Hacking the rules of the contest is also encouraged and there is an extra award for "worst abuse of the rules".

Some common ideas employed in the programs include:

And let us also mention a few winning entries:


Welcome to the Island!

This is the freedom island where we live! Feel free to build your house on any free spot. Planting trees and making landscape works are allowed too.

                          __X/    '-X_
    '-~-.           ____./  i   X     '-__       
               __.-'   /'  XX   i         \_      '-~-.
        ___,--' x  x_/'    Xi     O         '-_
    ___/       __-''   X  X(      i    x       '-._
 _-'                   i  i          [T]  xX  x    ''-._
(                          O      :      ixx            \
 '-                                \_                    )
   ''-__                             '.  ____      ____-'
        ''--___    [D]      ; x        \/    ''---' 
               ''--__         ;xX       \__
                     \          iX         ''-__
   '-~-.             /           i  O           '--__
                    |               i                \
           '-~-.     \__                              )
'-~-.                    ''--___                  ____/

D: drummyfish's house

T: The Temple, it has nice view of the sea and we go meditate here, it's a nice walk.


Jargon File

Jargon File (also Hacker's Dictionary) is a computer hacker dictionary/compendium that's been written and updated by a number of prominent hackers, such as Richard Stallman and Eric S. Raymond, since 1975. It is a greatly important part of hacker culture and has also partly inspired this very wiki.

It informally states that it's in the public domain and some people have successfully published it commercially, however there is no standard waiver or license -- maybe because such waivers didn't really exist at the time it was started -- and so we have to suppose it is NOT formally free as in freedom. Nevertheless it is freely accessible e.g. at Project Gutenberg and no one will bother you if you share it around... we just wouldn't recommend treating it as true public domain.

It is pretty nicely written with great amount of humor and good old political incorrectness, you can e.g. find the definition of terms such as rape and clit mouse. Some other nice terms include notwork (non-functioning network), Internet Exploiter, binary four (giving a finger in binary) or Maggotbox (Macintosh). At the beginning the book gives some theory about how the hacker terms are formed (overgeneralization, comparatives etc.).


Java

Unfortunately 3 billion devices run Java.

Java (not to be confused with JavaScript) is a highly bloated and inefficient "programming language" that's sadly kind of popular. It is compiled to bytecode and therefore "platform independent" (as long as the platform has a lot of resources to waste on running the Java virtual machine). Some of the features of Java include bloat, slow loading, slow running, supporting capitalism, forced and unavoidable object obsession and the necessity to create a billion files to write even a simple program.

Avoid this shit.

{ I've met retards who seriously think Java is more portable than C lol. I wanna suicide myself. ~drummyfish }


JavaScript

JavaScript (not to be confused with completely unrelated Java language) is a bloated programming language used mainly on the web.


John Carmack

John Carmack is a brilliant, legendary programmer who's contributed mostly to computer graphics and stands behind the engines of such games as Doom, Wolfenstein and Quake. He helped pioneer real-time 3D graphics and created many hacks and algorithms (e.g. the reverse shadow volume algorithm). He is also a rocket engineer.

 | _  |

ASCII art of John Carmack

He's kind of the ridiculously stereotypical nerd with glasses who just from the way he talks gives off the impression of someone with high functioning autism. You can just sense his IQ is over 9000. Some nice shit about him can be read in the (sadly proprietary) book Masters of Doom.

Carmack is a proponent of FOSS and has released his old game engines as such which gave rise to an enormous amount of modifications, forked engines and even new games (e.g. Freedoom and Xonotic). He's probably leaning more towards the dark side of the source: the open-source. In 2021 Carmack tweeted that he would have rather licensed his old Id engines under a permissive BSD license than the GPL, which is good.

In 2013 he sadly sold his soul to Facebook to work on VR (in a Facebook owned company Oculus).


Jokes

Here you can shitpost your jokes that are somehow related to this wiki's topic. Just watch out for copyright (no copy-pasting jokes from other sites)!

Please do NOT post lame "big-bang-theory"/9gag jokes like sudo make sandwich or there are 10 types of people.

{ Many of the jokes are original, some are shamelessly pulled from other sites and reworded. I don't believe copyright can apply if the expression of a joke is different, ideas can't be copyrighted. Also the exact origins of jokes are difficult to track so it's probably a kind of folklore. ~drummyfish }

See Also


Julia Set


| Julia Set for -0.34 - 0.63i       :.                              |
|                                ..':. ..                           |
|                                '':.:'''      ..       ..          |
|                                 :':::.. ''  ::.    .. :.'         |
|                                  '::::. :: :::. .   :.'': .       |
|                              ......':::.::.:: ...::.:::.::.. .    |
|                              :::::::::':'.':.::::::::':.::''::..  |
|                   .             '::::'':':::':'':::'  ::''  '     |
|                   ':.       .   .. ..::'::':::.   '   :'          |
|                 . :: :'     ::..::::::: ::: ':::..     '          |
|                   :'::::   '.:::::::::'.::::'  ''                 |
|                    .:::::' ':::::::::. ''::::'.                   |
|                  :. '::::'.::::::::::.  '::':.'                   |
|          . .   '':::. ::: ::::::::'::'    .::::                   |
|         :':.  ... ':::.:':::''  '  '        ''.                   |
|        ..::  .::::::...':.::::::.:                                |
|   :::...' '.::::::::'.: .:.:'::::'':                              |
|    '' :. : .:''':' :::'::':::.   ' '                              |
|         '::'': '' '::: ::'':::::                                  |
|          ::       ':.  '' '':::.:                                 |
|         ' '       '        ::.:.'.'                               |
|                              ::'                                  |
|                              '                                    |


The following code is a simple C program that renders a given Julia set into the terminal (it's for demonstrative purposes; it isn't efficient nor does it do any antialiasing).

#include <stdio.h>

#define ROWS 30
#define COLS 70
#define SET_X -0.36 // Julia set parameter
#define SET_Y -0.62 // Julia set parameter
#define FROM_X -1.5
#define FROM_Y 1.0
#define STEP (3.0 / ((double) COLS))

unsigned int julia(double x, double y)
{
  double cx = x, cy = y, tmp;

  for (int i = 0; i < 1000; ++i)
  {
    tmp = cx * cx - cy * cy + SET_X;
    cy = 2 * cx * cy + SET_Y;
    cx = tmp;

    if (cx * cx + cy * cy > 10000000000) // diverged, point isn't in the set
      return 0;
  }

  return 1;
}

int main(void)
{
  double cx, cy = FROM_Y;

  for (int y = 0; y < ROWS; ++y)
  {
    cx = FROM_X;

    for (int x = 0; x < COLS; ++x)
    {
      unsigned int point = // two vertical subsamples per character
        julia(cx,cy) + (julia(cx,cy + STEP) * 2);

      putchar(point == 3 ? ':' : (point == 2 ? '\'' :
        (point == 1 ? '.' : ' ')));

      cx += STEP;
    }

    putchar('\n');
    cy -= 2 * STEP;
  }

  return 0;
}

Just Werks

"Just werks" (for "just works" if that's somehow not clear) is a phrase used by noobs to justify using a piece of technology while completely neglecting any other deeper and/or long term consequences. A noob doesn't think about technology further than how it can immediately perform some task for him.

This phrase is widely used on 4chan/g, it probably originated there.

The "just werks" philosophy completely ignores questions such as:

See Also


Kek

Kek means lol. It comes from World of Warcraft where the two opposing factions (Horde and Alliance) were made to speak mutually unintelligible languages so as to prevent enemy players from communicating; when someone from Horde typed "lol", an Alliance player would see him say "kek". The other way around (i.e. Alliance speaking to Horde) would render "lol" as "bur", however kek became the popular one. On the Internet this further mutated to forms like kik, kekw, topkek etc. Nowadays in some places such as 4chan kek seems to be used even more than lol, it's the newer, "cooler" way of saying lol.

See Also


Kids These Days



KISS

KISS (Keep It Simple, Stupid!) is a design philosophy that favors simplicity, solutions that are as simple as possible to achieve